I met a man once who told me how he wanted to be a butcher but his hands weren’t steady enough. So, he decided to become a dentist instead.
The man, of course, was my dentist. And he said those words as he started up his drill. Good times.
I often think about that dentist from many years ago whenever I see and hear stories about failed migrations to Cloud platforms such as Windows Azure SQL Database (WASD). They have a system working just fine where it is (butcher shop) and they decide to lift and load it to a place that requires a bit more precision (dentistry).
So, why the move? What problem are they trying to solve by migrating? As the momentum for Cloud deployments increases you are going to find many posts like this one that make it enticing for any PHB to come along and utter the words “We should just put the database in the cloud to save money.” [Bonus points for anyone out there reading this who replies back with "Yeah, and we can color it mauve, too."]
Can you pick out someone in a crowd and know if they are a butcher or a dentist? Probably not without some investigative work.
So why not do the same for your databases sitting inside of SQL Server?
Can you look at a particular application or database and quickly tell whether it is a good candidate for migrating to the Cloud?
If you can’t, then you won’t be able to explain the costs and benefits of moving to the Cloud.
Yesterday I took part in a webinar with Karen López (blog | @datachick) and hosted by Confio Software. The title was “Expert Design Tips for SQL Server and WASD“. This isn’t the first time I’ve discussed Windows Azure, having written before about Windows Azure SQL Database strategies.
In that post I compiled a list of eight strategies that you must consider when moving to WASD. What I didn’t cover in that post was the design considerations you need for WASD deployments. We talked a bit during the webinar about design and performance considerations, which in turn made me think that I needed to write up a quick post to summarize my thoughts.
First, a stroll down memory lane. Years ago, all of our servers were physical. As virtualization became the new sexy we would spend time evaluating if a server was a candidate for virtualization. I even built a flowchart to help everyone understand the process as we went through our migration efforts from physical to virtual. I know…I know…I was drawing lines and boxes BEFORE they were cool.
Anyway, the point here is that we had a process in place for determining if a server was a candidate. Some big name SAN vendors had tools that did their best to track utilization of systems in an effort to virtualize. Our results did not always agree. So what did I do?
I turned to the data.
For example, we had a server with 8 logical cores that was averaging about 50% CPU utilization. The software flagged that box and said it was “too big to be virtualized”. I laughed because I knew the box wasn’t being fully used and should have been an easy thing to virtualize. I was able to identify one query (yes, just one!) that was responsible for about 45% of the CPU consumption. After tuning, the box showed 5% utilization, and all of a sudden it was a candidate.
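If I were hunting for that one query today, I'd start with the plan cache. This is a sketch of the kind of query I'd use, built on the standard sys.dm_exec_query_stats and sys.dm_exec_sql_text DMVs; the TOP value and the millisecond conversion are my own choices, not anything specific to that old server:

```sql
-- Rank cached statements by total CPU (worker time) to find the hogs.
-- Worker time is reported in microseconds, so divide by 1000 for ms.
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    (qs.total_worker_time / qs.execution_count) / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

If one statement dominates the total CPU column the way my 45% query did, tuning that single statement can change the whole conversation about whether the box is a candidate.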
Today it hit me: why am I not doing that for WASD? In other words, what would make an on-premises database a good candidate for WASD?
As luck would have it, I have some thoughts on the subject. Before I start drawing more boxes and lines let’s talk a bit about what applications work best for WASD. There is a whitepaper available that helps explain the scenarios, titled “Is Your Application a Good Fit for Windows Azure“. I liked the paper as it lays out the factors you need to consider for current systems. The problem is that the paper doesn’t help you evaluate the actual workloads your systems are handling right now. I could read that whitepaper and say to myself “Self, the RedFiveWidget application meets all those requirements, let’s move it to WASD right away!” But that may not be true, because at no point are we given details on what to examine with regards to current workloads.
Even Conor’s whitepaper “Windows Azure SQL Database and SQL Server — Performance and Scalability Compared and Contrasted“, which has a ton of great information, doesn’t give such details. Between the two whitepapers I get the sense that WASD is currently designed for a true OLTP system, and not for a data warehouse. While things may change at some point in the future, I believe it is a safe assumption that the systems you will want to consider moving to WASD first are those with workloads heavier on the OLTP side of the fence and not so much OLAP.
So, how do you determine that your system is OLTP? I think it is easier to identify systems that are showing signs that they are OLAP. I do my best to look for the warning signs first and then evaluate as necessary.
Here’s where I would start looking: at the read/write mix and the heaviest queries in the current workload.
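One hedged way to put a number on that OLTP-versus-OLAP question is to look at index usage stats. Lots of seeks and lookups suggest OLTP-style point access; a workload dominated by large scans smells more like OLAP. This is a sketch using the standard sys.dm_db_index_usage_stats DMV, and the seek/scan heuristic is my own rule of thumb, not an official threshold:

```sql
-- Compare seeks+lookups (OLTP-ish access) to scans (OLAP-ish access)
-- for the current database. Counters reset when the instance restarts,
-- so check uptime before trusting the ratio.
SELECT
    DB_NAME(ius.database_id) AS database_name,
    SUM(ius.user_seeks + ius.user_lookups) AS seeks_and_lookups,
    SUM(ius.user_scans) AS scans
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
GROUP BY ius.database_id;
```

A lopsided scan count isn't a verdict by itself, but it is exactly the kind of warning sign that tells me to dig deeper before calling the database a candidate.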
Much as I did with that server that had 50% CPU utilization, sometimes you just need to tune a handful of queries in order to get your database considered a candidate. During the webinar we walked through an example: a query that had accumulated both CXPACKET and PAGEIOLATCH waits. Is that enough information to rule out the database as a candidate? Absolutely not! But it does tell me that I want to know more about that query, how often it is executed, and what tuning opportunities exist before blindly deploying it to WASD.
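To see which waits dominate on your own instance, the standard sys.dm_os_wait_stats DMV is the place to look. This is a sketch, not a complete benign-wait filter; the short exclusion list below is illustrative, and in practice you'd filter out many more idle wait types:

```sql
-- Top waits server-wide since the last restart (or stats clear).
-- CXPACKET or PAGEIOLATCH_* near the top would prompt the same
-- questions I asked about the webinar example.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```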
I’ll look to compile a more comprehensive list over time and perhaps even draw more lines and boxes, too.
The post How To Avoid Butchering Your Windows Azure SQL Database Designs appeared first on SQLRockstar - Thomas LaRock.