To help professionals address the myriad challenges involved with performance optimization, SQL Server experts Patrick O’Keeffe and Richard Douglas co-authored the whitepaper “Ten Tips for Optimizing SQL Server Performance”. While there is no definitive “10 best” list for SQL Server performance tips, you can’t go wrong by starting with these suggestions, which offer DBAs and other professionals detailed information on specific issues and ways to address them in their own environments.
In an earlier post, we outlined the general process for baselining and benchmarking methodology, including some guidelines for setting goals and establishing a norm from which to measure. In this post, we’ll discuss data management and analysis approaches.
How much data is needed for a baseline?
The amount of data required to establish a SQL database performance baseline depends on the extent to which your load varies over time. Talk to system administrators, end users and application administrators. They will generally have a good feel for what the usage patterns are. You should gather enough data to cover off-peak, average and peak periods.
It is important to gauge both how much the load varies and how frequently. Systems with predictable patterns require less data. The more variation, the smaller the measurement interval and the longer you will need to measure to develop a reliable baseline. To take our climbing analogy a little further: the longer the length of rope you examine, the better your chance of spotting the variations. How critical the system is, and the impact of its failure, will also affect how much scrutiny it must undergo and how reliable the sample must be.
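One way to turn "how much the load varies" into a concrete sampling decision is to compute the coefficient of variation (standard deviation divided by the mean) of some trial readings and let that drive the measurement interval. The sketch below illustrates the idea; the CV thresholds and the returned intervals are illustrative assumptions, not fixed rules.

```python
import statistics

def suggest_interval_minutes(samples, stable_cv=0.1, volatile_cv=0.3):
    """Suggest a measurement interval from how much the load varies.

    `samples` is a list of trial load readings (e.g. batch requests/sec).
    The CV thresholds and interval choices are illustrative assumptions.
    """
    cv = statistics.stdev(samples) / statistics.mean(samples)
    if cv < stable_cv:      # predictable load: coarse sampling suffices
        return 15
    if cv < volatile_cv:    # moderate variation
        return 5
    return 1                # highly variable: sample every minute

print(suggest_interval_minutes([100, 105, 98, 102]))   # steady load → 15
print(suggest_interval_minutes([10, 100, 10, 100]))    # volatile load → 1
```

The same check can be rerun against each day's data to confirm the chosen interval is still appropriate as usage patterns evolve.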
Storing the data
The more parameters you track and the shorter the measurement interval, the larger the data set you will collect. It may seem obvious, but you do need to consider the capacity that will be required to store your measurement data. Once you have some data, it should be pretty straightforward to extrapolate how much this repository will grow over time. If you are going to monitor over an extended period, think about aggregating historical data at intervals to prevent the repository from growing too large.
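Both steps above are simple arithmetic once you know your row size and sampling rate. This sketch shows one way to extrapolate repository growth and to aggregate fine-grained history into hourly averages; the parameter counts and bytes-per-row figure are illustrative assumptions about your own repository schema.

```python
from collections import defaultdict

def repository_growth_mb(parameters, samples_per_day, bytes_per_row, days):
    """Extrapolate repository size, assuming one row per parameter per sample."""
    rows = parameters * samples_per_day * days
    return rows * bytes_per_row / 1024 ** 2

def aggregate_hourly(rows):
    """Collapse fine-grained history into hourly averages to cap growth.

    `rows` is a list of (hour, value) pairs; returns {hour: mean value}.
    """
    buckets = defaultdict(list)
    for hour, value in rows:
        buckets[hour].append(value)
    return {h: sum(v) / len(v) for h, v in buckets.items()}

# 50 counters sampled every minute for 90 days at ~100 bytes/row:
print(round(repository_growth_mb(50, 1440, 100, 90)))  # → 618 (MB)
```

Running the extrapolation before you start monitoring tells you whether minute-level sampling is affordable, or whether you should plan an aggregation job from day one.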
For performance reasons, your measurement repository should not reside in the same place as the database you are monitoring.
Limit the number of changes you make at one time
Try to limit the number of changes you make between each benchmark. Construct your modifications to test one particular hypothesis. This will allow you to meticulously rule each improvement candidate in or out. When you do home in on a solution, you will understand exactly why you are seeing the change in behavior, and this will often reveal a series of additional potential improvement options.
Analyzing the data
Once you have made changes to your system, you will want to determine whether they have had the desired effect. To achieve this, repeat the measurements you took for your original baseline, over a similarly representative time scale. You can then compare the two baselines to confirm that your changes produced the improvement you expected, and to quantify how large that improvement was.
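The comparison itself can be as simple as computing the percent change for each metric between the two baseline runs. A minimal sketch, with hypothetical metric names and values chosen purely for illustration:

```python
def compare_baselines(before, after):
    """Percent change per metric between two baseline runs.

    Positive values mean the metric increased; whether that is good
    depends on the metric (throughput up = good, CPU up = bad).
    """
    return {m: round(100.0 * (after[m] - before[m]) / before[m], 1)
            for m in before if m in after}

before = {"avg_cpu_pct": 62.0, "batch_requests_sec": 1800.0}
after  = {"avg_cpu_pct": 48.0, "batch_requests_sec": 2100.0}
print(compare_baselines(before, after))
# → {'avg_cpu_pct': -22.6, 'batch_requests_sec': 16.7}
```

Comparing like-for-like periods (peak against peak, off-peak against off-peak) keeps the percent changes meaningful rather than an artifact of when you measured.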
For more detail, read the entire whitepaper, “Ten Tips for Optimizing SQL Server Performance”.