To help professionals address the myriad challenges involved in performance optimization, SQL Server experts Patrick O’Keeffe and Richard Douglas co-authored the whitepaper “Ten Tips for Optimizing SQL Server Performance”. While there is no definitive “10 best” list of SQL Server performance tips, you can’t go wrong by starting with these suggestions, which give DBAs and others detailed information on specific issues and practical ways to address them in their environments.

In an earlier post, we outlined the general process and methodology for baselining and benchmarking, including some guidelines for setting goals and establishing a norm from which to measure. In this post, we’ll discuss approaches to managing and analyzing the data.

How much data is needed for a baseline?

The amount of data required to establish a SQL database performance baseline depends on the extent to which your load varies over time. Talk to system administrators, end users and application administrators. They will generally have a good feel for what the usage patterns are. You should gather enough data to cover off-peak, average and peak periods.

It is important to gauge how much the load varies and how frequently. Systems with predictable patterns will require less data. The more variation there is, the smaller the measurement interval should be and the longer you will need to measure to develop a reliable baseline. To take our climbing analogy a little further: the longer the length of rope you examine, the better the chance that you will spot the variations. How critical the system is, and the impact of its failure, will also affect how much scrutiny it must undergo and how reliable the sample must be.
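The idea that more variation demands a finer measurement interval can be sketched as a simple heuristic. The following sketch takes a trial run of load samples (here, hypothetical batch-requests-per-second readings) and suggests an interval based on the coefficient of variation; the thresholds and intervals are illustrative assumptions, not recommendations:

```python
from statistics import mean, stdev

def suggest_interval_seconds(samples, base_interval=300):
    """Suggest a measurement interval from a trial run of load samples:
    the more the load varies, the shorter the interval should be.
    Thresholds below are illustrative, not prescriptive."""
    cv = stdev(samples) / mean(samples)  # coefficient of variation
    if cv < 0.1:                  # very steady load: a coarse interval suffices
        return base_interval * 4
    if cv < 0.5:                  # moderate variation: keep the default
        return base_interval
    return base_interval // 4     # spiky load: sample much more often

# A steady workload tolerates a coarser interval than a spiky one.
steady = [100, 102, 98, 101, 99]
spiky = [20, 400, 15, 380, 30]
print(suggest_interval_seconds(steady))  # 1200
print(suggest_interval_seconds(spiky))   # 75
```

The same logic extends naturally to per-counter intervals, since some counters (such as user connections) tend to vary far more than others.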

Storing the data

The more parameters you track and the shorter the measurement interval, the larger the data set you will collect. It may seem obvious, but you do need to consider the capacity required to store your measurement data. Once you have some data, it should be straightforward to extrapolate how much this repository will grow over time. If you are going to monitor over an extended period, think about aggregating historical data at intervals to prevent the repository from growing too large.
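One minimal way to aggregate historical data is to roll raw samples up into hourly averages once they are old enough that fine detail no longer matters. The sketch below assumes samples stored as (timestamp, value) pairs; the hourly granularity and the sample values are illustrative:

```python
from collections import defaultdict
from datetime import datetime

def aggregate_hourly(samples):
    """Roll raw (timestamp, value) samples up into hourly averages so the
    measurement repository stops growing linearly with sampling frequency."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # Truncate each timestamp to the top of its hour.
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    # Average every bucket, keeping one row per hour.
    return {hour: sum(v) / len(v) for hour, v in sorted(buckets.items())}

raw = [
    (datetime(2024, 1, 1, 9, 5), 40.0),
    (datetime(2024, 1, 1, 9, 35), 60.0),
    (datetime(2024, 1, 1, 10, 15), 80.0),
]
print(aggregate_hourly(raw))  # averages: 9:00 -> 50.0, 10:00 -> 80.0
```

Averaging discards extremes, so if peaks matter to you, keep the minimum and maximum per bucket as well as the mean.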

For performance reasons, your measurement repository should not reside on the same server as the database you are monitoring.

Limit the number of changes you make at one time

Try to limit the number of changes you make between each benchmark. Construct your modifications to test one particular hypothesis. This will allow you to methodically rule each improvement candidate in or out. When you do home in on a solution, you will understand exactly why you are seeing the change in behavior, and this will often reveal a series of additional potential improvements.

Analyzing the data

Once you have made changes to your system, you will want to determine whether they have had the desired effect. To achieve this, repeat the measurements you took for your original baseline, over a similarly representative time scale. You can then compare the two baselines to:

  • Determine whether your changes had the desired effect - When you tweak a configuration setting, optimize an index or change some SQL code, the baseline enables you to tell whether that change had the desired effect. If you receive a complaint about slower performance, you can say for sure whether the claim is accurate from the perspective of the database.
  • Avoid jumping to conclusions - The most frequent mistake junior DBAs make is to jump to conclusions too early. Often you will see someone jump for joy at an immediate, observable increase in performance after making one or more changes. They deploy to production and rush to send emails declaring the issue resolved. But the celebration can be short-lived when the same issue re-emerges some time later, or an unknown side effect causes another problem. Quite often the result is a state less desirable than the original. Once you think you have found the answer to an issue, test it and benchmark the results against your baseline. This is the only reliable way to be sure you have made progress.
  • Determine whether a change introduced any unexpected side effects - A baseline also enables you to see objectively whether a change affected a counter or measure that you hadn’t expected it to affect.
  • Anticipate problems before they happen - Using a baseline, you can establish accurate performance norms under typical load conditions. This enables you to predict whether and when you will hit problems in the future, based on how resource consumption is trending today or on projected workloads for future scenarios. For example, in capacity planning, extrapolating from current typical resource consumption per connected user lets you predict when your systems will hit user connection bottlenecks.
  • Troubleshoot problems more effectively - Ever spent several long days and nights fire-fighting a performance issue with your database, only to find it actually had nothing to do with the database itself? Establishing a baseline makes it far more straightforward to eliminate your database instance and target the real culprit. For example, suppose memory consumption has suddenly spiked. You notice the number of connections increasing sharply, well above your baseline. A quick call to the application administrator confirms that a new module has been deployed in the eStore. It doesn’t take long to establish that the new junior developer is writing code that doesn’t release database connections as it should. I bet you can think of many more scenarios just like this.
  • Rule out what isn’t responsible - Ruling out the things that are not responsible for a problem can save a great deal of time by clearing the clutter and providing a laser focus on exactly what is causing the issue. There are many cases where comparing system counters to SQL Server counters quickly rules the database in or out of an issue. Once the usual suspects are ruled out, you can begin searching for significant deviations from the baseline, collect related indicators together and drill into the root cause.
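The comparison step described above can be automated in a few lines: take the averaged counters from two baselines and flag anything that moved by more than some tolerance. This is a sketch, not a definitive tool; the counter names and the 10% tolerance are illustrative assumptions:

```python
def compare_baselines(before, after, tolerance=0.10):
    """Compare two baselines (counter name -> average value) and report
    counters whose value moved by more than `tolerance` (10% by default)."""
    deviations = {}
    for counter, old in before.items():
        new = after.get(counter)
        if new is None or old == 0:
            continue  # counter missing in the new baseline, or no basis to compare
        change = (new - old) / old
        if abs(change) > tolerance:
            deviations[counter] = round(change * 100, 1)  # percent change
    return deviations

# Illustrative averages from two baseline periods.
before = {"batch_requests_per_sec": 950, "page_life_expectancy": 4800,
          "user_connections": 210}
after = {"batch_requests_per_sec": 980, "page_life_expectancy": 1200,
         "user_connections": 470}
print(compare_baselines(before, after))
# {'page_life_expectancy': -75.0, 'user_connections': 123.8}
```

In this hypothetical output, batch requests barely moved, so the workload is comparable, while the sharp jump in user connections alongside collapsing page life expectancy is exactly the kind of correlated deviation worth drilling into first.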

To learn more, read the entire whitepaper, “Ten Tips for Optimizing SQL Server Performance”.