To help professionals address the myriad challenges involved in performance optimization, SQL Server experts Patrick O’Keeffe and Richard Douglas co-authored the “Ten Tips for Optimizing SQL Server Performance” white paper earlier this year. While there is no definitive “10 best” list for SQL Server performance tips, you can’t go wrong by starting with these suggestions, which offer DBAs and others detailed information on specific issues and ways to apply it in their own environments.
In this post, we’ll discuss how changing server settings can provide a more stable environment. Changing settings within a product to make it more stable may sound counterintuitive, but in this case it really does work. As a DBA, your job is to ensure a consistent level of SQL Server performance for your users when they request data from their applications. Without changing these settings, you may experience scenarios that degrade performance for your users without warning. These options can easily be found in sys.configurations, which lists the server-level configuration options along with extra information about each one. The is_dynamic column in sys.configurations shows whether a change to an option takes effect immediately (after running RECONFIGURE) or whether the SQL Server instance must be restarted first. To make a change, you call the sp_configure stored procedure with the relevant parameters.
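As a quick sketch, inspecting the available options and making a change might look like this (run against a test instance first; the option shown here is just an illustration):

```sql
-- List server-level options. is_dynamic = 1 means a change takes
-- effect after RECONFIGURE with no instance restart required.
SELECT name, value, value_in_use, is_dynamic, description
FROM sys.configurations
ORDER BY name;

-- Change an option with sp_configure, then apply it with RECONFIGURE.
-- Many options are hidden until 'show advanced options' is enabled.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
```

Comparing the value and value_in_use columns afterwards tells you whether a pending change has actually been applied.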
The Min and Max memory settings can guarantee a certain level of performance.
Suppose we have an Active/Active cluster (or indeed a single host with multiple instances). We can make configuration changes that guarantee we meet our SLAs in the event of a failover where both instances reside on the same physical box.
In this scenario, we would set both the Min and Max memory settings on each instance to ensure that the physical host has enough memory to cope with both, without the operating system constantly and aggressively trimming the working set of one instance to make room for the other. A similar configuration change can dedicate certain processors to each instance in order to guarantee a certain level of CPU performance.
It is important to note that setting the maximum memory is not only suitable for instances on a SQL Server cluster, but also for any instance that shares a host with another application. If SQL Server’s memory usage grows too high, the operating system may aggressively trim its working set in order to leave itself or other applications room to operate.
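The memory caps described above are set per instance. A minimal sketch, assuming a failover host sized so that two instances fit side by side (the 4096/12288 MB values are purely illustrative and must be sized for your own hardware):

```sql
-- Both memory options are advanced, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Guarantee a floor and enforce a ceiling for this instance so that,
-- after a failover, two instances plus the OS still fit on one host.
EXEC sp_configure 'min server memory (MB)', 4096;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```

Both options are dynamic (is_dynamic = 1 in sys.configurations), so the change takes effect without a restart, although SQL Server releases memory down to a new, lower maximum only gradually.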
• SQL Server 2008—In SQL Server 2008 R2 and before, the Min and Max memory settings restrict only the amount of memory that the buffer pool uses, more specifically only single 8KB page allocations. This means that if you run processes outside of the buffer pool (such as extended stored procedures, CLR, or other components such as Integration Services, Reporting Services or Analysis Services), you need to reduce this value further to leave them room.
• SQL Server 2012—SQL Server 2012 changes things slightly, as there is now a central memory manager. This memory manager incorporates multi-page allocations, such as large data pages and cached plans larger than 8KB, and this memory space now also includes some CLR functionality.
There are no options that directly aid performance, but there is one option that can help indirectly: Backup compression default. This option makes backups compressed by default. Whilst compression may consume extra CPU cycles, in general fewer CPU cycles are used overall compared to an uncompressed backup, since less data is written to disk. Depending on your I/O architecture, setting this option may also reduce I/O contention.
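Enabling this default is a one-line change; as a sketch (the database name below is hypothetical):

```sql
-- Make all backups compressed by default at the instance level.
-- This option is dynamic, so no restart is needed.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- An individual backup can still opt out of the default:
-- BACKUP DATABASE SalesDB TO DISK = N'X:\Backups\SalesDB.bak'
--     WITH NO_COMPRESSION;
```

Because the setting is only a default, existing backup jobs keep working unchanged and simply start producing compressed backups.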
We’ll offer more tips in future posts. If you want to read about it all now, along with nine other great tips for SQL Server performance, click here.