    4 Key Database Monitoring Activities Every DBA Should Know

    By: Spotlight Cloud

    DBAs play a vital role within an organization. As custodians of data, they are responsible for managing all aspects of database performance, including high availability, fast query processing time, and risk mitigation and disaster recovery. Additionally, DBAs are responsible for the business goal of maintaining the organization’s databases with an eye toward ROI and cost savings.

    With all of the different hats they wear, DBAs have to work efficiently, and effective time management is their best friend. The best way to achieve efficiency is to focus first on the key activities that will help keep the databases performing optimally.

    Here are four database monitoring activities that should top every DBA’s “must-know” list.

    How (and Why) to Adjust the Default Settings in SQL Server

    Many DBAs run SQL Server as-is, straight out of the box. However, the default configurations aren’t always the best choice from a security or performance standpoint. Every organization’s databases are different and fulfill different business needs, so it only makes sense that not every database is configured the same way. 

    Depending on your specific database needs and preferences, there are several default SQL Server settings you may want to change (a consolidated T-SQL sketch follows the list):

    • Fill factor: If you build an index without specifying the fill factor value, the default value is 0. This means that the page will fill to capacity, and any insertions, deletions, or updates may cause excessive page splits and fragmentation.

      There is no universally “correct” fill factor value, but 80-90 is normally a safe choice. This value range allows 80-90 percent of the page to fill, leaving 10-20 percent free. 
    • Cost threshold for parallelism: The cost threshold for parallelism is the estimated query cost at which the SQL Server engine starts considering parallel plans for your queries. The default value is 5. That number is a unitless cost estimate, not seconds, and it is quite low for modern hardware, so many cheap queries end up running in parallel, which can negatively affect performance.

      Start with a setting of 20 and adjust as needed based on CXPACKET waits and CPU usage.
    • Database file autogrow: Autogrowth is a process that occurs when the SQL Server engine increases the size of a database file because it has run out of space. In older versions, the growth increment defaults to 1 MB for data files and 10 percent for transaction log files; SQL Server 2016 and later default to 64 MB for both.

      Every database grows at a different rate, so estimate how much you think the database will grow and set the value accordingly; a fixed increment is generally more predictable than a percentage.
    • Database recovery model: The default recovery model is FULL out of the box, but that isn’t efficient for all databases.

      Change the setting to SIMPLE for databases that aren’t mission-critical, and leave it at FULL only for production databases that need point-in-time recovery. Keep in mind that FULL requires regular transaction log backups to keep the log file from growing unchecked.
    • Max server memory: The default value is 2,147,483,647 MB, which is effectively unlimited: SQL Server will take as much memory as the operating system can give it, leaving little or none for the OS itself.

      Adjust the setting to maximize the amount of memory available to the SQL Server process, but leave a bit for the OS to use if needed. 
    • Max degree of parallelism (MAXDOP): MAXDOP controls how many processors are used for the execution of a query in a parallel plan. The default is 0, which means SQL Server gets to determine how many processors it can use. If you leave the cost threshold for parallelism at the default value of 5, you may end up using all the CPUs for every query.

      The ideal MAXDOP setting will vary based on your specific system, but Microsoft offers suggestions in its documentation.
    • Backup compression: The default setting for this feature is OFF. However, backup compression speeds up database backup operations and produces smaller backup files at the cost of some extra CPU during the backup, so you may want to turn it ON.
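
    Taken together, here is a minimal T-SQL sketch of the adjustments above. Every object name, logical file name, and value below is an illustrative placeholder rather than a recommendation for your environment:

        -- Fill factor: rebuild an index leaving 15 percent free space per page
        -- (index and table names are hypothetical)
        ALTER INDEX IX_YourIndex ON dbo.YourTable
        REBUILD WITH (FILLFACTOR = 85);

        -- The following instance-level settings are advanced options
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        -- Cost threshold for parallelism: raise from the default of 5
        EXEC sp_configure 'cost threshold for parallelism', 20;

        -- Max server memory (MB): e.g., leave ~6 GB for the OS on a 32 GB server
        EXEC sp_configure 'max server memory (MB)', 26624;

        -- Max degree of parallelism: cap parallel queries at, e.g., 4 CPUs
        EXEC sp_configure 'max degree of parallelism', 4;

        -- Backup compression: make compressed backups the default
        EXEC sp_configure 'backup compression default', 1;
        RECONFIGURE;

        -- Autogrowth: grow in fixed increments instead of tiny steps or percentages
        ALTER DATABASE YourDatabase
        MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 256MB);
        ALTER DATABASE YourDatabase
        MODIFY FILE (NAME = YourDatabase_Log, FILEGROWTH = 128MB);

        -- Recovery model: SIMPLE for a non-critical database
        ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;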

    One final tip for adjusting SQL Server settings from the default values: Always test the system thoroughly after changing any settings to ensure no problems were introduced inadvertently. 

    How to Eliminate SQL Server Bottlenecks

    SQL Server bottlenecks are a common source of performance issues, including SQL Server hogging the processor, long query execution times, and excessive disk I/O.

    There are many non-bottleneck reasons your database may experience these performance issues, but if the problem does stem from SQL Server bottlenecks, there are three main areas that are likely to be impacted: memory, I/O, and CPU. 

    Memory bottlenecks result from insufficient memory resources or SQL Server activities using too much available memory. Look out for longer query execution times, excessive I/O, out-of-memory messages in the application log, and frequent system crashes.
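
    As a quick first check, you can ask SQL Server how much physical memory it sees and how the operating system classifies the current memory state. A minimal sketch using the sys.dm_os_wait_stats companion DMV sys.dm_os_sys_memory:

        -- Physical memory totals and the OS memory state as SQL Server sees them
        SELECT
            total_physical_memory_kb / 1024     AS total_mb,
            available_physical_memory_kb / 1024 AS available_mb,
            system_memory_state_desc
        FROM sys.dm_os_sys_memory;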

    I/O bottlenecks occur when the disk subsystem can’t keep up with regular database operations, such as heavy tempDB activity. Look out for long response times, application slowdowns, and frequent task time-outs.
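
    Per-file I/O latency is a good way to confirm this. Here is a sketch using sys.dm_io_virtual_file_stats; sustained averages well above what your storage should deliver suggest an I/O bottleneck:

        -- Average read/write latency per database file since the last restart
        SELECT
            DB_NAME(vfs.database_id) AS database_name,
            mf.physical_name,
            vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
            vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
            ON mf.database_id = vfs.database_id
           AND mf.file_id = vfs.file_id
        ORDER BY avg_read_ms DESC;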

    CPU bottlenecks are caused by insufficient hardware resources. Watch your database monitoring data for sustained, excessive CPU usage by SQL Server.
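
    Whichever resource you suspect, the server’s accumulated wait statistics are a useful triage tool: PAGEIOLATCH waits point to I/O, RESOURCE_SEMAPHORE to memory, and SOS_SCHEDULER_YIELD or CXPACKET to CPU pressure. A minimal sketch (the exclusion list of benign waits is illustrative, not exhaustive):

        -- Top waits since the last service restart, excluding common benign waits
        SELECT TOP (10)
            wait_type,
            wait_time_ms / 1000.0 AS wait_time_seconds,
            waiting_tasks_count
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                                N'XE_TIMER_EVENT', N'BROKER_TASK_STOP',
                                N'REQUEST_FOR_DEADLOCK_SEARCH')
        ORDER BY wait_time_ms DESC;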

    How to Prevent Growth of tempDB

    TempDB is a temporary workspace in SQL Server instances used to create and hold intermediate and temporary objects. TempDB is one of the most active resources in a SQL Server environment, so it is important to monitor and control excessive tempDB growth.

    TempDB sees constant activity within an instance because it stores user objects, internal objects, and version stores. Excessive growth of tempDB can cause performance issues, so it is important to track large queries, temporary tables, and table variables that consume large amounts of tempDB disk space.
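
    One way to find the sessions responsible is to query the tempDB space-usage DMVs; a minimal sketch (page counts are in 8 KB pages):

        -- Top sessions by tempdb pages allocated for user and internal objects
        SELECT TOP (10)
            session_id,
            user_objects_alloc_page_count     AS user_object_pages,
            internal_objects_alloc_page_count AS internal_object_pages
        FROM sys.dm_db_session_space_usage
        ORDER BY user_objects_alloc_page_count
               + internal_objects_alloc_page_count DESC;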

    To optimize tempDB size and growth, Microsoft recommends the following best practices:

    • Set the recovery model of tempDB to SIMPLE 
    • Allow tempDB files to automatically grow as required 
    • Set the file growth increment to a reasonable size so the tempDB database files don’t grow by too small an amount 
    • Preallocate space for all tempDB files by setting the file size to a value large enough to accommodate the typical workload in the environment 
    • Make each data file the same size

    You can adjust the size and growth parameters of the tempDB data files using SQL Server Management Studio, or with T-SQL, as shown below.
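
    For example, assuming the default logical file names (tempdev is the first tempDB data file on a default install), with sizes that are illustrative only:

        -- Presize the first tempdb data file and give it a fixed growth increment
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 512MB);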

    How to Calculate Total Cost of Ownership

    Though you may not spend a ton of time thinking about your company’s budget, you better believe the CFO does. When it is time to ask for new performance monitoring technology, it is smart to come prepared with hard data to back up your request.

    One of the most influential pieces of data you can present is the potential total cost of ownership (TCO) of the new technology versus your current solution. Reducing TCO is a common goal for DBAs looking to replace their current database performance monitoring tool, so look beyond direct costs such as the purchase price to indirect costs such as infrastructure and storage, and resource costs such as maintenance and training.

    To determine TCO for the new tool, plug your specifics into a TCO calculator and see what the cost savings are, if any.
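
    For example (with purely hypothetical numbers): a replacement tool with a $10,000 purchase price, $2,000 per year in maintenance, and $3,000 in one-time training costs works out to a three-year TCO of $10,000 + (3 × $2,000) + $3,000 = $19,000. Run the same calculation for your current solution, and the difference is your projected savings.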
