
      5 SQL Syntax and Query Principles for Better Database Monitoring

      By: Spotlight Cloud

      Adopting good SQL syntax and query-writing practices will improve your database monitoring efficacy and, as a result, improve your overall database performance. There are several tried-and-true ways to improve database performance through syntax and query adjustments. We picked 5 to focus on.

      1. CASE Expressions

      Here are a few best practices for getting the best performance from CASE expressions:

      • CASE expressions improve performance by moving old procedural code into declarative code that can then be optimized. For basic CASE expressions, use the <simple case> form, which is the most easily optimized. For extended CASE expressions, hashing and other techniques will be the most useful.
      • Because WHEN clauses are tested in the order they are written, it is best to arrange your clauses so the most likely ones are listed first. This will cut down on time spent on unnecessary tests.
      • Most optimizers also factor out redundant tests by considering the order of execution. As soon as a WHEN clause executes, there is no need to test that clause again. However, many optimizers aren’t good at reading nested CASE expressions, so it is best to flatten them out to a single level.
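
      The points above can be sketched with a simple CASE expression. The table and column names here are hypothetical, and the assumption is that 'shipped' is the most common status:

```sql
-- Hypothetical orders table. The <simple case> form compares one test
-- operand against a list of values, which is the easiest for the
-- optimizer to handle; the likeliest value is listed first so most rows
-- match on the first test.
SELECT order_id,
       CASE order_status                      -- simple CASE: one operand
           WHEN 'shipped'   THEN 'Complete'   -- most common value first
           WHEN 'pending'   THEN 'In progress'
           WHEN 'cancelled' THEN 'Closed'
           ELSE 'Unknown'
       END AS status_label
FROM orders;
```

      Note that this is flat, a single level of WHEN clauses; nesting another CASE inside a WHEN arm is what many optimizers handle poorly.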

      2. Double-Dipping and the Window Clause 

      In SQL, “double-dipping” means visiting the same table more than once, which slows your query way down. Ideally, try to get all your jobs done in a single table visit. But if that isn’t possible, there are a few ways to mitigate impact on performance.

      Using temporary tables built from the same base table and then joined together really isn’t the best way to avoid double-dipping. A better solution is to put the shared logic in a VIEW and let your SQL engine’s optimizer materialize it and share the results as a working table. 

      If you’re limited to a less-than-stellar optimizer, use temp tables and do the work manually. 

      Although the above methods are perfectly acceptable, the best way to avoid double-dipping is to use the window clause because the subclauses operate similarly to subquery features. For example: 

      • The PARTITION BY clause is like a local GROUP BY that splits up the table into groupings. 
      • The ORDER BY clause imposes a sorted order. 
      • The window frame (ROWS or RANGE) is like a local WHERE clause.  
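
      The three subclauses above can be seen together in one query. This sketch assumes a hypothetical sales table; the point is that a single visit to the table produces a per-region running total that would otherwise require a self-join, i.e., double-dipping:

```sql
-- Hypothetical sales table. One pass computes a per-region running
-- total; without the window clause this would need a second visit to
-- the same table.
SELECT region,
       sale_date,
       amount,
       SUM(amount) OVER (
           PARTITION BY region           -- like a local GROUP BY
           ORDER BY sale_date            -- imposes the sorted order
           ROWS BETWEEN UNBOUNDED PRECEDING
                    AND CURRENT ROW      -- window frame: a local WHERE
       ) AS running_total
FROM sales;
```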

      You can optimize window functions by following these rules:

      • In the index, sort on the columns of the PARTITION BY clause first, then on the columns used in the ORDER BY clause.
      • Include any other column referenced in the query as included columns of the index.
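
      For a window query that partitions by region and orders by sale date, those two rules translate into an index like the following sketch (names are assumed; INCLUDE is SQL Server syntax for non-key included columns):

```sql
-- PARTITION BY column first, ORDER BY column second, and any other
-- column the query references as an included (non-key) column.
CREATE INDEX ix_sales_region_date
    ON sales (region, sale_date)
    INCLUDE (amount);
```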

      3. Use Stored Procedures

      Object-relational mappers (ORMs) cause numerous performance issues when they generate their own SQL. If you have to use an ORM, write your own stored procedures so performance doesn’t suffer.

      There are many ways using stored procedures improves performance, including:

      • They push less data across the network, so transactions are faster. 
      • They enable shorter calls. 
      • They are easier to trace in profiling tools. 
      • Because a stored procedure is an actual object in your database, it is easy to gather performance statistics, which makes it easier to find performance issues. 
      • They increase execution plan reuse.
      • They make it easier to deal with edge cases and add auditing or change-locking behavior. 
      • Because parameters are passed as data rather than concatenated into the statement, they are far less vulnerable to SQL injection attacks. 
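
      A minimal T-SQL sketch shows several of these benefits at once (the table and names are assumed for illustration):

```sql
-- The parameter is passed as data, not concatenated into the statement,
-- so the procedure resists SQL injection, and its execution plan can be
-- cached and reused across calls.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;   -- suppress row-count chatter on the network
    SELECT order_id, order_date, total_amount
    FROM dbo.orders
    WHERE customer_id = @CustomerId;
END;
```

      The call from the application is then a short `EXEC dbo.GetOrdersByCustomer @CustomerId = 42;` rather than a full generated statement.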

      4. DELETE and UPDATE in Batches

      Deleting or updating a lot of data in a large table uses a lot of resources because each statement runs as a single transaction. This is a performance killer because if an error occurs while the transaction is running and you need to stop it, the system has to roll back the entire transaction. Rolling back large amounts of data consumes lots of time and blocks other transactions. 

      You can avoid this performance issue by doing updates and deletes in small batches. When you run these transactions in batches, if the transaction gets killed, the rollback is much smaller. Rolling back a small amount of data doesn’t take very long, and other transactions can do work while the batch is committing to disk.
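
      One common way to batch a large delete in T-SQL is a loop like the following sketch (the table, column, and batch size are assumptions, not recommendations):

```sql
-- Each iteration deletes at most 5000 rows in its own short
-- transaction, so any rollback is small and other work can run
-- between batches.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.audit_log
    WHERE logged_at < DATEADD(YEAR, -1, GETDATE());
    SET @rows = @@ROWCOUNT;   -- stop once no rows remain to delete
END;
```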

      5. Do Not Retrieve More Columns Than You Need

      One of the major causes of poorly performing queries is retrieving extraneous columns. Avoid practices that commonly result in returning an excessive number of columns. Keep the following in mind:

      • Unnecessary use of “SELECT *” in a query will likely return more columns than you need. This wastes resources and holds locks that block other users. It is especially important not to indiscriminately use “SELECT *” in columnar SQL databases.
      • Doing a cut-and-paste to reuse code can result in extraneous columns.
      • When you invoke a VIEW, you may end up with nested queries that carry along extra columns. Un-nest poorly performing queries to see which columns are actually used and get rid of over-size columns you don’t need. A good indication that you have a problem is a WHERE clause with additional conditions and a FROM clause with additional outer joins.
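
      The fix is simply to name the columns the caller actually uses. A sketch, with a hypothetical customers table:

```sql
-- Instead of SELECT *, list only the columns this query needs; the
-- engine reads and returns less data and can satisfy the query from a
-- narrower index.
SELECT customer_id, customer_name, last_order_date
FROM dbo.customers
WHERE last_order_date >= '2020-01-01';
```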

      These are just a few of the ways taking a conscientious approach to SQL syntax and query-writing can improve database monitoring. Don’t be afraid to try some others. A high-performing database is key to meeting business goals in every organization.


      April 28, 2020 7:45:00 AM PDT

      Written by Spotlight Cloud

      Understand your SQL Server infrastructure health at a glance by monitoring SQL Server's relational engine, Analysis Services, Windows, VMware ESX and SQL Azure with Spotlight.
