What if you went to all the trouble of implementing or upgrading to a new SQL Server for a better experience for your customers — and they complained that things were actually worse?
Wouldn’t you be aggravated if you jumped through all those business and technical hoops to get better SQL Server performance, and then didn’t?
Maybe you should have conducted some SQL Server performance testing before you went into production. Then you’d have known whether your performance was going to improve, stay the same or, worst of all, decline.
Even in the worst case, at least you wouldn’t have been unpleasantly surprised.
The SQL Server performance bridge we all cross
Whether you’re switching from another database to SQL Server or upgrading from an older version of SQL Server to a newer one, you have to cross the performance bridge eventually.
“Is it really going to run better than before?” your boss asked.
“Oh, sure,” you said. “With data virtualization in SQL Server 2019 we can run queries without moving or replicating data. And there’s automatic tuning, plus Intelligent Query Processing to speed up queries. The data’s gonna FLY out of servers.” You were so confident that you capitalized “FLY.”
So how could you have tested your performance better?
4 ways to test performance
Brent Ozar, SQL Server maven extraordinaire, tells you how to check performance on a new SQL Server:
- the easy way, comparing before-and-after times on a full backup and CHECKDB
- the easy but wrong way, with a synthetic workload and TPC-C in HammerDB
- the harder way, testing individual queries with sp_BlitzCache, his plan cache analysis script
- the really hard way, running the same queries you execute in production
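The “easy way” can be sketched in a few lines of T-SQL. This is only a rough sketch of the idea, not Brent’s exact procedure: `YourDatabase` is a placeholder, and backing up to `NUL` discards the output so you’re timing read and CPU throughput rather than filling a drive. Run it on the old server, note the elapsed times and MB/sec figures it prints, then repeat on the new one.

```sql
-- Hedged sketch: time a full backup and CHECKDB before and after the migration.
-- YourDatabase is a placeholder; substitute a real database name.
SET STATISTICS TIME ON;

BACKUP DATABASE YourDatabase
    TO DISK = N'NUL'          -- discard the output; we only care about throughput
    WITH COPY_ONLY, CHECKSUM; -- COPY_ONLY avoids disturbing the backup chain

DBCC CHECKDB (YourDatabase) WITH NO_INFOMSGS;

SET STATISTICS TIME OFF;
```

The backup message itself reports pages processed and MB/sec, which gives you a simple, repeatable before-and-after number.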
But as Brent makes clear, finding that your performance has dropped by X percent is only the start; you still have to figure out where the hit is coming from.
That means looking for trends in performance data over entire weeks and months, not over just the last few hours.
The more data points and history you have, the better the picture you can assemble of what’s going on inside, under and around your SQL Server environment.
And the more compute you can throw at all that data, the more quickly you can analyze it.
Sounds like a job for the cloud
Yep. It is a job for the cloud, the only place you can scale out far enough to collect and analyze all that data for a broad, deep picture of how SQL Server is consuming resources. And how it has been consuming those resources over time. For example:
- At certain times of the day or month, frequently run queries have to read from disk instead of from the buffer cache, which is the in-memory copy of recently used database pages. Do you need to allocate more memory to your buffer cache? Can you spare any more memory for it?
- The same goes for page life expectancy. If it’s low, it’s probably because SQL Server is evicting pages from the buffer cache too frequently and having to read them back from storage instead of from memory. That hampers performance.
- Is maximum I/O wait time getting too high? How is it trending? That’s a hint that the I/O device may be saturated.
- How long is the processor queue now? How long is it usually? Too many threads constantly waiting can indicate processor congestion.
- Is the CPU running against the limits your server can handle? What if you can’t scale the server up anymore? If you’re sure that your queries are well tuned and your other system resources are adequate, you may need to add a socket on physical hardware or vCPUs to your VMs.
- Fragmented indexes are slow indexes, but you don’t know they’re the culprit until you check fragmentation levels. Seeing the effects of reorganizing and rebuilding over time can help you put a reliable index maintenance process in place.
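Several of the checks above map directly onto SQL Server’s dynamic management views. The snippet below is a sketch of that idea, assuming you have VIEW SERVER STATE permission; the 30 percent fragmentation threshold is illustrative, not prescriptive, and each of these numbers is most useful as a trend over time rather than a single snapshot.

```sql
-- Page life expectancy, in seconds (low values mean the buffer cache is churning)
SELECT object_name, counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';

-- Cumulative I/O stall per database file (watch how it grows, not one reading)
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;

-- Index fragmentation in the current database (LIMITED mode keeps the scan cheap)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30;
```

Collecting these on a schedule and keeping the history is exactly the kind of job that’s tedious by hand and easy for a monitoring service.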
Finding issues like those in all your on-premises and cloud databases gets easier and faster when you can monitor from the cloud. Best of all, you can monitor from anyplace in the known galaxy when all the compute, storage and analysis is in the cloud and you’re only a few clicks away in any browser.
Start testing your SQL Server performance in the cloud
At Quest, nobody needs to tell us twice that our customers want cloud-based tools, whether for cloud migration, performance monitoring or Office 365. We’re making more of our products available as cloud offerings because technologies and markets are pulling them there.
So, how do you run performance testing on your new or upgraded SQL Servers?
- I run CHECKDB.
- I run HammerDB.
- I run Brent Ozar’s tools.
- I test with production workloads.
- I roll the dice and wait until users complain.
- I have people who do that for me.
- I use Spotlight Cloud from Quest. You should, too.