It may be too cold to think about road trips right now, but let me take you back to early summer. The temperature is just right to hit the road, roll the windows down, and just go somewhere. Most of the time you have an idea of where you’re headed. But even on those trips you might decide to take a turn that takes a bit longer, just to see what you can find.
Sometimes server analysis is like that. When you’re actively monitoring your server, looking for issues before they become critical, you can address them at a less frenzied pace. It’s the way I prefer to work, but it’s not always possible. When it is possible, you may start by looking at one alert telling you your % procedure cache hit ratio has fallen below your current threshold.
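If you want to check that ratio yourself rather than wait for the alert, you can pull it from the performance counter DMV. This is a rough sketch; the `object_name` prefix varies with your instance name, and the counter is stored as a value plus a base that you divide yourself.

```sql
-- Plan cache hit ratio from sys.dm_os_performance_counters.
-- The counter is split into a value and a base; divide to get the percentage.
SELECT  a.instance_name,
        100.0 * a.cntr_value / NULLIF(b.cntr_value, 0) AS cache_hit_ratio_pct
FROM    sys.dm_os_performance_counters AS a
JOIN    sys.dm_os_performance_counters AS b
        ON  a.object_name   = b.object_name
        AND a.instance_name = b.instance_name
WHERE   a.counter_name = 'Cache Hit Ratio'
  AND   b.counter_name = 'Cache Hit Ratio Base'
  AND   a.object_name LIKE '%Plan Cache%';
```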
You know to look for a lot of dynamic queries being run, or maybe a BA doing some digging for new intelligence. Either way, you know it’s not critical, so you have some freedom to poke around a bit more while you’re finding out what’s up.
To me it’s like seeing an interesting sign on the side of the road. You’re not on a schedule, and you see a sign for a scenic overlook or an old store. You make the call to turn off your current trail and check it out. Every single time I’ve done it, I was rewarded with a new experience.
When you’re digging around your server, you may start to find the same thing. A new and interesting experience.
While checking out why the % procedure cache hit ratio had fallen, I found one procedure that looped through records to process files that had recently been loaded into the system. The procedure wasn’t a performance hog by any measure. But I did notice one SELECT that was costing it over half a second of wait time each time it ran. And it had already run 500 times in the previous hour.
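A query like the one below is one way to spot that kind of statement — frequently executed, modestly expensive each time. This is a sketch against the standard query stats DMV, not the exact query I ran:

```sql
-- Statements with the highest execution counts, with average elapsed time.
-- The SUBSTRING arithmetic carves the statement out of the batch text.
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;
```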
I dug into the SELECT. I checked out the join criteria and the columns being returned, and found that only one of the three columns involved was covered by an index. I quickly composed an index on my development box, ran the CREATE statement, and checked the performance of the SELECT: 0.6 seconds down to 0.001 seconds. Nice. Before putting this into production, I built a WHILE loop to emulate the processing that would happen on the table.
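The index was a covering one: key columns for the join and filter, with the remaining returned column in the INCLUDE clause so the SELECT never has to touch the base table. The table and column names below are hypothetical — the real ones belong to that file-processing procedure:

```sql
-- Hypothetical names for illustration: a covering index so the SELECT
-- can be satisfied entirely from the index (key columns + INCLUDE).
CREATE NONCLUSTERED INDEX IX_FileQueue_Status_LoadDate
    ON dbo.FileQueue (FileStatus, LoadDate)
    INCLUDE (FileName);
```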
More or less it was just a read followed by a write. Easy enough. I set up a test of 1,000 reads and writes against this table and captured the times. Over 90% of the time the read took 1 millisecond, and the writes stayed around 50 milliseconds. 99% of the time the read and write together completed more quickly than the read alone did in the control tests!
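The test harness was roughly the shape below — a WHILE loop doing a read then a write per iteration, with statistics on so the times can be captured. Again, the table and column names are stand-ins:

```sql
-- Rough sketch of the read-then-write loop test (names are hypothetical).
SET STATISTICS TIME ON;

DECLARE @i INT = 1;
WHILE @i <= 1000
BEGIN
    -- The read the new index is meant to cover.
    SELECT FileName, LoadDate
    FROM   dbo.FileQueue
    WHERE  FileStatus = 'LOADED';

    -- The write that now also has to maintain the new index.
    UPDATE dbo.FileQueue
    SET    FileStatus = 'PROCESSED'
    WHERE  FileID = @i;

    SET @i += 1;
END;

SET STATISTICS TIME OFF;
```

The point of the write half of the loop is to confirm the new index doesn’t make the inserts and updates pay more than the reads save.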
With this unit test done, and after talking to the programmers about any other queries they run against these tables (to make sure I wouldn’t foul up anything else), I was ready to create the index in production. (I don’t yet have a QA environment set up… I’m working on that one!)
My point is, I wasn’t looking for missing indexes. I wasn’t looking for poorly performing queries. I was looking to see why we were having to create a bunch of fresh execution plans on the server. What I found turned out to be a little gem. And a success. Keep your eyes open for these opportunities. You never know what you may find along the way!