I’ve never claimed to be great at math, but until recently I thought I knew how to count to one. Zero… one. That’s what we learned in kindergarten.
Apparently SQL Server didn’t go to kindergarten.
Today I was working on some code samples for a user question, and I hit a weird roadblock.
There was a bunch of garbage in my execution plan that I couldn’t explain. And by ‘garbage’, I mean a nested loop into a whole branch of the plan that I hadn’t asked SQL Server to run – and a warning about an implicit conversion possibly causing problems with the quality of my execution plan.
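As a quick illustration of the conversion warning itself, here’s a minimal sketch (the table and data are made up): comparing a varchar column to an nvarchar literal forces SQL Server to implicitly convert the column, which is exactly the kind of thing that produces that warning.

```sql
-- Hypothetical table for illustration.
CREATE TABLE dbo.Users (
    UserId INT IDENTITY PRIMARY KEY,
    DisplayName VARCHAR(100) NOT NULL,
    INDEX IX_Users_DisplayName (DisplayName)
);

-- The N'' prefix makes the literal nvarchar. Since nvarchar outranks
-- varchar in data type precedence, the DisplayName column gets a
-- CONVERT_IMPLICIT, and the actual plan can show the conversion warning.
SELECT UserId
FROM dbo.Users
WHERE DisplayName = N'Kendra';

-- Matching the literal's type to the column avoids the conversion.
SELECT UserId
FROM dbo.Users
WHERE DisplayName = 'Kendra';
```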
I’m sometimes asked if the number of CPU cores used by a query determines the number of tempdb files that the query can use.
Good news: even a single threaded query can use multiple tempdb data files.
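If you want to see it for yourself, here’s a rough sketch (a demo idea, not a precise accounting): run a serial query that does work in tempdb, and compare per-file allocations in sys.dm_db_file_space_usage before and after.

```sql
USE tempdb;
GO
-- Snapshot allocations per tempdb data file "before".
SELECT file_id,
       user_object_reserved_page_count,
       internal_object_reserved_page_count
FROM sys.dm_db_file_space_usage;

-- A serial query (MAXDOP 1) that creates work in tempdb:
-- a big sort into a temp table.
SELECT TOP (1000000) o1.object_id, o1.name
INTO #scratch
FROM sys.objects AS o1
CROSS JOIN sys.objects AS o2
ORDER BY o1.name
OPTION (MAXDOP 1);

-- Snapshot "after": allocations typically grow across multiple files,
-- because tempdb round-robins allocations among its data files
-- regardless of the query's degree of parallelism.
SELECT file_id,
       user_object_reserved_page_count,
       internal_object_reserved_page_count
FROM sys.dm_db_file_space_usage;

DROP TABLE #scratch;
```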
You can’t do everything with a columnstore index – but SQL Server’s optimizer can get pretty creative, and it may use a columnstore index in ways you don’t expect.
Today’s question is about why a query might be slow at first, then fast the next time you run it.
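One of the most common causes is caching: the first run has to compile a plan and read pages from disk, while later runs reuse the cached plan and find the pages already in memory. On a test instance (never production!), you can compare the two states like this – dbo.BigTable is a stand-in for any large table:

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

CHECKPOINT;              -- write dirty pages so the next command can flush them
DBCC DROPCLEANBUFFERS;   -- empty the buffer pool (test instances only!)

SELECT COUNT(*) FROM dbo.BigTable;  -- cold run: physical reads, slower

SELECT COUNT(*) FROM dbo.BigTable;  -- warm run: logical reads only, faster

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```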
This is one of those little details that confused me a ton when I was first working with execution plans.
One problem with learning to work with plans is that there’s just SO MUCH to look at. And it’s a bit spread out. So, even when looking at a single tooltip, things can be confusing.
Let’s talk about the nested loop operator, which can be particularly weird to understand.
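One concrete example of that weirdness: in an actual execution plan, the row counts on the inner side of a nested loop are totals across all executions, so you have to read them together with “Number of Executions”. Here’s a tiny sketch to generate one for inspection (the tables are hypothetical):

```sql
-- Hint the optimizer toward a nested loop so the actual plan shows
-- "Number of Executions" on the inner side. If 100 rows come from the
-- outer input, the inner seek runs 100 times, and its "Actual Number
-- of Rows" is the total across all of those executions.
SELECT o.OrderId, u.DisplayName
FROM dbo.Orders AS o
INNER JOIN dbo.Users AS u
    ON u.UserId = o.UserId
OPTION (LOOP JOIN);
```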
The best features are the ones you use all the time. SQL Server 2016’s Management Studio brings improvements for navigating around execution plans.
I like SQL Server’s new Live Query Statistics feature a lot for testing and tuning large queries. One of my first questions was whether this could replace using actual execution plans, or if it’s useful to use both during testing.
Finding: Both are useful. And both can impact query performance.
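A handy detail if you’re experimenting: Live Query Statistics is built on the sys.dm_exec_query_profiles DMV, so while a profiled query runs you can also peek at the same per-operator counts from another session:

```sql
-- Per-operator progress for queries that are running with profiling
-- enabled (the data source behind Live Query Statistics).
SELECT session_id,
       node_id,
       physical_operator_name,
       row_count,           -- rows produced so far
       estimate_row_count   -- the optimizer's estimate for the operator
FROM sys.dm_exec_query_profiles
ORDER BY session_id, node_id;
```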