Hinting cardinality estimation level and setting row goals (15 minutes)

Row estimates are critical, and you have a lot of influence over them!

Learn different ways to hint which cardinality estimator to use, and see if this speeds up our slow query.

Then learn the basics of setting row goals. When you use a hint to do this, it sets a goal for the whole query. In this case, maybe that’s a good thing? Check it out.

Transcript

Before we move on, there’s one more property I want to show you on our slow query execution plan.

Highlighting the select operator and looking at its properties, we can see the cardinality estimation model version.

It is 130. I am using the latest and greatest version of the cardinality estimator on my SQL Server 2016 instance. We got a new major version of the cardinality estimator in SQL Server 2014.
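
As an aside, the model version you see in the plan generally follows the database’s compatibility level, so a quick catalog query shows which estimator a database will get by default. A minimal sketch, using the standard sys.databases catalog view:

    -- The default CE model version follows the compatibility level
    -- (e.g. 130 for a database at compatibility level 130).
    SELECT name, compatibility_level
    FROM sys.databases;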

There were big improvements made to the algorithms that help figure out things like: how many rows are going to come out of a join, for instance, or how many rows will come out of a windowing function after I do a complex PARTITION BY and ORDER BY with a rank.

Sometimes it’s worth asking, would my query be faster with the older cardinality estimator?

Or, if you’re using the older cardinality estimator on a more recent version of SQL Server, you could ask, is this going to be faster with the new version? In my case, I can use a hint to test this with the older version of the cardinality estimator: trace flag 9481 says, use version 70, the model we had in SQL Server 2012 and prior.

And if I want to use that trace flag, I can turn it on with a query hint. This hint is placed at the end of my query and says OPTION (QUERYTRACEON), followed by the trace flag number that I want to enable just for this query.
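
As a rough sketch (the query here is a placeholder, not the demo query), that looks like this:

    -- Placeholder query: force the legacy (version 70) cardinality
    -- estimator for this statement only, via trace flag 9481.
    SELECT Col1, Col2
    FROM dbo.SomeTable
    OPTION (QUERYTRACEON 9481);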

Now, there is a gotcha with QUERYTRACEON. What I’m doing is turning on a trace flag for my query, and to turn on a trace flag in SQL Server, you have to have sysadmin permissions. So not just anyone can put this hint in their query and run it.

And there are some workarounds where a high-permission user can create a stored procedure containing the query with OPTION (QUERYTRACEON) in it, and that procedure can be run by lower-permission users. You’ll see a demo of that later, when we talk about query optimizer hotfixes.
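
As a rough sketch of that pattern (procedure, table, and role names here are all hypothetical):

    -- A high-permission user creates the wrapper procedure...
    CREATE PROCEDURE dbo.GetResultsLegacyCE
    AS
    BEGIN
        SELECT Col1, Col2
        FROM dbo.SomeTable
        OPTION (QUERYTRACEON 9481);
    END;
    GO
    -- ...and grants execute to lower-permission users.
    GRANT EXECUTE ON dbo.GetResultsLegacyCE TO SomeRole;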

But it is kind of a bummer that QUERYTRACEON takes high permissions to run.

But does it even help in our case? Let’s go ahead and give our query a run and see how this changes our execution plan and execution time. Well, that was definitely faster: I got my data back this time not in close to 13 seconds, but in close to six seconds.

So I got it back significantly faster. And looking at my execution plan, there’s something different about the operators in my plan. I still have a hash match. But instead of a merge join here, I have a nested loop join.

So my query plan did change, but I do still have a warning sign on my select operator. Let’s look at what that warning is about. Again, I need to click around a little bit to make sure I’m seeing the properties for the select, and I’ve got my warning down here. Popping it up: I again had a memory grant that was very high, but it wasn’t as high as last time. Instead of 600 megs of memory, it allocated almost 200 megs. It still used less than it was granted, so it still has an excessive memory grant, just not quite as excessive as before.

Looking at my pipes and my row estimates, I can see that this time it estimated 117,000 rows. And if I go back and look at the estimate going into my filter, it is once again accurate; that’s the same.

What changed is that, using that older version of the cardinality estimator, it was more conservative in estimating how many rows would be ranked number one.

Since it thought there were going to be fewer rows, it said, well, if I’m only dealing with that amount of rows, I am willing to go do a nested loop. With a nested loop, for every one of the rows that comes into the operator, I’m going to have to go down and do this clustered index seek. But for this estimated amount, it was willing to go do that lookup; it didn’t say, ah, it’s faster to just scan the whole thing and do a merge join.

So it decided to use a different join based on the lower row estimate, and that did help it be faster in this case. Looking at the properties of our select, we can confirm that it used that older version of the cardinality estimator: yes, it did use version 70.

Since I’m running SQL Server 2016 Service Pack 1, there’s another way that I can do this.

Rather than using QUERYTRACEON, I have a new feature called USE HINT. We got this in SQL Server 2016 SP1, and it means that I can use a query hint that requires lower permissions.

I don’t have to turn on a trace flag. I can now say option, use hint, force legacy cardinality estimation. So let’s highlight our query and give it a run. We should see the same execution plan and the same performance, ’cause I’m doing the exact same thing, just in a new way. I did get a slightly faster elapsed time; there’s a little bit of variance, since I’m running other stuff on my laptop.
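
Written out against a placeholder query (not the demo query itself), the new hint looks like this:

    -- Placeholder query: forces the legacy (version 70) cardinality
    -- estimator without needing sysadmin permissions.
    SELECT Col1, Col2
    FROM dbo.SomeTable
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));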

When I look at my execution plan, I’ve got my hash match, I’ve got my nested loop join, and I still have my warning on my select. Let’s look at our properties here, and yes, sure enough, this new way, USE HINT with force legacy cardinality estimation, did put me at cardinality estimation model 70, just like the trace flag did. One other great thing about this new syntax: I don’t know about you, but I hate remembering trace flags. I am not good at it.

There’s a dynamic management view where I can say, show me all of the available hints that I can use with USE HINT, and I don’t even have to remember how to spell them. I can copy them from here, I can paste them. I really like this new USE HINT syntax that we got in SQL Server 2016 SP1. I think it’s a big improvement over having to put trace flags in there, especially with those permissions gotchas.
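
For reference, that view is sys.dm_exec_valid_use_hints, available in SQL Server 2016 SP1 and later:

    -- List every hint name that USE HINT accepts on this instance.
    SELECT name
    FROM sys.dm_exec_valid_use_hints;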

So I have sped up my query quite a bit, but is this the best I can do? It is really nice that I have the option to use the old cardinality estimator, but leaving that hint in place has a cost: they’re improving the new cardinality estimator all the time, so if I leave the hint there, I’m going to have to keep testing over time to say, hey, maybe the new one is even better now. I’m going to have to keep track of this query and constantly retest it.

Maybe there is an easier way to do this, so let’s keep testing.

I could try to use other hints.

There’s a hint called FAST…

Which I think is a really funny hint, because it sounds like, I would like you to run my query fast. I mean, wouldn’t we always want our query to be run fast? What we’re actually saying is FAST n, and it’s a query hint that we place here at the end of the query.

So in this case, I used a one. What I’m telling SQL Server is: run this query with the goal of getting me one row back as fast as possible. It may take you longer to run the rest of the query than it would if you optimized it normally, but do whatever it takes to get me one row back as fast as you can; that is your big goal, to get me that one row. I can change the number here, but we’re going to run this first with FAST 1.
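
As a sketch against a placeholder query, that hint looks like this:

    -- Placeholder query: optimize the whole statement with a row goal
    -- of returning one row as fast as possible.
    SELECT Col1, Col2
    FROM dbo.SomeTable
    OPTION (FAST 1);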

What we are setting is called a row goal in SQL Server.

Row goals can appear even if you don’t use this FAST keyword: TOP and certain other T-SQL syntax can automatically set a row goal for part of your query or the whole query. With this hint, I’m saying: for my whole query, set a row goal so I get one row back really fast. And that has been accomplished.
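
To make that concrete, here’s a hypothetical TOP query that sets a row goal with no hint at all, just from the syntax:

    -- TOP introduces a row goal for the part of the plan feeding it;
    -- no OPTION clause is required.
    SELECT TOP (100) Col1, Col2
    FROM dbo.SomeTable
    ORDER BY Col1;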

And looking at my runtime, it isn’t too terrible here. I am under five seconds, so I’m not back in deep Slowville. I do still have a warning on my select operator, but check it out: my joins are different. I now have two nested loop joins; there is no hash join to be seen. So I actually got a different plan than I have seen so far. Well, what is the warning about this time?

Clicking on our select and looking at its properties, I did get an excessive memory grant. Not the hugest memory grant, but still, a 160 meg excessive grant. And look, I only used a tiny, tiny bit of it this time, but it still allocated more than it needed. How does that row goal look in our plan?

Looking at different operators, note that the estimated number of rows here is one.

That might look like, whoa, SQL Server went crazy with the statistics. Even if I go all the way to the right here and look at how many rows it estimated would come out of this index scan, it says estimated number of rows: one. Well, it isn’t that my statistics on my tables suddenly became crazy; this is because of what I did. I said, optimize for one row, and that impacts what I’m going to see in this execution plan. Its goal is to get me that one row back ASAP, and I’m seeing that right here in the plan: everything is optimized for one row.

So it’s not that the statistics are wacky, it’s that I asked for something a little bit wacky, and it worked to deliver it.

What if I try to get crafty, and set my row goal specifically to the number of rows that I end up actually getting back?

What if I say: optimize fast for all of the rows that I actually get? Well, let’s see if that works out for me. I’m trying to be tricky here, and say, no, no, no, this is the exact number of rows I get.
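
As a sketch (the count here is a placeholder, standing in for whatever the query actually returns):

    -- Hypothetical: set the row goal to the actual number of rows returned.
    SELECT Col1, Col2
    FROM dbo.SomeTable
    OPTION (FAST 117000);  -- placeholder count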

Well, what this does is set that row goal all across the whole query, not just after the filter.

So I do still get that execution plan with two nested loop joins, and if I look on the right side of the filter, I have a perfect estimate coming out of that filter. But of course, I also impacted the estimates everywhere else in my query, right?

Because this FAST hint is kind of a big, blunt tool, so that doesn’t necessarily lead to perfect performance. It’s not like using FAST with your actual number of rows is some magic trick that fixes everything: we’re actually slower than our fastest execution here, at 7.4 seconds, with the way this query was run with these estimates.

So there’s no magic formula of just plugging the number of rows into FAST. Well, what if I put a huge number into my row goal? What happens then, just out of curiosity?
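
Something like this, say, pushing the row goal up to the INT maximum (the exact value is my assumption; any number far larger than the table works for the experiment):

    -- Hypothetical: a row goal far larger than the table itself.
    SELECT Col1, Col2
    FROM dbo.SomeTable
    OPTION (FAST 2147483647);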

This is more than the number of rows in my table. Let’s go ahead and test our query with a huge row goal and say: optimize for a really large dataset. Well, my data is streaming back, and we’re past the 10 second mark.

Looking at our runtime, we are over 12 seconds now, with nine seconds of CPU time. And looking at our plan, we’re back to a hash match and merge join plan. Going over to the right, let’s see: how many rows did it think were going to come out of our index scan?

Well, SQL Server is smart enough that it’s not saying, I think your row goal number of rows is going to come out of here. It’s saying, I think it’s going to be about 5.7 million rows. SQL Server does know estimates of how many rows are actually in the objects, and those still play into this. My row goal here was far higher than that; if I put commas in this number, it’s way more than five million.

And of course, I’m deliberately asking for something unreasonable here: I’m at 2.1-plus billion for my row goal, and it didn’t fall for that. It did adjust based on its estimates, but this didn’t work out too well, either. And if we look at the properties on our select operator, we’re back to that 600 meg memory grant for the query. So setting a row goal for the plan can, in some cases, help speed it up, but it impacts the way rows are estimated across the entire plan. This is kind of a large, blunt instrument, and in my case it didn’t end up being better than hinting the old cardinality estimator.

We have more things that we can hint, though, and more things that we can try. We’re going to look at some more options next.