Demo: Give me correct data or give me death (15 minutes)

Transcript

What we're going to take a look at now: I've been showing you and stepping you through examples where these were multi-statement transactions, where we did 'begin tran' and then selected the data.

You might think, okay well, this is only an issue for me if I have reports that have multiple select statements.

That’s not the only time you could get incorrect data, and that’s what we are going to show in this demo: how even a single select statement could see inconsistent data, and how repeatable read and serializable prevent that… and there are some tradeoffs there.

Time to restore our database and configure a temporal table

In our demo environment, we are going to do a fresh restore of the BabbyNames sample database. This is a small 43 MB database backup that is on GitHub. And we've restored it in one second… it's not very big.
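If you want to follow along, the restore is along these lines. This is just a sketch: the backup file path here is an assumption, so point it at wherever you put the download from GitHub.

RESTORE DATABASE BabbyNames
    FROM DISK = N'C:\Backups\BabbyNames.bak'  -- hypothetical path; use your download location
    WITH REPLACE;
GO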

We're going to start off our demo in the read committed isolation level. This is the default isolation level if you have installed SQL Server (or someone has installed it for you) and it's not Azure SQL Database. What we're going to do: we're going to be using this table named ref.FirstName.

The first thing I do is make it a temporal table. The locking behavior you'll see in the demo stays the same whether or not it is a temporal table. The temporal table just allows me to ask: what was the data historically? Because when we modify data, the temporal history table will record the history of the data.

So I’m going to create ref.FirstName_History, and then on the ref.FirstName table I’m adding some hidden columns to track the system time. I’m just setting up the temporal table. And now I will set system versioning on, tell it to use my history table.
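Here's a minimal sketch of that setup. The period column names (SysStartTime, SysEndTime) and constraint names are my assumptions, and for brevity this version lets SQL Server create the history table rather than creating it separately first, as I do in the demo script.

-- Add hidden period columns to track system time
ALTER TABLE ref.FirstName ADD
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT DF_FirstName_SysStart DEFAULT SYSUTCDATETIME(),
    SysEndTime datetime2 GENERATED ALWAYS AS ROW END HIDDEN
        CONSTRAINT DF_FirstName_SysEnd DEFAULT '9999-12-31 23:59:59.9999999',
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime);
GO

-- Turn system versioning on and point it at the history table
ALTER TABLE ref.FirstName
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = ref.FirstName_History));
GO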

Now, create two indexes so we can get a very specific execution plan

For the query in the demo, I'm going to create two nonclustered indexes on ref.FirstName. Because what we're going to do is create a query plan that uses something called index intersection.

Index intersection means: all the data that I need for my query is not in any one index on this table, but I can read from two nonclustered indexes, in this case LastReportYear_Etc and FirstReportYear_Etc, and combine that data. I can intersect those indexes from the same table, and then I won't have to go scan the clustered index.

Neither of these indexes is perfect for our query, but they do have all of the columns needed for our query.
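To give the idea, hypothetical definitions for the two indexes might look like this; the exact key and included columns come from the demo script. What matters is that each index has only part of what the query needs, and together they have all of it.

CREATE NONCLUSTERED INDEX LastReportYear_Etc
    ON ref.FirstName (LastReportYear)
    INCLUDE (FirstNameId, FirstName);
GO
CREATE NONCLUSTERED INDEX FirstReportYear_Etc
    ON ref.FirstName (FirstReportYear)
    INCLUDE (FirstNameId, TotalNameCount);
GO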

Our query is looking at ref.FirstName for names that were first reported in the year 1880 and last reported in the year 1980. So these names had a hundred-year lifespan, and then seemed to go out of fashion. From the eighties to the eighties.
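The query itself is along these lines (a sketch, with column names assumed):

SELECT fn.FirstName, fn.TotalNameCount
FROM ref.FirstName AS fn
WHERE fn.FirstReportYear = 1880
  AND fn.LastReportYear = 1980;
GO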

Turning on my actual execution plan with this button right here– I’m going to look at how this query was run. Looking at my execution plan, here is what the index intersection looks like.

The build phase

We are getting some data from LastReportYear_Etc and pushing it into a hash match operator. This input up here at the top is called the build phase of the hash match, and it runs first. So we run the build phase, and we get what we can from LastReportYear_Etc.

The probe phase

Then, after that's done, we're going to run what's called the probe phase and get what we can from the index FirstReportYear_Etc. We're going to put these together in a hash match join, and then return the rows to our user.

The build phase runs first

The important thing here is that since the build phase runs first, we can get data from LastReportYear_Etc, and we may be blocked when it comes to FirstReportYear_Etc.

So we’re going to see how this plays out.

Time to demo this isolation level madness

We are in the read committed isolation level for our select query. In another session, before we start our query, a modification happens to be running, and it starts right before our select query starts. The modification has two statements in it, and at this point it has updated ref.FirstName and changed TotalNameCount on a row that has FirstNameId 5466.

Now, before it gets to its second update, we happen to start our select query, which is in the read committed isolation level. That’s looking at those names that were first reported in 1880 and last reported in 1980.
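As a sketch, the competing modification looks something like this. The new first name value is hypothetical; the narration only says that a first name value gets changed.

BEGIN TRAN;

    -- Statement 1: runs before our select starts
    UPDATE ref.FirstName
    SET TotalNameCount = 111111
    WHERE FirstNameId = 5466;

    -- ... our select query starts here, in read committed, and blocks ...

    -- Statement 2: changes the first name ('Babs' is a made-up value)
    UPDATE ref.FirstName
    SET FirstName = 'Babs'
    WHERE FirstNameId = 5466;

COMMIT;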

Notice my runtime here: I just froze this, but we're already up to nine seconds. This query is normally really, really fast… we are blocked.

What locks does our select query have under read committed, when it is blocked?

I’m going to look at the locks that are happening using a free procedure that you can get at whoisactive.com. It is written by Adam Machanic.

I’m running sp_WhoIsActive with the @get_locks=1 parameter, because I want to see more about my blocking.
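That call is simply:

EXEC sp_WhoIsActive @get_locks = 1;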

The query that is holding the lock: it did begin tran, and it updated ref.FirstName. It’s a modification query: even if it’s in the read committed isolation level, if I explicitly do begin tran and do that update, I’m going to hold on to the locks for that update until I commit. That modification: I did the update and it’s holding onto that lock.

And my select query that I'm running in the read committed isolation level is blocked: it is waiting on LCK_M_S. We're waiting on a shared lock.

If I go over to the right, since I did @get_locks = 1, I can see the locks that I have been granted and the locks that I'm waiting for. My select query, which is in read committed, has all of its locks on ref.FirstName. We are waiting on one key lock on the FirstReportYear_Etc index. That's the one in the probe phase. We have intent shared on the object.

And we have been granted intent shared on the FirstReportYear_Etc index. I'm going to ask for some shared locks, and that was fine, but when it comes to actually getting the shared lock, we're waiting.

Notice that we have no locks at all on LastReportYear_Etc. Our query was able to run the build phase and read some data, and then, when we went to run the probe phase, it got blocked. But we let go of all the locks on LastReportYear_Etc, because we're in read committed: we take locks and release them as soon as we're done.

Because we release locks as soon as we're done, this allows a little loophole. That modification query has one more modification to do before it finishes up: it's going to change a first name value. It's able to do that and commit the transaction. As soon as it commits, our select query is unblocked and is able to read the rest of the data and return it to us.

But there’s something kind of weird about this data. The row that we modified is FirstNameId 5466. Here’s what the data we get back looks like: it says the first name is Babe, and the total name count is 111,111.

Well, hmm, that’s interesting.

Is this data correct?

Before we modified this row, what did the data look like? Well, we can see from our temporal table. Here is what the query returned; I'm not doing any cheating here. I have copied it in: it says Babe, 111,111. That is what our query said the data is. If I go and look using the temporal table, what was the data here? Let's get it all on the screen at the same time.
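A sketch of that temporal query, using FOR SYSTEM_TIME ALL to see every version of the row (the period columns are hidden, so they have to be named explicitly):

SELECT FirstNameId, FirstName, TotalNameCount, SysStartTime, SysEndTime
FROM ref.FirstName FOR SYSTEM_TIME ALL
WHERE FirstNameId = 5466
ORDER BY SysStartTime;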

At first, before it was updated, the data said the name was Babe, and there were 728 Babes. We got the name from there. But the name count we got after the update. That update had two statements in it: it changed the first name as well as the name count.

So I got the name count from the after, but I got the name from before.

What if this was financial data? What if this was safety data? I mean, this data is just wrong. This never existed in my database in a committed form, in that state. I got "committed, sort of" data.

I got a non-repeatable read, is what I got.

And read committed can have non-repeatable reads.

We can protect ourselves from this by using higher isolation

Repeatable read and serializable are two choices we can make; they're not the only choices. We are going to talk about more choices at the end.

But first I'm going to reset my table: I'm putting the name back to Babe and the name count back to 728, and I'm cleaning out my version history table.

We're going to run through this scenario again and look at how it's different if I use repeatable read. Over here, our update starts again: it has run the first update statement.
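On the select side, the setup this time is a sketch like the following, with the same query as before:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO

BEGIN TRAN;

    SELECT fn.FirstName, fn.TotalNameCount
    FROM ref.FirstName AS fn
    WHERE fn.FirstReportYear = 1880
      AND fn.LastReportYear = 1980;

-- (no commit yet; as we're about to see, this session won't get the chance)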

Meanwhile, we start our transaction, but this time we are saying set transaction isolation level repeatable read, as in the sketch above. Now I go ahead and run my select query. What do the locks look like? If we look in sp_WhoIsActive, it looks pretty similar, because I'm asking for a shared lock. But when we look in detail at the locks that I have, we are going to see some differences. I go over to my lock graph, and when I look at what locks I have and what locks I'm waiting for, there are more lines here. There are more locks involved. I am waiting on a key lock on FirstReportYear_Etc.

Here's my lock wait. That is the same. But I'm still holding the locks that I took out when I read data from LastReportYear_Etc. I was able to do the build portion of my hash join first and get some of the data, but instead of releasing those locks right away, I kept them.

So I'm saying: if I read these rows again, they need to be the same. I'm going to keep these locks on there. Now things play out differently. When my other transaction goes to update its other rows, it takes a second. Look, that took four seconds.

Uh oh, that’s a deadlock

It took a little longer because the deadlock manager stepped in.

Because I was still holding those locks, when it went to update the other index it was blocked by me on that index in my build phase. But I was blocked by it on the index in the probe phase. We were blocked by one another, so the deadlock manager had to step in and pick who to kill off. It killed off the select query.

So the select query doesn’t get any incorrect result.

Now, this looks like a problem: we have to rerun our transaction.

But this is a feature. I have been prevented from getting a non-repeatable read, and that data was just wrong. Most people would consider that data NOT OK.

So it says rerun your transaction.

Now, if I wanted my select query to win, I can change my deadlock priority and make this select query win. If I don't specify any priority, the deadlock manager will figure out which query looks like the least expensive to kill off, and that was my select query.
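For example, running this in the select query's session before the query makes it less likely to be chosen as the victim:

-- Valid values are LOW, NORMAL, HIGH, or an integer from -10 to 10
SET DEADLOCK_PRIORITY HIGH;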

What about serializable?

We’re going to quickly reset again, we’re going to go back up, and we’re going to say: okay this next time, we are going to use serializable.

The update starts the exact same way. It’s run the first statement.

We have set our transaction isolation level to serializable in this case. Now we run our select. Once again, we are blocked.
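Only one line changes from the repeatable read sketch:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;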

We're going to take a look at the locks in sp_WhoIsActive, and something is a little bit different here. This time we have a different kind of lock that we're waiting for. This is a range lock. We're saying: okay, I need to get a lock on the probe phase there, and I need a range lock, not just a shared lock, but a whole range. Because when I'm in serializable, I not only want to protect rows and make sure that if I read them again they're the same, but I have to prevent inserts as well.

When I look at the lock detail in sp_WhoIsActive, I can see that I am waiting on a range lock on FirstReportYear_Etc. I already read the data from LastReportYear_Etc, and I am holding onto that range lock, saying: you can't do inserts in there, because that might mean that if I look at that set of data again, I get different results.

Similarly, our transaction goes to do the update, I'm holding locks where it needs to update, and we are blocking one another. The deadlock manager steps in, and in this case it says: the select query is the one that's got to go. Let's go ahead and set our isolation level back to read committed. Even though I have it commented out, I didn't highlight the comment.
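That reset is just:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;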

Now we’re back in read committed, so I don’t forget and leave myself in serializable.