What is the best way to get correct data? (7 minutes)

More info: Considering RCSI? Don’t forget possible race conditions there.

I don’t go into the tradeoffs for Read Committed Snapshot isolation (RCSI) deeply in this video. In addition to the overhead of versioning and timestamp information, there is also a risk of race conditions if you enable RCSI on an existing application. Learn more about that in my article here.

Transcript

So, what do we do? Does this mean that we should go around and use repeatable read and serializable? Well, not necessarily, because there are a lot of tradeoffs when it comes to blocking and deadlocks. And, that’s just not okay, right?

We’ve been talking mostly about classic isolation levels so far

Isolation levels in SQL Server: the classic isolation levels are what we’ve mostly been talking about today. These are the ones that we tend to think of a lot of the time.

Read uncommitted, read committed, repeatable read, and serializable. I’ve drawn in the phenomena that can happen under them.

If we’re in read uncommitted, we can get dirty reads, non-repeatable reads, and phantom reads.

In read committed, at least we don’t have the dirty reads, but we’ve still got the clown and the ghost.

Under repeatable read, we can still have phantom reads.

So of all these isolation levels, it’s very rare for me to find people using repeatable read. Because if you care enough to want to get rid of a non-repeatable read, you also want to get rid of phantoms.

So, if people are using pessimistic isolation levels, and they are using read committed and want more protection, they go to serializable. Because they don’t want the phantom, either.
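For context, here’s a minimal sketch (not from the video) of what asking for one of those stricter pessimistic levels looks like. The dbo.Revenue table and the date filter are made up for illustration; read committed is what you get if you don’t ask for anything.

```sql
-- Made-up example: request serializable for one transaction.
-- The default, if you don't ask for anything, is read committed.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRAN;

    -- Range locks are held until commit, so a second read of this range
    -- can't see new "phantom" rows inserted by other sessions...
    SELECT SUM(Amount)
    FROM dbo.Revenue
    WHERE InvoiceDate >= '2017-01-01';

    -- ...but concurrent writers to that range are blocked until we finish,
    -- which is where the blocking and deadlocks come from.
COMMIT;
```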

But the tradeoffs here are: as concurrent activity goes up, we get lots of blocking, lots of deadlocks, and things get slow

You saw what happens when the deadlock manager kicks in. You don’t want it to run constantly, so it checks in periodically, and it has to kill off somebody. Not only does the deadlock victim have to retry, but everybody else involved in the deadlock gets slowed down by the blocking and by waiting for the deadlock manager to kick in.

But thank goodness we do have the deadlock manager, or they would be stuck there forever.

And this is because we’re using locking mechanisms.

If we are going to use lock-based serializable against disk-based tables, we have to be really careful about setting our deadlock priority, so that the victims are who we want the victims to be. We have to handle those deadlocks really, really carefully, and make sure that things are getting retried and that we’re getting notified promptly. And we need to be really good at tuning indexes, because tuning indexes to get around these deadlocks, so that we don’t end up in these situations, can be really helpful.
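As a rough sketch of that kind of handling (the table name and retry count here are made up), you can volunteer a session to be the deadlock victim and retry when error 1205 comes back:

```sql
-- Ask to be chosen as the victim rather than the more critical process.
SET DEADLOCK_PRIORITY LOW;

DECLARE @retries int = 0;

WHILE @retries < 3
BEGIN
    BEGIN TRY
        BEGIN TRAN;
            UPDATE dbo.Revenue
            SET Amount = Amount * 1.1
            WHERE InvoiceDate >= '2017-01-01';
        COMMIT;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;

        IF ERROR_NUMBER() = 1205  -- we were chosen as a deadlock victim
            SET @retries += 1;    -- in real code: also log/alert so you're notified
        ELSE
            THROW;                -- some other error: surface it
    END CATCH;
END;
```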

So we’ve got to be really good at getting the deadlock graphs, and tuning indexes to get around them. As we make code changes, we will keep running into problems. It’s tricky.
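If you don’t already have your own Extended Events session for this, one common way to grab recent deadlock graphs is to read them out of the built-in system_health session’s ring buffer, something like:

```sql
-- Pull recent xml_deadlock_report events from the system_health ring buffer.
SELECT
    xed.value('@timestamp', 'datetime2') AS deadlock_time,
    xed.query('.') AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS tab
CROSS APPLY tab.target_data.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS q(xed);
```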

A good alternative that we’ve had since SQL Server 2005 is optimistic locking

These are the read committed snapshot and snapshot isolation levels. The tradeoffs there are different.

These optimistic isolation levels: as soon as we enable one of them or both of them, SQL Server starts a process of versioning for our modifications. So for that case when we’re updating data: as soon as that update happens, SQL Server creates a copy of the previous image of the data in tempdb. We have a version of the data from before the update, so anyone who’s reading, if their transaction started before that update committed, can go read the previous version in tempdb. They don’t have to be blocked.

Under read committed snapshot and snapshot, readers do not block writers, and writers do not block readers. But it comes at the cost of both versioning in tempdb and putting timestamps on the tables themselves, so that we know when data was changed. So there’s some extra space used on every table in our database itself, as well as tempdb usage.

But we don’t get all that blocking.
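Enabling them is a pair of database-level settings. ‘YourDatabase’ is just a placeholder here, and flipping READ_COMMITTED_SNAPSHOT on is usually done in a maintenance window, because it needs exclusive access to the database to complete:

```sql
-- Placeholder database name; run during a quiet period / maintenance window.
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- WITH ROLLBACK IMMEDIATE kicks out other sessions so the change can complete.
ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```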

Read committed snapshot prevents phantoms and non-repeatable reads, but only for an individual statement

Not for an entire transaction. So for that example of the report that said: begin tran, select the sum of revenue in one statement, select the detail in another: in read committed snapshot, we could get different data from the two statements.

If we use snapshot isolation…

we will get data consistent with the first time data is accessed in the entire transaction
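Here’s a sketch of that report under snapshot isolation, again using a made-up dbo.Revenue table. Under read committed snapshot, each SELECT sees data as of the start of that statement, so the two results could disagree if a write commits in between; under snapshot, both see data as of the transaction’s first read.

```sql
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;  -- needs ALLOW_SNAPSHOT_ISOLATION ON for the database

BEGIN TRAN;

    -- Both statements see the data as of the first read in the transaction.
    SELECT SUM(Amount) AS TotalRevenue
    FROM dbo.Revenue;

    SELECT InvoiceDate, Amount
    FROM dbo.Revenue;  -- detail is consistent with the total above

COMMIT;
```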

Read committed snapshot, though, is still very useful…

and it is the default isolation level in Azure SQL Database. That’s because, for that whole example we worked through with the hash match and the index intersection, the build and the probe, we are protected at the statement level by read committed snapshot. We cannot get that inconsistent data there. And we don’t have the deadlock or the blocking, because we just use the versions in tempdb. Readers don’t block writers, writers don’t block readers.

Which tradeoffs are better?

So if we are comfortable… this is really a question of which tradeoffs we want. Do we want the tradeoffs of versioning and the extra space for timestamps that come with optimistic ways to raise our isolation? Or would we prefer the extra locks that come with pessimistic ways to raise our isolation?

In some cases, in some systems, we’ll go with pessimistic, because it’s just one query where we care. That’s rare.

A lot of times, the applications that are using these more pessimistic ways, usually serializable, are legacy applications that were written a long time ago, where people either didn’t realize the optimistic options existed, or some of the code predated them… even though we’ve had them since SQL Server 2005.

I prefer to be optimistic, like Azure SQL DB

In Azure SQL DB, your default isolation level is read committed snapshot. I prefer optimism as well.

And, it’s not that snapshot isn’t available in Azure SQL DB, it’s just that even if you don’t ask for anything, you’re going to get read committed snapshot. You can also use snapshot.
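If you want to check which of these options are on for a database, whether in Azure SQL DB or on premises, something like this works:

```sql
-- Shows whether RCSI is on and whether snapshot isolation is allowed.
SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM sys.databases
WHERE name = DB_NAME();
```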

So I also prefer it because I think it’s really hard to know when you might get inconsistent data, and really hard to know when you might need serializable, because of weird things like, okay: how do I know when I might get index intersection, and it runs concurrently with a modification pattern that could cause a race condition?

I’m not psychic. It’s hard to know!

So, I find optimism easier. I just have to make sure that I can manage and monitor the versioning, and that I have enough space for the timestamps on all of the tables as well. [Editors note: for RCSI, check out the link at the top of this page to consider race conditions as well.]
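For monitoring that versioning, here are a couple of starting points. The first DMV is only available in newer versions of SQL Server and in Azure SQL DB; the second works anywhere by looking at tempdb’s file space.

```sql
-- Version store usage broken out by database (newer SQL Server versions / Azure SQL DB).
SELECT DB_NAME(database_id) AS database_name,
       reserved_page_count,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;

-- Version store pages in tempdb overall, converted to KB (8 KB pages).
SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM tempdb.sys.dm_db_file_space_usage;
```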

Thanks for joining me folks! I’ll see you next year.