Should I Upgrade to SQL Server 2016? (Dear SQL DBA Episode 22)

on November 17, 2016

It’s a big week for SQL Server! And it’s the perfect time for this week’s question, which is about explaining to your management why it’s worth upgrading to SQL Server 2016, and which features you can use right away.

Watch the 24 minute video, scroll down to read the article, or subscribe to the podcast.

Three different people asked me a variation of this question recently at the SQL PASS Summit:

Dear SQL DBA,

We recently got budget to upgrade our SQL Server. My manager wants us to go with SQL Server 2014, because 2016 is so new. How can I convince management that 2016 is better?

Lucky DBA

As of yesterday, you have more reasons than ever to go with SQL Server 2016.

SQL Server 2016 Service Pack 1 was released, and not only do Service Packs give managers warm fuzzy feelings, but this one is full of goodies.

SQL Server 2016 Service Pack 1 adds more features to Standard Edition (plus Web, Express, and LocalDB)

I know most of y’all care about Standard Edition.

With SP1, you can use all sorts of (formerly) super-expensive features like data compression, partitioning, Columnstore, Change Data Capture, PolyBase, and more in Standard Edition.

A few features have scalability caps in “lower” editions. There’s a memory limit for In-Memory OLTP (aka Hekaton) and for Columnstore, and Columnstore also has restrictions on parallelism.

This isn’t every feature. Transparent Data Encryption and licensing for high-availability features haven’t changed, and neither have the memory and CPU limits for each edition: Enterprise Edition is still needed for top-end scalability and HA/DR.

But overall, you get way more bang for your licensing buck in Standard Edition in SQL Server 2016 SP1.

Or, in the case of Express Edition, you get more for free.

Read the list of features by edition here: https://www.microsoft.com/en-us/sql-server/sql-server-editions

Quick DBA wins to start using in Standard Edition

Your management may ask, “which of these features can you use without code changes? And how much will they help?”

That’s a great question.

  • Database snapshots can be immediately useful as part of your release process: take a snapshot before deploying a change, use it for verification (or to revert, if needed), and drop it after the change is done. You incur write overhead for modifications made while the snapshot exists, but it doesn’t copy the whole database. (There’s a minimal sketch of the workflow after this list.)
  • Data compression can make better use of your storage and memory.
    • Fine print: don’t just compress all your indexes. You need to determine which tables might benefit, and which type of compression to use, because there are CPU tradeoffs. Start with the Microsoft Whitepaper on Data Compression, and the estimation procedure sketched after this list.
  • Table Partitioning: maybe. Technically, you can implement table partitioning without a bunch of code changes. In practice, though, you frequently need to tune or rewrite some queries, because queries using things like TOP or GROUP BY can behave differently against partitioned tables. For tables that are sensitive to read performance, it usually takes both developers and DBAs to implement partitioning. However, you may have some easier wins, like a table you almost exclusively write to for logging purposes, where partitioning lets you truncate old partitions instead of running deletes.
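Here’s a minimal sketch of the snapshot-based release workflow mentioned above. The database name, logical file name, and snapshot path are placeholders:

```sql
-- Before deploying: snapshot the database. The logical file name must match
-- the source database's data file; the sparse file path is up to you.
CREATE DATABASE SalesDB_PreDeploy
ON ( NAME = SalesDB_Data, FILENAME = 'S:\Snapshots\SalesDB_PreDeploy.ss' )
AS SNAPSHOT OF SalesDB;

-- Deploy and verify. If the release goes badly, reverting rolls back
-- EVERYTHING changed since the snapshot was taken:
-- RESTORE DATABASE SalesDB FROM DATABASE_SNAPSHOT = 'SalesDB_PreDeploy';

-- Once you're confident in the release, stop paying the write overhead:
DROP DATABASE SalesDB_PreDeploy;
```

And for compression, you don’t have to guess: sp_estimate_data_compression_savings samples the table and estimates the savings. (The schema and table names here are made up.)

```sql
-- Run once with 'PAGE' and once with 'ROW', then weigh the estimates
-- against your CPU headroom before compressing anything.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'OrderHistory',
    @index_id         = NULL,   -- NULL = all indexes on the table
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = 'PAGE';
```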

Security can be pretty persuasive

When it comes to Standard Edition, the biggest change with 2016 SP1 is that Always Encrypted is now available.

If your database stores anything like credit card numbers or social security numbers, you want this. You REALLY want this if you’re the DBA, because your life is simpler if there’s no way you could exploit that data. Think of this as a really nice protective vault for your data kryptonite.
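To give you a flavor, here’s a minimal sketch of an Always Encrypted column. It assumes you’ve already created a column master key and a column encryption key (CEK_Customers is a made-up name). The encryption happens in the client driver, so the server never sees the plaintext:

```sql
CREATE TABLE dbo.Customers (
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    -- DETERMINISTIC allows equality lookups and joins, but requires a BIN2
    -- collation on string columns; RANDOMIZED is stronger but less queryable.
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Customers,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);
```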

Note: there are other security features in 2016 that may also be attractive, like Dynamic Data Masking. They were available in Standard Edition as of RTM, but are now available in Web, Express, and LocalDB as well.

Fine print: Transparent Data Encryption (TDE) is still an Enterprise Edition feature.

How I’d start talking about In-Memory OLTP


So, can you just start throwing your busiest tables into memory? Well, you can, but you might break a few things.

In-Memory OLTP, aka Hekaton, has a memory cap in the “lower” editions: 25% of that edition’s Buffer Pool Memory Limit. For Standard Edition, that works out to 32GB of memory for In-Memory tables per database, and when you run out of memory, things get real dicey.

So you’ve got to be careful. But you’ve got to be careful in Enterprise Edition, too, because your hardware doesn’t have an infinite amount of memory. Even in EE, you have to set up monitoring of how much memory In-Memory tables are using, alert when free memory is getting low, and learn how to adjust it.
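As a starting point for that monitoring, here’s a sketch against the sys.dm_db_xtp_table_memory_stats DMV; run it in the database that holds the memory-optimized tables:

```sql
-- How much memory is each memory-optimized table using right now?
SELECT
    OBJECT_SCHEMA_NAME(tms.object_id)           AS schema_name,
    OBJECT_NAME(tms.object_id)                  AS table_name,
    tms.memory_allocated_for_table_kb / 1024.0  AS allocated_mb,
    tms.memory_used_by_table_kb / 1024.0        AS used_mb
FROM sys.dm_db_xtp_table_memory_stats AS tms
WHERE tms.object_id > 0   -- skip internal system tables
ORDER BY tms.memory_allocated_for_table_kb DESC;
```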

Having In-Memory tables in Standard Edition gives you a much better place to learn.

I would look for less critical applications that might have a good use case for In-Memory tables: applications where it’s not a disaster if you have an outage. You need something with a lot of writes to make the experiment worth your while.

Logging databases where you don’t have to retain a ton of history come to mind– I’ve worked with a bunch of apps that can flip verbose logging to a database on and off, and the app is designed to keep going even if the logging database goes off the farm.
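A non-durable memory-optimized table is a natural fit for that experiment. This sketch uses made-up names and assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup. DURABILITY = SCHEMA_ONLY means the rows evaporate on restart, which is exactly what you want for throwaway verbose logging:

```sql
CREATE TABLE dbo.VerboseLog (
    LogId    BIGINT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    LoggedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Message  NVARCHAR(4000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```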

Essentially, you now have room to be creative and cautious with this feature. That’s gold.

What about Columnstore?

For Columnstore, there’s the same memory limit as for In-Memory tables: 25% of the Buffer Pool Limit for that edition. Plus, Standard Edition is limited to 2 cores for parallel Columnstore queries; Web and Express get just a single thread.

This is a bit easier to play with, as writes don’t stop when you run out of memory (unless it’s a Clustered Columnstore index on an In-Memory OLTP table). Reading Columnstore data from disk just isn’t as fast as reading it from memory.

You also have to make sure the write overhead on a Columnstore index isn’t slowing you down (and that you can monitor it), and that you’re maintaining the index properly.

For Columnstore, look for relatively narrow tables that have many millions of rows. I say narrow because you’ve got limited memory and parallelism to burn for this. And Columnstore shines when you’ve got many millions of rows to compress.
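As an illustration (the table and columns are hypothetical), a nonclustered Columnstore index on a narrow, busy table might look like this. The filter is optional, and new in 2016; it’s one way to keep the Columnstore write overhead off your hottest rows:

```sql
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_OrderLines
ON dbo.OrderLines (OrderId, ProductId, Quantity, UnitPrice, OrderDate)
WHERE OrderDate < '20160101';   -- only compress the "cold" rows
```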

It’s really not unusual anymore to have many millions of rows in OLTP tables, with a diverse mix of queries hitting them. OLTP tables are often narrow as well, so even with the limits, getting this feature in Standard Edition is a big deal.

SQL Server 2016 has Query Store (all editions)

This hasn’t changed, but it was already awesome.

Query Store gives you a way to track your query plans along with metrics on query resource usage and execution time. So you can do things like see if adding a filtered nonclustered Columnstore index made things faster… or oops, if it made things slower.

Query Store is a fantastic feature for DBAs and Developers. It helps them work better together, because nobody’s stuck saving giant XML strings of query text and plans in spreadsheets anymore.
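Turning it on is one ALTER DATABASE away. The database name and the sizing and cleanup values below are just placeholders and starting points, not recommendations:

```sql
ALTER DATABASE SalesDB SET QUERY_STORE = ON;
ALTER DATABASE SalesDB SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 512,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);

-- A quick look at the top CPU consumers Query Store has captured:
SELECT TOP (10)
    qt.query_sql_text,
    SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time_us
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q   ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_cpu_time_us DESC;
```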

Before yesterday, Query Store was one of my biggest reasons to argue that a SQL Server 2016 upgrade is far more attractive than 2014, even for Standard Edition. Now it’s just another item on the list.

You’ve still gotta test it

Things can go wrong in service packs. You’ve got to test any upgrade, even if it’s full of goodies.

2016 isn’t the new kid anymore. Meet SQL Server v.Next CTP 1

This train is picking up speed.

SQL Server v.Next CTP1 is now available for download on Windows, Linux, and even Mac (via Docker containers).

Check it out online here: https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-vnext-ctp

So it’s not like 2016 is the “new version” anymore, anyway.