Does OPTION (RECOMPILE) Prevent Query Store from Saving an Execution Plan?

Recompile hints have been tough to love in SQL Server for a long time. Sometimes it’s very tempting to use these hints to tell the optimizer to generate a fresh execution plan for a query, but there can be downsides:

  • This can drive up CPU usage for frequently run queries
  • This limits the information SQL Server keeps in its execution plan cache and related statistics in sys.dm_exec_query_stats and sys.dm_exec_procedure_stats
  • We’ve had some alarming bugs where recompile hints can cause incorrect results. (Oops! and Whoops!)
  • Some queries take a long time to compile (sometimes up to many seconds), and figuring out that this is happening can be extremely tricky when RECOMPILE hints are in place

The new SQL Server 2016 feature, Query Store, may help alleviate at least some of these issues. One of my first questions about Query Store was whether recompile hints would have the same limitations as in the execution plan cache, and how easy it might be to see compile duration and related information.

Let’s Turn on Query Store

I’m running SQL Server 2016 CTP3. To enable Query Store, I open the database properties, where there’s a Query Store tab for enabling the feature. I choose “Read Write” as the operation mode so that it starts collecting query info and writing it to disk:

Query Store: ACTIVATE!

If you script out the TSQL for that, it looks like this:

USE [master]
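Only the first line of the script survives above. The rest looks something like the following: a sketch reconstructed from the documented ALTER DATABASE … SET QUERY_STORE syntax, using [ContosoRetailDW] as a stand-in database name, with illustrative option values rather than exactly what SSMS emitted for CTP3:

```sql
-- Stand-in database name; substitute your own.
ALTER DATABASE [ContosoRetailDW] SET QUERY_STORE = ON
GO
-- "Read Write" in the UI maps to this operation mode.
ALTER DATABASE [ContosoRetailDW] SET QUERY_STORE (OPERATION_MODE = READ_WRITE)
GO
```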

And Now Let’s Test Drive that RECOMPILE Hint

Now that Query Store’s on, I make up a few queries with RECOMPILE hints in them and run them: some once, some multiple times. After a little bit of this, I check to see what Query Store has recorded about them:

SELECT
  qsq.query_id,
  qsq.count_compiles,
  qsq.avg_compile_duration,
  qrs.count_executions,
  qrs.avg_duration,
  qsqt.query_sql_text,
  CAST(qsp.query_plan AS XML) AS mah_query_plan
FROM sys.query_store_query qsq
JOIN sys.query_store_query_text qsqt on qsq.query_text_id=qsqt.query_text_id
JOIN sys.query_store_plan qsp on qsq.query_id=qsp.query_id
JOIN sys.query_store_runtime_stats qrs on qsp.plan_id = qrs.plan_id
WHERE qsqt.query_sql_text like '%recompile%';

Note: I’ve kept it simple here and am looking at all rows in sys.query_store_runtime_stats. That means that if I’ve had query store on for a while and have multiple intervals, I may get multiple rows for the same query. You can add qrs.runtime_stats_interval_id to the query to see that.

Here’s a sample of the results:

query store results for recompile queries


YAY! For all my queries that were run with RECOMPILE hints, I can see information about how many times they were run, execution stats, their query text and plan, and even information about compilation.

And yes, I have the execution plans, too — the “CAST(qsp.query_plan AS XML) AS mah_query_plan” totally works.

It’s still early days for the Query Store feature and there’s a lot that we don’t know, but wow am I excited to learn more.

My SQLPASS 2015 Session Evaluations


I’m so happy, I need a cat picture.

I recently gave a day-long session at the SQLPASS Summit called “SQL Server Index Therapy Session”. 61 people attended, and let me tell you, I was thrilled with that number. I’ve co-presented lots of day- and week-long sessions, but this was the first day-long session I’d ever done solo at a conference, and 61 people was a terrific number.

I love the topic I presented on and got great questions from the audience. I felt like the session went really well from the feedback I got that day, but sometimes you learn more when session evaluations come back later.

And you know what? I did amazing. Here are my scores from 25 session responses, or 41% of attendees. 3 is a perfect score:

  • Overall Session – Did the title, abstract, and session level align to what was presented? 2.96
  • Session Content – Was the content useful and relevant? 2.92
  • Presentation – Was the speaker articulate, prepared, and knowledgeable on the subject matter? 3.0

I got a lot of great comments, but there’s one that made me so proud that I cried a little bit.

“Kendra was fantastic.  She provided a detailed explanation of how indexes work and differences between them.  There was so much that I thought I knew, and really didn’t.  She did a wonderful job presenting and kept the group engaged.”

That quote means so much to me because I love it when a speaker can show me something new inside something familiar. I’m so honored that I could pass that experience along to someone else.

I realize this isn’t a humble post. As technical people, it’s easy for us to always nit-pick ourselves and point out what wasn’t perfect. And of course I wasn’t perfect — I began the day putting sunscreen in my hair by accident, just to give you a little view into what Being Kendra is like.

It’s soooooo soooooo incredible to know that I did a great job and people learned a ton that day. It doesn’t make me feel lazy. It makes me want to work hard and try for that every. single. time.

The Mysterious telemetry_xevents Session in SQL Server 2016

When kicking the tires of SQL Server 2016 CTP3, I was interested to find a new session defined in the Extended Events folder. Hello, telemetry_xevents!

Why, I don't remember creating this myself.

Scripting it out, this session contains the following events:

ADD EVENT sqlserver.data_masking_ddl_column_definition,
ADD EVENT sqlserver.error_reported(
    WHERE ([severity]>=(20) OR ([error_number]=(18456) OR [error_number]=(17803) OR [error_number]=(701) OR [error_number]=(802) OR [error_number]=(8645) OR [error_number]=(8651) OR [error_number]=(8657) OR [error_number]=(8902) OR [error_number]=(41354) OR [error_number]=(41355) OR [error_number]=(41367) OR [error_number]=(41384) OR [error_number]=(41336) OR [error_number]=(41309) OR [error_number]=(41312) OR [error_number]=(41313)))),
ADD EVENT sqlserver.missing_column_statistics,
ADD EVENT sqlserver.missing_join_predicate,
ADD EVENT sqlserver.server_memory_change,
ADD EVENT sqlserver.server_start_stop,
ADD EVENT sqlserver.stretch_database_disable_completed,
ADD EVENT sqlserver.stretch_database_enable_completed,
ADD EVENT sqlserver.stretch_database_events_submitted,
ADD EVENT sqlserver.stretch_table_codegen_completed,
ADD EVENT sqlserver.stretch_table_remote_creation_completed,
ADD EVENT sqlserver.stretch_table_row_migration_results_event,
ADD EVENT sqlserver.stretch_table_unprovision_completed,
ADD EVENT sqlserver.stretch_table_validation_error,
ADD EVENT sqlserver.temporal_ddl_period_add,
ADD EVENT sqlserver.temporal_ddl_period_drop,
ADD EVENT sqlserver.temporal_ddl_schema_check_fail,
ADD EVENT sqlserver.temporal_ddl_system_versioning,
ADD EVENT sqlserver.temporal_dml_transaction_fail

There’s something quite odd about this session: it has no target! The data isn’t being written to a ring buffer in memory, to a file, or even to a counter.

So I did a little testing. I right-clicked the session and selected ‘Watch Live Data’ to see if I could consume the data flowing through in SQL Server Management Studio even though it didn’t have a target. And then I ran this in another session:
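The statement I ran didn’t survive into this post, but any error matching the session’s predicate will do. Here’s one way to trigger it (my reconstruction, not necessarily the original statement): a RAISERROR at severity 20 matches the [severity]>=(20) filter. Note that severity 20 requires sysadmin and WITH LOG, and it terminates your connection, so use a session you don’t mind losing:

```sql
-- Severity 20 matches the session's [severity]>=(20) predicate.
-- This kills the connection it runs on.
RAISERROR (N'Testing the telemetry_xevents session', 20, 1) WITH LOG;
```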


Sure enough, after a little while, my error appeared:



So just because the telemetry_xevents session doesn’t have a target doesn’t mean that the data can’t be consumed. I’m not sure exactly how this is being used yet, or if it will even be in the RTM of SQL Server 2016.

Did My Query Eliminate Table Partitions in SQL Server?

Working with table partitioning can be puzzling. Table partitioning isn’t always a slam dunk for performance: heavy testing is needed. But even getting started with the testing can be a bit tricky!

Here’s a (relatively) simple example that walks you through setting up a partitioned table, running a query, and checking if it was able to get partition elimination.

In this post we’ll step through:

  • How to set up the table partitioning example yourself
  • How to examine an actual execution plan to see partition elimination and which partitions are accessed. Spoiler: you can see exactly which partitions were used / eliminated in an actual execution plan.
  • Limits of the information in cached execution plans, and how this is related to plan-reuse
  • A wrap-up summarizing facts we prove along the way. (Short on time? Scroll to the bottom!)

How to Get the Sample Database

We’re using the FactOnlineSales table in Microsoft’s free ContosoRetailDW sample database. The table isn’t very large. We can check its size with this query:

SELECT
    row_count,
    reserved_page_count*8./1024. as reserved_mb
FROM sys.dm_db_partition_stats
WHERE object_id=OBJECT_ID('FactOnlineSales');

Here are the results:


The table has 12.6 million rows and only takes up 363 MB. That’s really not very large. We probably wouldn’t partition this table in the real world, and if we did we would probably use a much more sophisticated partition scheme than we’re using below.

But this post is just about grasping concepts, so we’re going to keep it super-simple. We’re going to partition this large table by year.

First, Create the Partition Function

Your partition function is an algorithm. It defines the intervals you’re going to partition something on. When we create this function, we aren’t partitioning anything yet — we’re just laying the groundwork.

CREATE PARTITION FUNCTION pf_years ( datetime )
    AS RANGE RIGHT
    FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01', '2009-01-01', '2010-01-01');

Unpacking this a bit…

DATETIME data type: I haven’t said what column (or even table) I’m partitioning yet — that comes later. But I did have to pick the data type of the columns that can use this partitioning scheme. I’ll be partitioning FactOnlineSales on the DateKey column, and it’s an old DateTime type.

RANGE RIGHT: You can pick range left or range right when defining a partition function. By picking range right, I’m saying that each boundary point I listed here (the dates) will “go with” the values in the partition to the right.

This means that the boundary point ‘2007-01-01’ will be included in the partition with the dates above it. That’s the rest of the dates for 2007.

Usually with date related boundary points, you want RANGE RIGHT. (We don’t usually want the first instant of the month, day, or year to be with the prior year’s data.)

VALUES: Why doesn’t the partition function go to present day? Well, the Contoso team apparently decided to use some other database after the end of 2009. That’s the latest data we have.
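If you ever want to double-check the boundary points a function defines, the catalog views have them. This is a quick sketch using the documented sys.partition_functions and sys.partition_range_values views:

```sql
-- One row per boundary point defined by pf_years
SELECT pf.name, prv.boundary_id, prv.value
FROM sys.partition_functions pf
JOIN sys.partition_range_values prv
  ON pf.function_id = prv.function_id
WHERE pf.name = 'pf_years'
ORDER BY prv.boundary_id;
```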

Second, Create the Partition Scheme and Map it to the Function

A partition scheme tells SQL Server where to physically place the partitions mapped out by the partition function. Let’s create that now:

CREATE PARTITION SCHEME ps_years
    AS PARTITION pf_years
    ALL TO ([PRIMARY]);

Let’s talk about “ALL TO ([PRIMARY])”. I’ve done something kind of awful here. I told SQL Server to put all the partitions in my primary filegroup.

You don’t always have to use a fleet of different filegroups on a partitioned table, but typically partitioned tables are quite large. Dumping everything in your primary filegroup doesn’t give you very many options for a restore sequence.

But we’re keeping it simple.

Now Partition the Table on the Partition Scheme

This is where it gets real. Everything up to this point has been metadata only.

Currently, the FactSales table has a clustered Primary Key on the SalesKey column and no nonclustered indexes. We’re going to partition the table by the DateKey column. The first step is to drop the clustered PK, like this:

ALTER TABLE dbo.FactSales 
  DROP CONSTRAINT PK_FactSales_SalesKey;

Now partition the table by creating a unique clustered index on the partition scheme, like this:

CREATE UNIQUE CLUSTERED INDEX cx_FactSales
  on dbo.FactSales (SalesKey, DateKey)
ON [ps_years] (DateKey);

We made a couple of important changes. The table used to have a clustered PK on SalesKey, but we replaced this with a unique clustered index on TWO columns: SalesKey, DateKey. There’s a reason for this: if we’re partitioning on DateKey and we try to create a unique clustered index on just SalesKey, I’ll get this message:

Msg 1908, Level 16, State 1, Line 31
Column 'DateKey' is partitioning column of the index 'cx_FactSales'. Partition columns for a unique index must be a subset of the index key.

DateKey is elbowing its way into that clustered index, whether I like it or not.

All right, now that we have a partitioned table, we can run some queries and see if we get partition elimination!
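Before running queries, it’s worth confirming the table really is partitioned. A quick look at sys.partitions should show one row per partition of the clustered index; with five boundary points and RANGE RIGHT, that’s six partitions:

```sql
-- Row counts per partition of the new clustered index
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.FactSales')
  AND index_id = 1  -- the clustered index
ORDER BY partition_number;
```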

Query the Partitioned Table and Look at the Actual Execution Plan

Our example query is this stored procedure:

CREATE PROCEDURE dbo.count_rows_by_date_range
  @s datetime,
  @e datetime 
AS
  SELECT COUNT(*) AS row_count
  FROM dbo.FactSales
  WHERE DateKey between @s and @e;
GO

exec dbo.count_rows_by_date_range '2008-01-01', '2008-01-02';

If we run that call to dbo.count_rows_by_date_range with “Actual Execution Plans” enabled, we get the following graphic execution plan:

It's a clustered index scan, but don't jump to conclusions.

We have a clustered index scan operator on the FactSales table. That looks like it’s scanning the whole thing, but wait: we might be getting partition elimination! This is an actual execution plan, so we can check.

Hovering over the Clustered Index Scan operator on Fact Sales, a tooltip appears!


Partitioned = True!

It knows the FactSales table is partitioned, and “Actual Partition Count” is 1. That’s telling us that it only accessed a single partition. But which partition?

To tell that, we need to right-click the Clustered Index Scan operator and select “Properties”:



Decoding this: The clustered index scan accessed only one partition. This was partition #4.
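You can sanity-check that partition number without an execution plan at all: the built-in $PARTITION function tells you which partition of a given function any value maps to. With RANGE RIGHT and our boundary points, partition 1 holds everything before '2006-01-01', so dates in 2008 land in partition 4:

```sql
-- Which partition of pf_years does each date map to?
SELECT $PARTITION.pf_years('2008-01-01') AS partition_number;  -- 4
SELECT $PARTITION.pf_years('2007-12-31') AS partition_number;  -- 3
```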

Let’s re-run our query to make it access more than one partition! We’re partitioning by year, so this should touch two partitions:

exec dbo.count_rows_by_date_range '2007-12-31', '2008-01-02';

Running this query with actual execution plans on, right-clicking the Clustered Index Scan, and looking at properties, this time we see it accessed two partitions: partition #3 and partition #4:


Just because you see “Clustered Index Scan” doesn’t mean you didn’t get partition elimination. However, even if you did get partition elimination, it may have needed to read from multiple partitions.

Can You See Partition Elimination in the Cached Execution Plan?

So far we’ve been looking at Actual Execution plans, where I’ve run the query in my session. What if this code was being run by my application, and I wanted to check if it was getting partition elimination?

If the execution plan was cached, I could find information on its execution and cached plan with this query:

SELECT
  eqs.execution_count,
  CAST((1.)*eqs.total_worker_time/eqs.execution_count AS NUMERIC(10,1)) AS avg_worker_time,
  CAST((1.)*eqs.total_logical_reads/eqs.execution_count AS NUMERIC(10,1)) AS avg_logical_reads,
  eqs.last_logical_reads,
    (SELECT TOP 1 SUBSTRING(est.text,statement_start_offset / 2+1 , 
    ((CASE WHEN statement_end_offset = -1 
      THEN (LEN(CONVERT(nvarchar(max),est.text)) * 2) 
      ELSE statement_end_offset END)  
      - statement_start_offset) / 2+1))  
    AS sql_statement,
  qp.query_plan
FROM sys.dm_exec_query_stats AS eqs
CROSS APPLY sys.dm_exec_sql_text (eqs.sql_handle) AS est 
JOIN sys.dm_exec_cached_plans cp on 
  eqs.plan_handle = cp.plan_handle
CROSS APPLY sys.dm_exec_query_plan (cp.plan_handle) AS qp
WHERE est.text like '%FROM dbo.FactSales%';

Here’s the results for our query:


sys.dm_exec_query_stats has great info! The difference between the average logical reads and the last logical reads shows us that sometimes this query reads more than others. That’s because the first time we ran it, it only had to scan one partition; the second time, it had to read from two. If it was always scanning the whole table, we’d see the same number of logical reads for the average and the last run.

We can also see that the same execution plan was reused for both queries. Clicking on the cached query plan to open it up, we see something similar… but it doesn’t have all the same info.


The clustered index scan is the same…


But in the properties we can only see that it knows the table is partitioned

The cached execution plan does not contain information on the number of partitions accessed or which ones were accessed. We can only see that in the Actual Execution plan.

TLDR; (Too long, didn’t eliminate partitions)

Here’s a quick rundown of what we did and saw:

  • We partitioned the FactSales table by creating a partition function and partition scheme, then put a unique Clustered Index on the SalesKey and DateKey columns
  • When we ran our query with actual execution plans enabled, we could see how many partitions were accessed and the partition number
  • When we looked at the cached execution plan, we could see that the same execution plan was reused across multiple runs, even though:
    • It was a parameterized stored procedure
    • The query accessed a different number of partitions on each run (one partition on the first run, two partitions on the second run)
  • The cached execution plan did not contain the number of partitions accessed. (Makes sense, given the plan re-use!)
  • We could see the average and last number of logical reads from sys.dm_exec_query_stats, which could give us a clue as to whether partition elimination was occurring

Super simple, right? :)

If you liked this post and you’re ready for something more challenging, head on over to Paul White’s blog and read about a time when partition elimination didn’t work.
