Archive | December, 2010

2010 liner notes and my theme for 2011

The Morning News asked the following question recently:

Who would you recognize in your 2010 liner notes?

To all the people in your lives (maybe you know them, maybe you don’t) who deserve shout-outs, a la the album’s notes, the book’s acknowledgments, the piece’s title, the award’s dedication. Who would you include in your 2010 acceptance speech? Maybe your fifth grade teacher, the buddy who got you through a brutal summer, the musician who had you walking tall during a particular slog.

My grandparents

A lot changed in 2010

Sometimes life moves fast, whether you expect it or not. In 2010 I changed jobs, moved house, and started a new relationship. My last grandparent passed away. I remembered that life is finite.

2010 – The liner notes

To My Mamaw:

You not only taught me how to draw, you taught me how to love it, and how to be patient and thoughtful when it doesn’t turn out exactly like I planned. When I create art, I’ll always think of you. You also showed me what it is to love someone more than anything else, and how to live gracefully with loss. I love that about you.

Me with Crys in West Palm Beach

To Crys Manson ( b / t ):

Thanks for convincing me to do things which I fear will make me:
a) exhausted, b) terrified, c) nauseous, or d) all of the above.
You’re a fantastic friend and the BEST DBA co-pilot ever. You make me a better person.
I’ll fly into West Palm Beach with you anytime.

To Organizers of SQL Saturday Nashville, Iowa City, and Washington DC ( b ):

Thanks for your volunteer hours and putting on a great show, and for the speaking opportunities. I’ve loved the chance to travel to places I’ve never been and meet all sorts of smart people. Thanks for helping me learn and improve, and for having a great time doing it.

To Brent Ozar ( b / t ) and Tim Ford ( b / t ):

SQL Clown Car

Thanks for not just throwing an awesome party, but for really making friends. Thanks for being so helpful, not just to me, but to everyone you meet who needs it.
I still owe you guys a ride to Tacoma.

To Richard Kim:

Thanks for believing in me so much, and for understanding me better than I know myself, sometimes.

To Len Cozza and Richard Fried:

Thanks for the honest, thoughtful, sane conversations, in a crazy place and time.

To Dev Nambi ( b / t ):

I love how passionate and good you are at what you do. I’m sure I’ll still think of calling you if I have crazy new performance issues to talk about. Except that’s probably not really appropriate now.

Carl, who wants nothing to do with ramps.

To Kevin Kline ( b / t ):

I’m not sure if you’re aware that I owe you an ice cream cone.

To Jeremiah Peschka ( b / t ):

Good choice asking me to be on your SQL PASS Quiz Bowl team!
Thanks for helping me remember the books I love to read, and for helping me think about things differently sometimes. For getting me to listen to new music, and teaching me to talk in dinosaur. And for creating your own datatype. Let’s go get some tacos and I’ll tell you the rest.

2011: Theme for next year

My theme for next year comes from a blog post by Penelope Trunk from 2007:

Here’s some practical advice: Do not what you love; do what you are. …

Relationships make your life great, not jobs.  But a job can ruin your life – make you feel out of control in terms of your time or your ability to accomplish goals –  but no job will make your life complete. It’s a myth mostly propagated by people who tell you to do what you love. Doing what you love will make you feel fulfilled. But you don’t need to get paid for it.

A conversation about cheesecake and President Clinton

In many ways, I’m lucky, because I love nerding out on SQL Server and talking about it. But I shouldn’t funnel all of my goals or too much of my personal fulfillment through my job. This next year I plan to be a great contributor at work (and my new job is interesting and exciting), but also to separate my hobbies out and make plenty of time for what I love.

I have two main goals:

Goal: Become a better speaker: because I love doing it. I want to develop more presentations and think through creating training materials around RDBMSes, because I enjoy it. I’ll probably learn plenty of things in doing this that make me better at my job, and that’s icing.

Goal: Take time to have fun with people I love outside of work. Schedule it. Stick with it. Do it unless the world is on fire. And if the world’s truly on fire, take the time back within a week. I’m going to learn to be a better partner, to make plenty of time for zombie movies, and to sometimes think nothing at all about SQL.

Why I’m All For 24HOP Showcasing 24 Women Speakers

This is not the HOP you are looking for.

I’ve been thinking about the upcoming 24HOP event planned for March 15 and 16, which will showcase 24 women speakers. Karen Lopez (post | twitter), Jenn Stirrup (post | twitter), and Jen McCown (post | twitter) have all written posts about it. Kalen Delaney (blog | twitter) has left some comments with her views, which I’ve also appreciated.

If you’re not sure what 24HOP is, prepare yourself for an acronym within an acronym: 24HOP = 24 Hours of PASS. PASS= the Professional Association for SQL Server. This is an online event featuring 24 one-hour sessions on all sorts of nerdy relational database topics.

This will be the fourth 24HOP event, I believe– there have been two general-topic English-language events, and one Spanish- and Portuguese-language 24HOP event (LATAM HOP).

Some people think it’s a bad idea to have a woman-only 24HOP. And there are some thoughtful comments on why that might be the case.

However, I think it’s a very smart idea.

Think about this from the perspective of an editor who is putting together a collection of great fiction. You have a lot of choices when it comes to your book– you can choose items by theme, by time period, by genre, or by some factor of the author– their culture, their gender, their age.

A way to make a collection compelling is by a subtle thread. You want a variety of topics, and you want things in your collection to be different and interesting. There are many good ways to build a collection, and one of those ways is by gender.

So if you’re a person who’s put on a few 24HOP events, do you want to put on something that’s very much like the one you did before? With maybe a little bit of new feature content?

Wouldn’t it be way more exciting, and a bit risky, to come up with a challenging idea that creates an interesting and different collection of speakers? It might spur your crowd of presenters to be more creative and invested in the event than a more general speaking engagement would.

Well, I think so, at least. Regardless of the gender issue itself, I think this is a smart theme to build new content for the community.

High five, Tom LaRock.

Do I Want to Present?

I do!

I submitted a session. The title is “No More Bad Dates: Using Temporal Data Wisely”

(I am actually really excited about the topic. Yeah, dates and times— all sorts of weird fun to be had.)

Would I have Submitted a Session if it Wasn’t Women-Only?


Honestly, no. Here’s why: SQL Rally is coming up, and I’ve submitted two sessions for that. SQL Rally and 24HOP are both going to do selections by popular vote.

And popularity contests are, frankly, terrifying. I almost didn’t submit to Rally at all because of the risk of being voted Least Likely to Present. But I talked myself into submitting two sessions because I really love presenting, and I know the SQL community down in Florida is super passionate and interesting.

So if this were a general-topic 24HOP with sessions now selected by public vote, I would not have submitted. I’m concerned that by being up for votes on two bills, I’ll hurt my chances on whichever is voted on second, even though I’ve submitted different abstracts. (People may feel that since they voted for me once, it’s only fair to give their vote on the other event to someone else.) Plus, it’s frankly twice the anxiety. Did I mention that popularity contests are terrifying?

But I do think this 24HOP idea is exciting and a little risky, and I would like to do what I can to try to make it great, if only by submitting. So I decided to submit a session, also.

24Hop: Hat Edition

Even if I’m not selected, I want to see how this 24HOP turns out. Maybe it’ll be remarkably like the ones that came before. Maybe there’ll be just something a little different that you can’t put your finger on. Maybe it’ll be really unique.

So, Should We Have a 24HOP Showcasing 24 Men Speakers?

Well, I actually thought that would be a good idea. Until I realized that we’ve already had 24HOP featuring 23 men speakers. I think that means the theme wouldn’t generate much but confusion. It would be all sorts of “So… what are we supposed to be doing?”

But there are other things to group by. New speakers. New presentations only. Mythbusters 24HOP. Present about your favorite feature. Maybe it’s the whole event, maybe each day has a different theme, but I like the idea of a focus. You could do a 24HOP like those playwriting events where a whole production is created in 24 hours.

However this goes, I hope we continue to have experimentation with 24HOP. I’d really like for it to continue on in an exciting direction where it doesn’t quite do what you expect– otherwise I think it’ll get to be SQL PASS Express Edition, with Auto Close enabled.

Filling in Data Potholes with Recursive CTEs

Data: it can break your foot.

Imagine that you are writing a script that looks at data grouped by the minute. You notice that there are no rows for some minutes, and you’d like to display a value when that is the case, probably showing a count of zero.

In thinking about this problem this week, I spent some time getting to know CTEs (Common Table Expressions) again. And I came to the conclusion that I should spend much more time with them. Maybe I won’t end up using them all the time, but I should be looking at them regularly as options when I’m writing queries.

Here’s the story of a handy way I found to work with this.

Let’s create some data

Our story starts with some data. It’s been lovingly scripted out, but it has a few holes.

CREATE TABLE dbo.MyImperfectData (
ItemDate DATETIME2(0) ,
ItemCount SMALLINT )

INSERT  dbo.MyImperfectData ( ItemDate, ItemCount )
VALUES  ( '2010-12-01 00:00:00', 12 ),
( '2010-12-01 00:01:00', 3 ),
( '2010-12-01 00:02:00', 6 ),
( '2010-12-01 00:03:00', 12 ),
( '2010-12-01 00:04:00', 24 ),
( '2010-12-01 00:05:00', 1 ),
-- Gap where 6 would be
( '2010-12-01 00:07:00', 122 ),
( '2010-12-01 00:08:00', 1 ),
( '2010-12-01 00:09:00', 1244 ),
( '2010-12-01 00:10:00', 23 ),
( '2010-12-01 00:11:00', 12 ),
( '2010-12-01 00:12:00', 24 ),
( '2010-12-01 00:13:00', 27 ),
( '2010-12-01 00:14:00', 28 ),
--Gap where 15, 16, 17 would be
( '2010-12-01 00:18:00', 34 ),
( '2010-12-01 00:19:00', 93 ),
( '2010-12-01 00:20:00', 33 ),
( '2010-12-01 00:21:00', 65 ),
( '2010-12-01 00:22:00', 7 ),
( '2010-12-01 00:23:00', 5 ),
--Gap where 24 would be
( '2010-12-01 00:25:00', 4 ),
( '2010-12-01 00:26:00', 6 ),
( '2010-12-01 00:27:00', 7 ),
( '2010-12-01 00:28:00', 77 ),
( '2010-12-01 00:29:00', 94 )


The data is at the minute level. We’re missing data for five minutes in this period– one three-minute chunk, and two other minutes.

What’s the quickest way to show the missing rows?

At first I thought about querying the data itself to find what’s missing. This made my head hurt a bit, and seemed pretty expensive.

I thought about the fact that many data warehouse databases have calendar tables, where all sorts of information about months, days, years, hours, and minutes are normalized out into tables.

However, I didn’t have those types of tables around. For the scope of my problem I was dealing with short date ranges (and by short, I mean 3 hours), and ideally I would not need to create a bunch of ancillary objects to fill in the gaps.

After some thinking, I realized that we can create a date time table at the minute level on the fly by using a recursive CTE.

Here’s a sample that counts out a few minutes:

WITH MyCTE
AS ( SELECT   CAST('2010-12-01 00:00:00' AS DATETIME2(0)) AS [I can count!]
     UNION ALL
     SELECT   DATEADD(mi, 1, [I can count!])
     FROM     MyCTE
     WHERE    [I can count!] < DATEADD(mi, -1,
              CAST('2010-12-01 00:10:00' AS DATETIME2(0))) )
SELECT  [I can count!]
FROM    MyCTE ;
Our results:

Putting it all together

Taking the format of this CTE, we can change it to create a table with every minute in our time range.

We can then select from it and use a LEFT OUTER JOIN to our table with data, and use the CTE dates to fill in the gaps.

DECLARE @startDate DATETIME2(0) ,
@endDate DATETIME2(0) ;

SELECT  @startdate = MIN(ItemDate), @endDate = MAX(ItemDate)
FROM    dbo.MyImperfectData ;

WITH MyCTE
AS ( SELECT   @startDate AS MyCTEDate
     UNION ALL
     SELECT   DATEADD(mi, 1, MyCTEDate)
     FROM     MyCTE
     WHERE    MyCTEDate < @endDate )
SELECT  MyCTEDate, CASE WHEN ItemCount IS NULL THEN '[Missing Row]'
        END AS ColumnDescription,
        COALESCE(ItemCount, 0) AS ItemCount
FROM    MyCTE
LEFT OUTER JOIN dbo.MyImperfectData ld
        ON MyCTE.MyCTEDate = ld.ItemDate
--Default MAXRECURSION is 100; allow date ranges longer than 100 minutes
OPTION  ( MAXRECURSION 0 ) ;

And there we have it! No gaps:

No gaps allowed.

Use Cases

Check out the comments! In my initial posting, I didn’t say enough about where this is best applied, and how this scales.

I think this is mostly a party trick, but it’s also a nice simple example of recursion that got me thinking about CTEs.  And while there are some situations where it can come in useful, it doesn’t scale up to large date ranges. (Check out Brad Schulz’ post on recursive CTEs here.)

So in other words, this may be helpful in some ad-hoc situations.

However, looking at the “pseudo-recursive” parts of Brad’s post, I really feel a follow-up post or two coming on.
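In the meantime, here’s one sketch of a non-recursive alternative in the “pseudo-recursive” spirit (my own illustration, not Brad’s exact technique): drive the minute list from a ready-made number source instead of recursion.

```sql
-- A non-recursive sketch: master.dbo.spt_values rows of type 'P'
-- supply the integers 0 through 2047, which we turn into minutes.
DECLARE @startDate DATETIME2(0) = '2010-12-01 00:00:00' ,
        @minutes INT = 30 ;

SELECT  DATEADD(mi, v.number, @startDate) AS MyCTEDate
FROM    master.dbo.spt_values AS v
WHERE   v.type = 'P'
        AND v.number < @minutes
ORDER BY MyCTEDate ;
```

For ranges longer than 2048 minutes you’d need a real numbers table or a cross join, but for a 3-hour window this avoids recursion entirely.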

The 9th Day of SQL: Things Aren't as Simple as They Seem

The 12 days of SQL

Brent Ozar (blog | twitter) had an idea: a group of people should blog about writing which they’ve loved this year by people in the SQL community. For each “day of SQL,” someone picks a blog which they thought was great and writes about it.

Yesterday was Day 8, when Karen Lopez talked about a post by Louis Davidson and asked “What is your over/under?”  Karen is a great speaker, an inspiring writer, and just an incredibly interesting person. Check out her post!

On the 9th Day of SQL the engine gave to me: Something a little different than I expected.

Day 9: The Day of Paul White

This day of SQL is not about nine ladies dancing. (Sorry Karen!) Instead, it’s devoted to one New Zealander writing: his name is Paul White (blog | twitter).

First off, let me say that Paul White’s blog, “Page Free Space,” is just plain awesome. When I see Paul’s written a new post I know to allocate some time for it and read it through slowly, and that I should expect to have to think about what I’m reading to understand it.

I swear I can sometimes feel things moving around in my head when I read Paul’s posts. Apply the warning about overhead bins during flight: be careful, contents may shift while you’re reading Paul White’s blog.

So What’s My Favorite Post of the Year?

I picked Paul’s post, The Case of the Missing Shared Locks.

There’s a lot to love about this post. It is a great demonstration that things aren’t as simple as they seem.

Paul starts the post with the question:

If I hold an exclusive lock on a row, can another transaction running at the default read committed isolation level read it?

The answer to that would seem fairly straightforward. But in fact, things are pretty complicated. However, if you go through it slowly and really look at the examples, it can help you understand a lot about locking.

This is good.

Why is it Good that Things Are So Complicated? It’s CONFUSING.

Have you ever said something along these lines? “I’d like to give a presentation sometime, but I don’t have anything to talk about.”

Or, “I’m not sure that I have anything that interesting to fill a whole hour.”

Well, take a look at Paul’s post. He took something small, and he looked very closely at it. He played with it a couple of different ways, and he worked on it to see how it behaved. He stepped through it in a series of short, straightforward steps.

You can do the same thing with many things you’re familiar with. You can take a topic, or a feature, or a method of doing something and distill it into an interesting question. You can then look closely at the question and work with it carefully. Use it as a chance to explore something. You’re probably familiar with it, but by taking the time to write about it or present it, you’ll have the opportunity to get to know it better than you ever thought you could.

Who’s Next?

I’m handing the dreidl off to Crys Manson (blog | twitter) for Day 10.

Crys is a seriously great DBA, a fantastic friend, and she sometimes makes me snort liquid through my nose laughing.

Tag, Crys, you’re it!

How’d We Get Here?

If you want to check out where we’ve been so far, we’ve had:

A Little Present

You don’t need to be Jewish for this to be your favorite holiday song this year. Rock on with the Maccabeats, y’all. (You will need to click the “watch on YouTube” link.)

Select * from dbo.FAIL: invalid metadata in views and user defined functions

This week a question on the Twitter #sqlhelp hash tag reminded me of a detail of SQL Server that I learned the hard way, but forgot to blog about. The question was:

SQLHelp in action

For those without images enabled (you anarchic luddites), that’s “Is there a good reason to have a SQL view that is just select * from a table with no filtering?”

Why Shouldn’t You Do That?

There are things you should know about how SQL Server handles metadata. When you update the schema for an underlying object (such as adding or removing a column in a table), the metadata for referencing objects is not automatically updated.

This can impact you whether or not you are using SELECT *. However, it is far more likely to impact you, and to impact you more widely, when you have a practice of using SELECT * in your object definitions for views, functions, etc.

Once Upon a Time…

There was a team of people working on optimizing their schema. Columns were being added and removed from several tables. The changes were checked in, tested, and handed off to operations for deployment. The operations DBA verified which changes were to replicated tables and validated that changes were approved by owners of subscriber systems. The change was put through pre-production and the change management process appropriately.

Thirty minutes after the change went through to production, users of tools on a downstream system began to complain about large volumes of errors in the tool. It was discovered that many databases on the instance with the replication subscriber had views and table valued functions referencing the article using the “SELECT *” syntax. With the removal of the columns, literally hundreds of these views were returning 4502 errors, even though the users didn’t specifically need the columns which had been removed.

After the issue was identified, the operations team was able to resolve the incident by updating the metadata on the views and functions with the sp_refreshsqlmodule stored procedure.

For future schema changes to replicated tables, a script looping through all views and functions was used. This occasionally proved a bit problematic, as many users had long-running queries using the impacted views and functions.
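A sketch of that kind of loop (my reconstruction, not the actual production script) might look like this:

```sql
-- Refresh metadata for every user view and table valued function.
-- Sketch only: sp_refreshsqlmodule fails on schema-bound modules, so real
-- code needs error handling, and it can be blocked by sessions using these
-- objects.
DECLARE @name NVARCHAR(600) ;

DECLARE refresh_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT  QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM    sys.objects
    WHERE   type IN ('V', 'IF', 'TF')  -- views, inline + multistatement TVFs
            AND is_ms_shipped = 0 ;

OPEN refresh_cur ;
FETCH NEXT FROM refresh_cur INTO @name ;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_refreshsqlmodule @name ;
    FETCH NEXT FROM refresh_cur INTO @name ;
END
CLOSE refresh_cur ;
DEALLOCATE refresh_cur ;
```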

Learn More

To learn more, check out my sample script below, and also these MSDN articles on the stored procedures you use to update metadata:

Note that sp_refreshsqlmodule can also be used for views, so you likely want to just standardize with it.


  • Check your replication subscribers: If you’re changing the schema of replicated objects, you may need to update metadata for referencing objects in your subscriber databases, and in other databases on all instances where objects may be referencing the tables by three-part names.
  • Prepare to be blocked: Your request to update metadata can and will be blocked by running processes using those objects. Make sure to plan for this, and decide whether or not you should kill running processes in order to update the metadata.

Why Many People Don’t Know About This

If you’re used to controlling your data access with stored procedures or ORM tools such as NHibernate, it’s easy to never quite learn this detail, or to forget it.

Stored procedures won’t usually have this issue because they recompile after the schema is changed on the underlying object, which automatically refreshes their metadata.


If you need to reference an object by a different name, create a synonym!

Example Script: views, functions, sprocs, synonyms, and metadata.

Here’s some SQL code to walk through some basic examples of how this all behaves.

This script shows a simple example of how schema updates aren’t reflected in the metadata for views and table valued functions unless the metadata is explicitly refreshed, and it also shows a basic example of a 4502 error.

--Create a base table for testing
create table dbo.LookAtMe (
	i int identity primary key,
	Msg nvarchar(128)
) ;

--Add a row
insert dbo.LookAtMe (Msg) VALUES('Hi, how are you?')

--Create a view looking at the table using select *
create view dbo.vImLookinAtYou AS
	SELECT * FROM dbo.LookAtMe
GO

--Create a table valued function looking at the table with select *
create function dbo.tvfImLookinAtYou ()
RETURNS TABLE
AS RETURN
	( SELECT * FROM dbo.LookAtMe )
GO

--Create a procedure looking at the table with select *
create procedure dbo.prcImLookinAtYou
AS
	SELECT * FROM dbo.LookAtMe
GO

--create a synonym for dbo.LookAtMe
create synonym dbo.synImLookinAtYou
FOR dbo.LookAtMe

--Our view, function, and proc each return two columns, i and Msg
select * from dbo.vImLookinAtYou
select * from dbo.tvfImLookinAtYou()
exec dbo.prcImLookinAtYou
select * from dbo.synImLookinAtYou

--Now, let's add a column
alter table dbo.LookAtMe
add lookTime datetime

--Our view and functions still work
--but they aren't returning the new column!
select * from dbo.vImLookinAtYou
select * from dbo.tvfImLookinAtYou()
--Our procedure, however does return the new column!
exec dbo.prcImLookinAtYou
--Our synonym also lets us see the new column.
select * from dbo.synImLookinAtYou

--Let's refresh the metadata for the view and function
exec sp_refreshview 'dbo.vImLookinAtYou'
exec sp_refreshsqlmodule 'dbo.tvfImLookinAtYou'

--Now we can see the new column in the view and function
select * from dbo.vImLookinAtYou
select * from dbo.tvfImLookinAtYou()

--Now let's remove a column from the table
alter table dbo.LookAtMe
drop column lookTime

--Oh no!
--Our view and function both return error 4502:
--"View or function ... has more column names specified than columns defined."
select * from dbo.vImLookinAtYou
select * from dbo.tvfImLookinAtYou()
--Our procedure is fine, though
exec dbo.prcImLookinAtYou
--Our synonym is also fine. It's just a pointer.
select * from dbo.synImLookinAtYou

--Let's refresh the metadata for the view and function.
exec sp_refreshview 'dbo.vImLookinAtYou'
exec sp_refreshsqlmodule 'dbo.tvfImLookinAtYou'

--Now they work again.
select * from dbo.vImLookinAtYou
select * from dbo.tvfImLookinAtYou()

--Clean up our objects
drop table dbo.LookAtMe
drop view dbo.vImLookinAtYou
drop procedure dbo.prcImLookinAtYou
drop function dbo.tvfImLookinAtYou
drop synonym dbo.synImLookinAtYou

What the Business Wants: FEATURES (TSQL Tuesday #13)

It’s TSQL Tuesday Again…

This month’s #tsql2sDay is hosted by Steve Jones (blog | twitter), and the topic is “What the Business Says is Not What the Business Wants.” Steve asks the question:

What issues have you had in interacting with the business to get your job done?

I thought about this for a long time. Eventually I realized that I wanted to see the pattern in the trees from the last ten years.

What the Business Says: “We Want Features!”

As a DBA working with software developers, I’ve heard many times from the business that they want new features.

They need a new feature to land a big client, to be the first to market with something, to retain a client, to compete with x.

They need a feature to land on Mars, and if we can’t do that reliably, well, we should just do it once. They don’t care what it takes to build the feature; we needed it last week.

And so the wheels squeak, and the features are prioritized with a mysterious calculus involving the business which is never fully revealed.

But although I have seen that the business does not always dictate the priorities directly and fully, I have often seen that the business dictates the timeline for what is prioritized. And that timeline is usually pretty quick.

What the Business Wants: They Actually Do Want Features. They Just Don’t Want to Talk about Sacrificing Scalability/Availability/Recoverability for Speed of Delivery.

The thing is, the business is telling the truth. They do need features to make money– at least in the competitive markets I’ve worked in.

The Scalability Fairy sadly doesn't show up nightly when your application loses a tooth. Or a customer.

However, typically people are so busy stressing when they need the new features that no honest assessment and agreement on quality is reached.

Maybe people think it’s embarrassing to have a frank discussion about what type of SLA and reliability will come with a new feature. Possibly that opens a company to some legal challenges if a different level of service has been provided to the customer. However, this is exactly what the business should truly want: to promise the correct level of reliability to the customer.

Now, not all new features need to be high quality work. Sometimes you’re delivering a shiny toy, and nobody’s going to use it to shave, wear it as a diaper, or drive it to work. But a feature needs to have a clearly defined level of scalability and it needs to be safety-proofed to the appropriate level for that product. It shouldn’t enrage your large customers on a regular basis.

Commonly, this part of the process is overlooked. People don’t want to talk about it. Unfortunately, it leads to delivering a product that may look good at first on the outside, but may not really please the customer until it’s been patched up in the middle of the night a few times, retrofitted on a weekend, and ultimately largely re-written and re-released.

So, What Do you Do?

As an operations person, you bring up the conversation about SLAs repeatedly and make sure all parties come to an agreement. You ask questions about disaster recovery and availability and make sure you have the time and resources to meet those needs.

You make sure there’s time in the release schedule for performance and load testing, and you set your expectations to medium-low until you have strong data that shows how something will perform. Trickiest of all: you somehow get that data prior to release.

You document everything, because people won’t remember later.

But most of all, you establish a good relationship with people in the business by being genuine, open, and honest with them, and not pre-judging them. If your business people trust you, they are likely to ask you a few questions before making many promises.

Introduction to Partitioning: Resources!

This past Saturday I presented my shiny new Introduction to SQL Server Partitioning session at SQL Saturday 61.

There were two separate sessions scheduled for partitioning that day, but the room was still full– thanks everyone for coming to learn about how, why, and when you might use this feature.

Slides, Links and Scripts

My slide deck, Introduction to SQL Server Partitioning, is available for download.

Links to all sorts of resources about partitioning and my simple partitioning demo script are available here:
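Until you grab the full demo script, here’s a minimal sketch of the pattern it covers (my own illustration with made-up object names, not the actual demo): a partition function defines the boundary values, a partition scheme maps the partitions to filegroups, and the table is created on the scheme.

```sql
-- Hypothetical names throughout; monthly partitions on a date column.
-- RANGE RIGHT puts each boundary value into the partition to its right.
CREATE PARTITION FUNCTION pfMonthly (DATETIME2(0))
AS RANGE RIGHT FOR VALUES ('2010-11-01', '2010-12-01', '2011-01-01') ;

-- The scheme maps the four resulting partitions to filegroups.
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]) ;

-- The table is created on the scheme, partitioned by SaleDate.
CREATE TABLE dbo.PartitionedSales (
    SaleDate DATETIME2(0) NOT NULL ,
    Amount MONEY NOT NULL
) ON psMonthly (SaleDate) ;
```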

Whose Doll is That?

My partitioning slide deck features a lovely assistant who is sawed into partitions, and also offers pieces of advice.

This little beauty was found on Flickr and is the fine work of Lara604.

Cheerfully Demonstrating Partitioning of the Head

Review: A day of doing many things at once with @AdamMachanic

Parallel Puppy Operations: A day of doing many things

At SQLPass this year I was fortunate to attend “A day of doing many things at once: Multitasking, Parallelism, and Process distribution” by Adam Machanic (blog | twitter). This was a day-long post-conference seminar.

So, how was it?

This was a fantastic seminar. There was a really good flow to the talk, which started with CPU background and architecture, then moved through Windows internals, SQL Server internals, and on to specifics of parallelism in queries. Then we finally moved on to administration topics, as well as different methods of process distribution. A full outline of the day is here.

I think the presentation worked very well because of the balance of theory and practice. Essentially, there was a very good ratio between ‘what’, ‘why’, and ‘how’.

I’ll look back at the outline for this seminar when designing longer presentations myself.

Did I learn anything useful?

Yes! The information on plan shapes and tricks to manipulate them was incredibly interesting, and is something I know will be useful. I also learned some interesting specifics about how the DAC works, and have a much more holistic view of how SQL Server uses processors and parallelism. Check out my tweets below for a little more insight into what my day was like!

Free webcasts. Yep, free.

Adam has some webcasts on parallelism available for download which you can watch for free.

My tweetstream from the session…

Here’s what my day was like, according to Twitter.

  • Postcon fun with @AdamMachanic today! #sqlpass Processes do not run, *threads* do.
  • Quick discussion of fiber mode for SQL Server: very limiting
  • Thread starvation: pre-emption by high priority threads can prevent some threads from ever running.
  • Threads running on a client OS get a smaller amount of quantum units than on a server OS (more frequent interrupt frequency)
  • Three types of processor affinity: none, hard affinity, and ideal affinity
  • Lots of love for sysinternals and the CPU-Z tool (w/ props to @BrentO for recommending)
  • Interrupt counts include not just when a quantum expires, but also when a thread finishes.
  • Lots of cool WMI queries being run from inside SSMS
  • Mine is still getting even better 🙂 RT @whimsql: Amen Tom! RT @SQLRockstar Best. Summit. Ever.
  • Meeting the SQLOS! It’s a “cooperative” scheduling system: everyone’s equal
  • SQLOS provides an abstraction layer so storage engine, qp, etc can all talk to it instead of directly to the OS
  • Proc affinity at the sql server level may be worth testing w/ multi instances. With virtualization taking predominance, it is less common.
  • Differences between resource waits and signal waits being explained
  • 484 Wait types in SQL Server 2008– plug for #sqlhelp hash tag for those with limited documentation.
  • I totally just got called on in a “what feature uses a hidden scheduler” Pop Quiz. #FAIL
  • @PaulRandal yep, we were all “so THAAAAAAAT’S how that works.”
  • Don’t think of operators in QPs as being parallelized. Think more of each set of rows as being parallelized.
  • Very few iterators are actually parallel-aware. Most do not need to be, even if being used by parallel streams.
  • OH: “I trust myself, but I don’t know if you should.” <– always an appropriate comment when referring to production environment
  • And now we return to our discussion of the “Big O” and the Query Processor.
  • We just covered tempdb spills and @crysmanson ‘s old enemy, the resource_semaphore wait type.
  • Few outer rows demo showing repartitioning scheme and rows redistributed on threads– very cool
  • Verrrrrrry interesting stuff with CROSS APPLY and parallelism
  • Cost threshold for parallelism default is still what it was originally set to in 7.0; for many contemporary systems it may be too low.
  • And that makes me happy to hear since we do raise the default cost threshold for parallelism on our prod servers 🙂
  • @AdamMachanic just actually turned it up to 11.
  • If you hit THREADPOOL waits, don’t just up the max worker threads permanently, find the root cause for the situation.
  • Finishing up with a monitoring parallelism section — really nice flow to the talk today!
  • Piles o’ DMV fun, including the reason sys.dm_exec_requests has some funkiness: it shows wait state only for the root task
  • @AdamMachanic is demoing how sp_whoisactive will display your wait types, find your tempdb contention, and wash your dishes.
  • Demo of manipulating memory grants to cause a query to spill to tempdb purposefully… we’re not in kansas anymore.
  • @TheSQLGuru I’ve enjoyed it a ton– great combo of really interesting demos and information.
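On that cost threshold tweet: the setting lives in sp_configure. A quick sketch of raising it (the value 50 is illustrative, not a recommendation from the seminar):

```sql
-- 'cost threshold for parallelism' is an advanced option; expose it first.
EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;

-- The default is 5. Raising it keeps cheap queries serial; 50 is illustrative.
EXEC sp_configure 'cost threshold for parallelism', 50 ;
RECONFIGURE ;
```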