I don’t think it’s gotten enough press (or at least, not enough of the right kind of press), but my favorite feature in SQL Server 2005 is row versioning – specifically, how it enables the new “snapshot” isolation level (it also enables triggers, online indexing, and multiple active result sets, but those are different blog posts). It’s the kind of feature whose value may not jump out as immediately obvious. The good news is, it doesn’t really need to.
Have you ever read any of Oracle’s material on their MultiVersioning feature? It’s the idea that (through what IIRC they currently call undo logs, as opposed to the redo logs used to replay transactions during a DB restore) other transactions can keep writing to the rows of your tables even while your transaction needs a consistent view of the entire database. Beyond the non-blocking writes, it also enables some side features like “Flashback Query” – which is a cute name for nothing more than specifying your query’s SCN manually instead of letting it default to whatever the system SCN was at the start of your transaction.
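For the curious, a flashback query is just an ordinary SELECT with an AS OF clause. This is a sketch – the table and the SCN value here are made up for illustration:

```sql
-- Ordinary query: sees data as of your transaction's current snapshot
SELECT * FROM employees WHERE employee_id = 100;

-- Flashback query: same data, but as of a specific earlier point in time
-- (the SCN value is hypothetical; AS OF TIMESTAMP is also accepted)
SELECT * FROM employees AS OF SCN 1234567 WHERE employee_id = 100;
```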
In my decade+ of Oracle experience, it’s by far the most touted feature of the Oracle DBMS (second in my experience would be RMAN, as much as it makes me cringe). How does this magic work? It’s good to learn, because it’s incredibly similar between Oracle and SQL Server. First, go read Tom’s article that gives a good overview:
Everyone back? Great. So, now you have at least some understanding of what Oracle does – it lets writers happily keep updating/inserting/deleting rows while readers in other transactions are still processing. It does this by storing a copy of the old row in the undo buffers. When some other transaction needs a given row that has since been updated, Oracle doesn’t get the row contents from the current data for that table – instead, it goes to the undo buffer and gets the older copy of the row from there.
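A sketch of what that looks like from two sessions (the accounts table and values are hypothetical; session B runs at Oracle’s SERIALIZABLE level so its whole transaction keeps one consistent view):

```sql
-- Session B starts a transaction that needs a consistent view:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;   -- returns 100

-- Meanwhile, Session A updates and commits, without being blocked:
UPDATE accounts SET balance = 50 WHERE id = 1;
COMMIT;

-- Back in Session B, the same query still returns 100: Oracle rebuilds
-- the old row image from the undo buffers instead of blocking the read
-- or showing the newer committed value.
SELECT balance FROM accounts WHERE id = 1;   -- still returns 100
```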
As you’ve probably noticed, this means that writing to a row actually costs roughly twice the writes you’d normally think of – first, the old contents of the row have to be copied to the undo area (the new values go to the data block and the redo log), and then the actual row in the table gets updated.
SQL Server 2005 does the same thing, although the mechanics are slightly different (SQL Server writes the row versions to a version store in the existing tempdb instead of having special undo buffers for this data). That’s not really interesting to the developer IMHO, since the result is the same: readers don’t block writers and writers don’t block readers.
To be technically correct, though, the snapshot isolation level is really all about the “writers don’t block readers” part of that phrase. You don’t need the snapshot isolation level for “readers don’t block writers” – for that, all you need is to turn on the READ_COMMITTED_SNAPSHOT option on the database. Once you do that, your writes to tables do the “undo buffer” thing (copies go into tempdb), and read operations can be satisfied from those tempdb entries whenever the read needs a version of a row older than the current one. Without that option, selects in the default isolation level of Read Committed take shared locks on the rows (and unfortunately SQL Server may escalate those locks to the page or table level – Oracle’s locking never needs that kind of escalation), which is why by default you have “readers blocking writers”: the readers are all taking shared locks on the rows, so writers have to wait to get their exclusive lock before they can modify them.
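Turning that option on is a one-liner (the database name here is a placeholder, and the ALTER needs the database free of other active connections to take effect):

```sql
-- Enable statement-level row versioning for the default Read Committed level
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- After this, plain SELECTs under Read Committed read row versions from
-- tempdb instead of taking shared locks, so they no longer block writers.
```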
That’s what actually ends up being the oddest thing to me – you can just turn on READ_COMMITTED_SNAPSHOT and you’re already taking the perf hit of all those writes to tempdb. That hit is required, since the goal is to keep the selects from taking shared locks (and therefore from blocking writers), so we need the tempdb “undo buffers” to satisfy the selects. Now, admittedly, you don’t have to keep the entries around as long as you do under the snapshot isolation level (which is effectively serializable in the sense of allowing no dirty, phantom, or nonrepeatable reads), but you still have to write them. Why does the snapshot isolation level mean keeping the rows around longer? Because it provides read consistency at the transaction level instead of just the statement level.
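That statement-versus-transaction difference is easiest to see side by side. A sketch, again using a hypothetical accounts table, where another session updates the row to 50 and commits at the marked point:

```sql
-- Under READ_COMMITTED_SNAPSHOT (statement-level consistency):
BEGIN TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- say this returns 100
-- ...another session updates the row to 50 and commits here...
SELECT balance FROM accounts WHERE id = 1;   -- returns 50: each statement
COMMIT;                                      -- gets its own fresh snapshot

-- Under SNAPSHOT isolation (transaction-level consistency):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- returns 100
-- ...another session updates the row to 50 and commits here...
SELECT balance FROM accounts WHERE id = 1;   -- still returns 100: the version
COMMIT;                                      -- store has to keep the old row
                                             -- until this transaction ends
```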
Turning on snapshot isolation is a much bigger PITA. First, you have to do an ALTER DATABASE (much like for READ_COMMITTED_SNAPSHOT) to turn on the option that enables it – ALLOW_SNAPSHOT_ISOLATION. Then, you have to explicitly change the isolation level. If you’re using ADO.NET for your transactions, you can just modify your BeginTransaction call to pass IsolationLevel.Snapshot – however, if you’re not doing transactions in ADO.NET space (for instance, all your BEGIN/ROLLBACK/COMMITs live in stored procedures), then you’ll have to either add “set transaction isolation level snapshot” to all your sprocs, run it on your SqlConnection right after opening it, or find some other option.
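In T-SQL, the two steps look like this (the database name is a placeholder):

```sql
-- Step 1: one-time, per database
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Step 2: per session or transaction -- e.g. at the top of each stored
-- procedure, or run against the connection right after opening it
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```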
Note that at the end of all this, SQL Server is actually being more flexible than Oracle – Oracle doesn’t give you a choice about whether to use MultiVersioning. You’re using it, like it or not: your writes *will* go to the undo buffers. SQL Server gives you the option, which is kind of nice. Maybe snapshot’s the right fit for your app (I’ve actually seen it boost performance over Read Committed in a real-world app!), but maybe you need to avoid the tempdb hit and decide on serializable instead.