Partition Switching #TSQL2SDAY #77


The ALTER TABLE ... SWITCH command allows you to almost instantly move large amounts of data from one table to another. It does this by not physically moving the data but by updating the metadata that points to it. I have found many uses for this statement in the past, so I thought it would make a good choice for a post in this month's T-SQL Tuesday. The T-SQL Tuesday #77 topic is “My Favourite SQL Server Feature”, hosted by Jens Vestergaard (t|b).

In my opinion, Books Online makes the SWITCH statement sound like it only works with partitioned tables, which are an Enterprise Edition-only feature:

Switches a block of data in one of the following ways:

  • Reassigns all data of a table as a partition to an already-existing partitioned table.
  • Switches a partition from one partitioned table to another.
  • Reassigns all data in one partition of a partitioned table to an existing non-partitioned table.

The good news is that the SWITCH command works on regular tables and in any edition. This means I can quickly transfer all of the data from one table to another in Standard Edition.
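As a minimal sketch of the table-to-table case (the table and column names here are my own invention), it looks like this:

```sql
-- Both tables must have identical schemas and constraints,
-- and must live on the same filegroup.
CREATE TABLE dbo.Sales      (SaleID int NOT NULL, Amount money NOT NULL);
CREATE TABLE dbo.SalesStage (SaleID int NOT NULL, Amount money NOT NULL);

INSERT INTO dbo.Sales VALUES (1, 10.00), (2, 25.50);

-- A metadata-only operation: all rows in dbo.Sales now belong to
-- dbo.SalesStage, and dbo.Sales is left empty.
ALTER TABLE dbo.Sales SWITCH TO dbo.SalesStage;
```

However many rows dbo.Sales holds, the switch takes roughly the same (tiny) amount of time, because no data pages move.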

In reality, I have found few uses for the regular-table-to-regular-table switch; the trick of adding the IDENTITY property to a column with existing data is the most recent. SWITCH is most useful when partitioned tables are involved. Sorry, Standard Edition users.
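That IDENTITY trick can be sketched as follows (the table names are hypothetical). Because SWITCH only changes metadata, the existing rows end up under a table definition that includes the IDENTITY property:

```sql
-- dbo.Orders holds data, but OrderID lacks the IDENTITY property.
CREATE TABLE dbo.Orders_New
(
    OrderID   int IDENTITY(1,1) NOT NULL,  -- same type, IDENTITY added
    OrderDate datetime          NOT NULL
);

-- Metadata-only: the existing rows now live under the new definition.
ALTER TABLE dbo.Orders SWITCH TO dbo.Orders_New;

-- Correct the seed so new inserts continue after the highest OrderID,
-- then swap the tables over.
DBCC CHECKIDENT ('dbo.Orders_New', RESEED);
DROP TABLE dbo.Orders;
EXEC sp_rename 'dbo.Orders_New', 'Orders';
```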

Probably the most famous use case is in a sliding-window solution. Once the oldest data in a partitioned table has met the retention period set by the business, it's time to remove it from the live system. The standard pattern is to SWITCH out the oldest partition to an archive table. Once the data is in the archive table you have a few options:

  • You could keep this data and SWITCH out to a different archive table each month
  • You could slowly load the data from the temporary archive table to a larger archive table on slower disks each month
  • You could export the archived data to a different database or even flat files
  • If you don’t need to keep the old data, then the archive table can be truncated. This is a much better approach than running a DELETE on the live table, as it won’t put pressure on your transaction log*.

* SQL Server 2016 has an improved TRUNCATE command with an option to specify the partition. Take a look at example B here.
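A hedged sketch of the switch-out step (the partition number and table names are assumptions):

```sql
-- Switch the oldest partition (here, partition 1) into an empty archive
-- table that matches the live table's schema, indexes and filegroup.
ALTER TABLE dbo.SalesFact SWITCH PARTITION 1 TO dbo.SalesFact_Archive;

-- If the data is no longer needed, truncating the archive table avoids
-- the transaction log pressure of a DELETE against the live table.
TRUNCATE TABLE dbo.SalesFact_Archive;

-- On SQL Server 2016, the partition can be truncated in place instead:
TRUNCATE TABLE dbo.SalesFact WITH (PARTITIONS (1));
```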

SWITCH can also help with data loading. Running a BULK INSERT ... WITH TABLOCK can dramatically improve the time taken to load data into the database. If you are able to run with the SIMPLE or BULK_LOGGED recovery model, you can benefit from minimal logging. This article by Robert Sheldon covers the topic of bulk loading data in much more detail. The TABLOCK option makes the table unavailable until the import has completed; not great for production environments. A way to avoid this locking is to bulk load data into a staging table (it doesn’t matter if this gets locked) and then SWITCH the data into an empty partition in the live table.
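That pattern might look something like this (the file path, boundary values and object names are all hypothetical):

```sql
-- Minimally logged bulk load; a table lock on the standalone staging
-- table doesn't affect readers of the live table.
BULK INSERT dbo.SalesFact_Stage
FROM 'C:\loads\sales_201604.dat'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- SWITCH requires a trusted check constraint proving the staged rows
-- fit within the boundaries of the target partition.
ALTER TABLE dbo.SalesFact_Stage WITH CHECK
ADD CONSTRAINT CK_SalesFact_Stage_April
CHECK (SaleDate IS NOT NULL
       AND SaleDate >= '20160401' AND SaleDate < '20160501');

-- Metadata-only move into the (empty) target partition of the live table.
ALTER TABLE dbo.SalesFact_Stage SWITCH TO dbo.SalesFact PARTITION 5;
```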

But it’s never that simple, and there are many conditions that your schema must comply with before the SWITCH statement is possible. Here are the basics:

  • Both tables must exist before the SWITCH operation
  • The receiving partition must exist and it must be empty
  • The receiving non-partitioned table must exist and it must be empty
  • Partitions must be on the same column
  • Source and target tables must share the same filegroup

This TechNet article lists further conditions but misses the requirement for compression details to be the same.

There is also the concurrency issue. The following is taken from this Books Online page describing the ALTER TABLE statement.

To execute the DDL for SWITCH or online index rebuild, all active blocking transactions running on a particular table must be completed. When executing, the SWITCH or rebuild operation prevents new transactions from starting and might significantly affect the workload throughput and temporarily delay access to the underlying table.

While the SWITCH process is quick, it does acquire a schema modification lock on both the source and destination tables. This means it will:

  • Request the lock and wait for existing transactions to finish (new transactions will be blocked)
  • Perform the metadata change to switch the data
  • Release the lock on both tables
  • Allow the blocked transactions to start
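If blocking new transactions is a concern, SQL Server 2014 and later can queue the switch at low priority instead. A sketch (the names and durations are assumptions):

```sql
-- Wait up to 1 minute for the schema-modification lock without blocking
-- new transactions; if still waiting after that, abort the SWITCH itself
-- rather than the blockers.
ALTER TABLE dbo.SalesFact
SWITCH PARTITION 1 TO dbo.SalesFact_Archive
WITH (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF));
```

ABORT_AFTER_WAIT also accepts BLOCKERS, which kills the blocking transactions instead, so it should be chosen carefully.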

There are some rules to bear in mind, but the SWITCH option is one of those very useful pieces of functionality to be aware of when building solutions for large amounts of data.

About the author

James Anderson

I'm a freelance SQL Server DBA in Southampton, UK. Performance tuning code and improving development processes are what I love to do for work. Outside of work, I live to travel to as many places as possible with my wife Sarah.

5 thoughts on “Partition Switching #TSQL2SDAY #77”

  1. Excellent

    I did all of this previously

    1) I used a staging table created from the metadata of the main table; this was loaded as a heap. Then we created a matching primary key (aligned index) with another function, and this was switched in, with a TABLOCK, to a new pre-created partition. It turned out to be 3 times as fast as loading straight into the table.
    2) Created partitions weekly using dynamic SQL
    3) Switched out to an archive table using dynamic SQL (this used a TABLOCKX hint to get exclusive access to the archive table)
    4) Deleted older partitions after this
    5) Deleted older archive partitions using dynamic SQL, creating a new switch-out table in the same way as the switch-in table
    Dan Guzman published some of the creation and metadata-copy stored procs on an MS blog; it worked really well.
    Also created a conversion to partitioned tables driven by a parameter table using various functions; currently 25 tables are being used. Some of the parameters are retention days, archive days, partition bit and switch-in bit.
    The metadata stored procs are: create switch table, create clustered index, create NC index and create defaults.

    I banned triggers as these seemed not to be needed; foreign keys were also a bit problematical, so I did not test them.

    This was done on 2012 Enterprise but would be better on 2016, as you have mentioned.

    1. Wow, thanks for the walkthrough, Mark. The new partition truncate in 2016 is a nice addition, but I think I’ll still switch out so that I can triple-check the data before removing it.

    2. Would you mind posting the link to the blog you mentioned by Dan Guzman? I searched for it and, aside from some other excellent articles, I could not find the one you mentioned.

