I’m currently working on a project where we need to improve the performance of data retrieval from a SQL Server database. My team and I were advised to partition the main data table across filegroups, with matching indexes, to speed up queries that retrieve a subset of its data.
I started on a script to create the secondary database files and filegroups, then add the partition function and partition scheme. Finally, I updated the database script to create the clustered primary key on the new partition scheme.
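Here’s a rough sketch of what that setup script looks like. The database, filegroup, file, table, and column names are all hypothetical, and I’m assuming a date-based partition function, so adjust to your own schema:

```sql
-- Minimal sketch of the setup script. SalesDb, ArchiveFG, dbo.Orders,
-- and the date-based partition function are all hypothetical names.

-- Add a secondary filegroup and a data file that lives in it.
ALTER DATABASE SalesDb ADD FILEGROUP ArchiveFG;
ALTER DATABASE SalesDb ADD FILE
(
    NAME = SalesDb_Archive,
    FILENAME = 'D:\Data\SalesDb_Archive.ndf',
    SIZE = 1GB,
    FILEGROWTH = 512MB
) TO FILEGROUP ArchiveFG;

-- Partition function: with RANGE RIGHT, rows before 2015-01-01 fall
-- into the first partition, everything else into the second.
CREATE PARTITION FUNCTION pfOrderDate (datetime)
AS RANGE RIGHT FOR VALUES ('2015-01-01');

-- Partition scheme: map the first partition to the new filegroup
-- and the second to PRIMARY.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate TO (ArchiveFG, [PRIMARY]);

-- New tables can then create their clustered primary key directly on
-- the partition scheme; the partitioning column must be part of the key.
CREATE TABLE dbo.Orders
(
    OrderId   int      NOT NULL,
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId, OrderDate)
        ON psOrderDate (OrderDate)
);
```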
The one concern we had was that a couple of our servers in the test environment were already loaded with data, in one case almost half a billion rows. Since it had taken a significant amount of time to generate and insert that data, we wanted to update these databases in place rather than rebuild them from scratch.
After a lot of online reading and some trial and error, I learned that I could drop the original primary key from the table and then add the new clustered primary key on the partition scheme. Because the clustered index physically stores the table's rows, rebuilding it this way migrated the existing data from the primary filegroup to the secondary filegroup specified in the partition scheme.
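For the tables that already had data, the migration boiled down to something like this (same hypothetical names as the sketch above):

```sql
-- Rebuild an existing table onto the partition scheme.

-- Drop the original clustered primary key. The table becomes a heap,
-- and its rows stay on the PRIMARY filegroup for the moment.
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;

-- Re-create the primary key as a clustered index on the partition scheme.
-- Rebuilding the clustered index physically moves every row into the
-- filegroup mapped by psOrderDate, so expect this to take a while on
-- tables with hundreds of millions of rows.
ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId, OrderDate)
    ON psOrderDate (OrderDate);
```

One thing to watch for: if other tables reference the key through foreign keys, those constraints have to be dropped before the primary key can be removed and re-created afterwards.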