Taking regular backups is a crucial part of an SQL Server database administrator's routine. Backups safeguard the database against data loss. An organizational database typically holds huge amounts of important data that the organization can by no means afford to jeopardize. Hence, no matter how voluminous the database is or how tedious the process might be, an SQL Server database must be thoroughly backed up.
However, the process is not straightforward. As the size of the database grows, so does the time it takes to back it up. You might very well have a backup process stretching over days, putting everything else on hold, and frankly, which organization has time for that? So what's the solution? Some tips and tricks to improve SQL backup performance.
Need for Speed in SQL Backups
A single database a few gigabytes in size can hold millions of rows of data. Extend that to terabytes, and the database may contain a couple of billion rows occupying massive disk space spread across multiple filegroups. Naturally, backing up such mammoth volumes of data takes time. However, for any organization, time is money. Running a backup on an SQL Server database can tie up resources and hold up other tasks on it, which for users can mean a server that is slow or outright inaccessible. And no user would be happy losing even a couple of hours to server inaccessibility, let alone days. So something needs to be done to speed things up.
Techniques that can help speed up SQL Backups
There are several things you can do to bring down the time it takes to back up an entire SQL Server database. Some helpful techniques and tips are listed below:
- The database backup compression feature (introduced as an Enterprise Edition-only feature in SQL Server 2008, and available in Standard Edition from SQL Server 2008 R2 onwards) saves a fair amount of disk space and improves backup performance. It also speeds up the restore process, which can be extremely helpful for reducing downtime in a disaster recovery situation. The catch: higher CPU utilization and use of the maximum available I/O bandwidth.
- Take care not to store the backup on the same drive as the database files. Reading from and writing to a single drive takes significantly longer than using two separate drives. Besides the contention issue, any damage to that one drive could cost you both the database and the backup.
- Since the process needs extra CPU and I/O bandwidth, and you'll want separate drives for storing the backup anyway, using solid-state drives is advisable; they give performance a further boost.
- Using advanced parameters such as BLOCKSIZE, MAXTRANSFERSIZE, and BUFFERCOUNT can also yield substantial gains (covered in step 6 below).
- Instruct the backup job to delete old backup files of the same database and the same type as part of the operation. If you use Red Gate SQL Backup, its ERASEFILES, ERASEFILES_ATSTART, and ERASEFILES_REMOTE options for the BACKUP command achieve this (note these are options of that tool, not of the native T-SQL BACKUP command).
- To identify which files to delete, the headers of all files in the backup folder must be read, and the more files the folder holds, the longer that takes. To reduce the number of headers read each time, store backups in folders organized by database name and backup type, for example by including the <DATABASE> and <TYPE> tags in folder paths.
- Another time-saver is clearing the option to delete backup and restore history older than <n Days | Hours> (found under Tools > Server Options > File Management). This option is usually selected, and it causes the stored procedure msdb..sp_delete_backuphistory to purge history from the msdb database during the backup; when msdb holds a lot of history, that adds to the overall backup time. Clear the option and perform the deletion manually at a more convenient time instead, and you can save quite a bit (a sketch of the manual cleanup follows this list).
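For that last tip, here is a minimal sketch of the manual cleanup, run during a quiet maintenance window. The 90-day cutoff is just an example value; pick whatever retention your organization requires:

USE msdb;
GO
-- Remove backup and restore history older than 90 days (example cutoff)
DECLARE @cutoff datetime = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
GO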
Now that we've taken care of the basics, let's jump right into the steps to speed up SQL backups through compression.
How to speed up SQL backups through compression
Step 1: Database backup compression is disabled by default. To enable it, add COMPRESSION to the WITH clause of your T-SQL backup statement, or use the SQL Server Management Studio (SSMS) GUI, where you can find the Compression option as follows:
Right click on the database to be backed up -> Tasks -> Back Up -> Options -> Compression (at the bottom).
Step 2: Once the "Compress backup" option has been selected, you can generate the corresponding T-SQL statement by clicking the Script option.
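The generated statement will look roughly like the following (the database name and path here are placeholders, and the exact WITH options depend on what you selected in the dialog):

BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase.bak'
WITH NOFORMAT, NOINIT, NAME = N'YourDatabase-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
GO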
Step 3: If you want to enable backup compression as the default option for all databases, execute the following command:
USE master
GO
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE WITH OVERRIDE
GO
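To confirm the setting took effect, you can query sys.configurations:

-- value_in_use should show 1 once compression is the default
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'backup compression default';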
Step 4: If you're using solid-state disks for the backup as suggested earlier, a compressed backup can read well over 1,400 MB/sec from each disk on average (roughly the equivalent of two CD-ROMs' worth of data every second).
Step 5: A little-known feature is the option to specify multiple destination files to improve throughput. You can give backup performance a further boost by striping the backup across several disks, as in the following command:
-- Trace flag 3605 routes trace output to the SQL Server error log;
-- 3213 reports backup/restore tuning details (buffer count, transfer size) there
DBCC TRACEON (3605, -1)
DBCC TRACEON (3213, -1)
BACKUP DATABASE [TPCH_1TB]
TO DISK = <filepathtodisk1>,
   DISK = <filepathtodisk2>,
   DISK = <filepathtodisk3>,
   DISK = <filepathtodisk4>,
   DISK = <filepathtodisk5>
WITH NOFORMAT, INIT, NAME = N'TPCH_1TB-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
GO
DBCC TRACEOFF (3605, -1)
DBCC TRACEOFF (3213, -1)
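To see what compression and striping actually bought you, check the backup history that SQL Server records in msdb (backup_size and compressed_backup_size are built-in columns of the backupset table):

-- Compare raw vs. compressed size and elapsed time for recent backups
SELECT TOP (10)
    database_name,
    backup_start_date,
    DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_sec,
    backup_size / 1048576.0 AS backup_mb,
    compressed_backup_size / 1048576.0 AS compressed_mb
FROM msdb.dbo.backupset
ORDER BY backup_start_date DESC;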
Step 6: As mentioned among the techniques above, setting advanced backup parameters can significantly increase backup throughput.
- BUFFERCOUNT – the number of I/O buffers to use for the backup operation
- MAXTRANSFERSIZE – the largest unit of transfer, in bytes, between SQL Server and the backup media
- BLOCKSIZE – the physical block size, in bytes
Adding these parameters to your query isn't tough. Just add the following snippet after the line starting with SKIP and right before the DBCC TRACEOFF lines:
,BUFFERCOUNT = <count>
,BLOCKSIZE = <size>
,MAXTRANSFERSIZE = <size>
GO
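Put together, a tuned backup command might look like the sketch below. The values shown are illustrative starting points rather than recommendations; valid BLOCKSIZE values are powers of 2 up to 65536, MAXTRANSFERSIZE must be a multiple of 65536 up to 4194304, and you should benchmark against your own hardware:

BACKUP DATABASE [TPCH_1TB]
TO DISK = <filepathtodisk1>
WITH COMPRESSION, INIT, STATS = 10,
BUFFERCOUNT = 64,          -- example value; tune for your hardware
BLOCKSIZE = 65536,         -- 64 KB physical block size
MAXTRANSFERSIZE = 4194304  -- 4 MB per transfer, the maximum allowed
GO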
Note: You can use the special option TO DISK = 'NUL' to estimate how quickly data can be read from a database or filegroup; the backup itself is discarded, so only the read side is measured.
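A minimal example of such a throughput test; COPY_ONLY keeps the test backup from disturbing your regular backup chain:

-- Reads the entire database but writes nothing; useful purely for timing
BACKUP DATABASE [TPCH_1TB]
TO DISK = 'NUL'
WITH COPY_ONLY, STATS = 10
GO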
To sum it up
Why let an SQL backup keep you waiting for hours on end when a few simple steps can reduce the time it takes? Keep compression in mind, and apply these tips where you can for major gains in backup throughput.