    Is your SQL Server backup running slow? Here’s how you can speed it up

      Taking regular backups is a crucial part of an SQL Server database administrator’s routine job. Backups safeguard the SQL Server database from data loss. An organizational database usually contains huge amounts of important data that the organization can by no means afford to jeopardize. Hence, no matter how voluminous the database is and how tedious the process might be, an SQL Server database must be thoroughly backed up.

      However, the process is not very straightforward. As the database grows in size, so does the time needed to back it up. You might very well have a backup process stretching over days. That could put everything else on hold, and frankly, which organization has time for that? So what’s the solution? Some cool tips and tricks to improve SQL backup performance.

      Need for Speed in SQL Backups

      A single database of a few gigabytes can hold millions of rows of data. Extend that size to terabytes, and the database will contain billions of rows occupying massive disk space, spread across multiple filegroups. Naturally, backing up such mammoth volumes of data takes time. However, for any organization, time is money. Backing up an SQL Server database could mean holding up other tasks on it, which means the server becomes inaccessible to users. And no user would be happy losing even a couple of hours to server inaccessibility, let alone days. So something needs to be done to speed things up.

      Techniques that can help speed up SQL Backups

      There are a lot of things you can do to bring down the time it takes to back up an entire SQL Server database. Some helpful techniques and tips are mentioned below:

      • The database backup compression feature (introduced as an Enterprise Edition-only feature in SQL Server 2008) saves a considerable amount of disk space and improves backup performance. It also speeds up the restore process, which can be extremely helpful for reducing downtime in a disaster recovery situation. The catch is higher CPU utilization and heavier use of the available I/O bandwidth.
      • Take care not to store the backup on the same drive as the database files. Reading from and writing to a single drive takes significantly longer than using two separate drives. Besides the contention issue, any damage to that sole drive could mean losing both the database and the backup.
      • Since the process needs extra CPU and I/O bandwidth, and you’ll preferably use separate drives for storing the backup, Solid State Drives are advisable. They give performance a further boost.
      • Using advanced parameters like BLOCKSIZE, MAXTRANSFERSIZE, and BUFFERCOUNT can also bring massive gains (more on these below).
      • Instruct the BACKUP command to delete old backup files of the same database and of the same type as part of the operation. Using the ERASEFILES, ERASEFILES_ATSTART, or ERASEFILES_REMOTE options with the BACKUP command achieves this (note that these options come from third-party backup tools such as Redgate SQL Backup, not from native T-SQL).
      • To identify the files to delete, the headers of all the files in the backup folder are read. The more files stored in the folder, the longer it takes to read all the headers. Hence, to reduce the number of file headers read each time, store backups in folders organized by database name and backup type. Including the <DATABASE> and <TYPE> tags in folder paths achieves this.
      • Another helpful time-saver is clearing the option to delete backup and restore history older than <n Days | Hours> (available from Tools > Server Options > File Management). When this option is selected, the stored procedure msdb..sp_delete_backuphistory deletes history from the msdb database as part of the backup; if msdb holds a lot of history, this adds to the overall backup time. Clear the option and perform the deletion manually at a more convenient time (a sketch follows this list), and you can save quite a bit.
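
      If you choose manual cleanup, here is a minimal sketch of pruning old history yourself; the 30-day cutoff is an illustrative value, not a recommendation:

      USE msdb
      GO
      -- Delete backup and restore history older than 30 days (example cutoff)
      DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());
      EXEC dbo.sp_delete_backuphistory @oldest_date = @cutoff;
      GO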

      Now that we’ve covered the basics, let’s jump straight into the steps to speed up SQL backups through compression.

      How to speed up SQL backups through compression

      Step 1: The option to compress database backups is disabled by default. To enable it for a single backup, add the COMPRESSION keyword to the T-SQL backup query, or use the SQL Server Management Studio (SSMS) GUI. In the GUI, you can find the Compression feature as follows:

      Right-click the database to be backed up -> Tasks -> Back Up -> Options -> Compression (at the bottom).

      Step 2: Once the “Compress backup” option has been selected, you can generate the T-SQL statement by clicking the Script option.
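
      The generated statement will look roughly like the sketch below; the database name and backup path are placeholders, and the exact options depend on the choices made in the dialog:

      BACKUP DATABASE [YourDatabase]
      TO DISK = N'D:\Backups\YourDatabase.bak'
      WITH NOFORMAT, NOINIT, NAME = N'YourDatabase-Full Database Backup',
      SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
      GO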

      Step 3: If you want to enable backup compression as the default option for all databases, execute the following command:

      USE master
      GO
      EXEC sp_configure 'backup compression default', 1;
      RECONFIGURE WITH OVERRIDE
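
      To confirm that the setting took effect, you can query sys.configurations; value_in_use should be 1:

      SELECT name, value_in_use
      FROM sys.configurations
      WHERE name = 'backup compression default';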

      Step 4: If you’re using Solid State Disks for the backup as suggested earlier, the backup can read more than 1400 MB/sec on average from each disk (roughly equivalent to reading two CD-ROMs full of data every second).

      Step 5: A feature not talked about much is the option to specify multiple destination files to improve throughput. You can give backup performance a further boost by striping the backup across several files, as in the following command:

      DBCC TRACEON (3605, -1)
      DBCC TRACEON (3213, -1)

      BACKUP DATABASE [TPCH_1TB]
      TO DISK = '<filepathtodisk1>',
         DISK = '<filepathtodisk2>',
         DISK = '<filepathtodisk3>',
         DISK = '<filepathtodisk4>',
         DISK = '<filepathtodisk5>'
      WITH NOFORMAT, INIT, NAME = N'TPCH_1TB-Full Database Backup',
      SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10

      DBCC TRACEOFF (3605, -1)
      DBCC TRACEOFF (3213, -1)
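
      Trace flags 3605 and 3213 route backup diagnostics (including the buffer count and transfer size actually used) to the SQL Server error log. To review them after the backup completes, you can search the current error log; a minimal sketch using the undocumented but widely used sp_readerrorlog procedure:

      -- Search the current SQL Server error log (log 0) for backup-related entries
      EXEC sp_readerrorlog 0, 1, 'backup';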

      Step 6: As mentioned in the techniques above, setting advanced backup parameters can significantly increase backup throughput.

      • BUFFERCOUNT – the total number of I/O buffers to be used for the backup operation
      • MAXTRANSFERSIZE – the largest unit of transfer, in bytes, to be used between SQL Server and the backup media
      • BLOCKSIZE – the physical block size, in bytes

      Adding these parameters to your query isn’t tough. Just add the following snippet after the line starting with SKIP and right before the DBCC TRACEOFF lines:

      ,BUFFERCOUNT = <count>
      ,BLOCKSIZE = <size>
      ,MAXTRANSFERSIZE = <size in bytes>
      GO
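
      For illustration, here is the same snippet with example values; these are not tuned recommendations, and the right numbers depend on your hardware. Note that BLOCKSIZE must be a power of 2 up to 65536, and MAXTRANSFERSIZE a multiple of 65536 up to 4194304:

      ,BUFFERCOUNT = 50
      ,BLOCKSIZE = 65536
      ,MAXTRANSFERSIZE = 4194304
      GO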

      Note: You can use the special destination TO DISK = 'NUL' to estimate how quickly data can be read from a database or filegroup; the backup output is discarded, so no file is written.
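
      A minimal sketch of such a read-speed test follows; the database name is a placeholder, and although nothing is written to disk, the database is still read in full:

      BACKUP DATABASE [YourDatabase]
      TO DISK = 'NUL'
      WITH COMPRESSION, STATS = 10
      GO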

      To sum it up

      Why let an SQL backup make you wait for hours on end when you can cut the time it takes with a few simple steps? Keep compression in mind, and use our other helpful tips where you can, to gain major backup throughput perks.

      About The Author

      Bharat Bhushan

      Bharat Bhushan is an experienced technical marketer working at Stellar Data Recovery, experts in data care. He is skilled in Microsoft Exchange Database and MSSQL Database troubleshooting and data warehousing. He is a management postgraduate with a strong grip on technology, certified in SAP-SD, Oracle 10g, and Informatica PowerCenter 9.1.

      24 comments

        1. Hello James,

          By using two Dynamic Management Views (DMVs), you can easily identify missing indexes:

          1. sys.dm_db_missing_index_details
          2. sys.dm_db_missing_index_columns
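
          For example, a minimal sketch that lists the suggested key columns per table (note that this DMV is server-wide, so filter the results by database if needed):

          SELECT [statement] AS table_name,
                 equality_columns,
                 inequality_columns,
                 included_columns
          FROM sys.dm_db_missing_index_details;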

          Good Luck!

      1. Nicely explored topic. A full backup takes more time and space than a differential backup, and a full backup is not always required; I have often created differential backups instead. So always choose the backup type according to your requirements.

        1. Hello Edward,

          Generally, the FILESTREAM feature should be used for storing BLOB data (when the file size is more than 1 GB). Backing up SQL database files with a large number of FILESTREAM files is always slower than backing up the SQL database files alone.

          I hope this helps!

          1. Users should also consider planning for Solid State Disks. They help improve SQL Server backup performance.

      2. Yes, backup performance is a big issue, and you described it in an easy manner. Thanks for sharing this information.

      3. I tried some of these techniques to speed up SQL backups and saw the difference. Thanks for this, and I hope you will write more blogs with this kind of information.

        1. Hi Kelvin,

          You can set the database backup compression option as the default. To do this, just run the following command:
          USE master
          GO
          EXEC sp_configure 'backup compression default', 1;
          RECONFIGURE WITH OVERRIDE

          I hope this helps!

        1. Hello Jim,

          To compress a large amount of data, your server will use extra CPU cycles and the maximum available bandwidth. This will not affect your CPU performance in the long run, because usage returns to normal once the backup completes.

      4. This post can really simplify a DBA’s daily life. It is not just a “nice to read” post; it is a “must read” if you are concerned about your SQL Server backups.

      5. The blog is nicely written and informative too. Tasks like this become easy to perform thanks to such articles.

      6. Dear,
        Commendable blog, and I like all the steps that you have shared in this blog. A few steps are new to me, but I will try them to speed up the performance of my server.
