
SQL Server Support Blog

SQL Backup using Azure File Share

SaniyaSamreen
Microsoft
Aug 10, 2022

SQL Server limits the maximum backup size to 1 TB when backing up to a page blob. With block blobs, each blob is limited to approximately 200 GB (50,000 blocks × 4 MB MAXTRANSFERSIZE), and a backup can be striped across at most 64 blobs, for an overall limit of roughly 12.8 TB (64 × 200 GB).

To get past these limits, we can opt for an Azure file share: if your SQL backup is larger than 12 TiB, you can choose to back up using an Azure file share instead.

 

Standard file shares can span up to 100 TiB; however, this feature is not enabled by default. If you need a file share larger than 5 TiB, you will need to enable the large file share feature for your storage account, as sketched below.
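A minimal Az PowerShell sketch of enabling the feature on an existing standard storage account (the resource group and account names here are placeholders):

# Enable large file shares (up to 100 TiB) on an existing standard storage account.
# Once enabled, this setting cannot be turned off again.
Set-AzStorageAccount `
    -ResourceGroupName "myresourcegroup" `
    -Name "mystandardstorage" `
    -EnableLargeFileShare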

 

Premium file shares can span up to 100 TiB without any special setting; however, premium file shares are provisioned rather than pay-as-you-go like standard file shares. This means that provisioning a file share much larger than what you need will increase the total cost of storage.

 

In locally redundant and zone-redundant storage accounts, Azure file shares can span up to 100 TiB; however, in geo-redundant and geo-zone-redundant storage accounts, Azure file shares can span only up to 5 TiB.

Steps

1. Create a storage account with a premium file share
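If you prefer scripting to the portal, here is a minimal Az PowerShell sketch (the resource group and region are placeholder values; the account name matches the one used later in this post):

# Create a premium FileStorage account; premium file shares support
# up to 100 TiB without enabling the large file share feature.
New-AzStorageAccount `
    -ResourceGroupName "myresourcegroup" `
    -Name "testsmbfileshare" `
    -Location "eastus" `
    -SkuName "Premium_LRS" `
    -Kind "FileStorage"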

Enable Large File Share

2. Register for the SMB Multichannel preview with the following commands.

Connect-AzAccount

# Setting your active subscription to the one you want to register for the preview.

# Replace the <subscription-id> placeholder with your subscription id.

$context = Get-AzSubscription -SubscriptionId <subscription-id>

Set-AzContext $context

 

Register-AzProviderFeature -FeatureName AllowSMBMultichannel -ProviderNamespace Microsoft.Storage

Register-AzResourceProvider -ProviderNamespace Microsoft.Storage

 

You can also verify whether the feature registration is complete; the RegistrationState should show Registered:

Get-AzProviderFeature -FeatureName AllowSMBMultichannel -ProviderNamespace Microsoft.Storage

3. Enable SMB Multichannel

    Once you have created a FileStorage account, you can follow the instructions to update the SMB Multichannel settings for your storage account, as sketched below.
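A sketch of doing this with Az PowerShell (assumes a recent Az.Storage module; the resource group name is a placeholder):

# Enable SMB Multichannel on the file service of the FileStorage account.
Update-AzStorageFileServiceProperty `
    -ResourceGroupName "myresourcegroup" `
    -StorageAccountName "testsmbfileshare" `
    -SmbMultichannelEnabled $true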

Note: If the SMB Multichannel option is not visible under File share settings, or you get a "failed to update setting" error while updating the configuration, make sure that your subscription is registered and that your account is in one of the supported regions, with a supported account type and replication.

 

4. Create a file share

You can set the maximum capacity (quota) of the share up to 100 TiB.
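A sketch of creating the share with Az PowerShell (the share name matches the one mounted later in this post; the resource group name is a placeholder):

# Create a file share with a 100 TiB quota (102400 GiB).
New-AzRmStorageShare `
    -ResourceGroupName "myresourcegroup" `
    -StorageAccountName "testsmbfileshare" `
    -Name "sqlbackup" `
    -QuotaGiB 102400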

 

5. Connect to the file share from Windows using either Active Directory or the storage account key

Connecting to a share using the storage account key is appropriate only for admin access; mounting the Azure file share with the Active Directory identity of the user is preferred.
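If the storage account is enabled for identity-based (Active Directory) authentication and the logged-on user has share-level and NTFS permissions, the share can be mounted without storing a key; a minimal sketch:

# Mount using the current Active Directory identity; no cmdkey or storage key is needed.
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\testsmbfileshare.file.core.windows.net\sqlbackup" -Persist

Otherwise, the script below mounts the share with the storage account key.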

 

$connectTestResult = Test-NetConnection -ComputerName testsmbfileshare.file.core.windows.net -Port 445

if ($connectTestResult.TcpTestSucceeded) {

    # Save the password so the drive will persist on reboot

    cmd.exe /C "cmdkey /add:`"testsmbfileshare.file.core.windows.net`" /user:`"localhost\testsmbfileshare`" /pass:`"SxbRsNuwc1*******/8lk1TyUkqC+2+************==`""

    # Mount the drive

    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\testsmbfileshare.file.core.windows.net\sqlbackup" -Persist

} else {

    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."

}

 

Copy this script and run it in PowerShell to map the share as a network drive locally.

This script checks whether the storage account is reachable over TCP port 445, which is the port SMB uses. If port 445 is available, your Azure file share will be persistently mounted.

 

Note: The script will only work on Windows Server 2012 and above.

 

Once the above script has executed, you will see the Z drive as a network drive in My Computer / This PC.

 

6. On the SQL Server, you first need to enable xp_cmdshell so that we can map the file share for backups.

        Enable the advanced SQL configuration options:

        EXECUTE sp_configure 'show advanced options', 1; 

        GO 

        -- To update the currently configured value for advanced options. 

        RECONFIGURE; 

        GO 

        -- To enable the feature. 

        EXECUTE sp_configure 'xp_cmdshell', 1; 

        GO 

        -- To update the currently configured value for this feature. 

        RECONFIGURE; 

        GO

 

The Z drive mapped earlier exists only in your own logon session; SQL Server runs under its service account and cannot see it. Therefore, we must also map the drive within SQL Server so it is available for backups, using the command below.

 

xp_cmdshell 'net use Z: \\testsmbfileshare.file.core.windows.net\sqlbackup /u:localhost\testsmbfileshare SxbRsNuwc1*******/8lk1TyUkqC+2+************=='

 

Use the storage account name, username, and access key from step 5.

7. Back up the database to the file share using the command below (STATS = 5 reports progress every 5 percent)

    BACKUP DATABASE [AdventureWorks2019] TO DISK = 'Z:\AdventureWorks2019.bak' WITH STATS = 5
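As an optional sketch, the same backup can be run from PowerShell (this assumes the SqlServer module is installed and a default local instance):

# Run the backup through Invoke-Sqlcmd; -Verbose surfaces the STATS progress messages.
Import-Module SqlServer
$query = @"
BACKUP DATABASE [AdventureWorks2019]
TO DISK = 'Z:\AdventureWorks2019.bak'
WITH STATS = 5;
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Query $query -Verbose

# Confirm the backup file landed on the mounted share.
Get-Item "Z:\AdventureWorks2019.bak" | Select-Object Name, Length, LastWriteTime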

References

SMB file shares in Azure Files | Microsoft Docs

Create an Azure file share - Azure Files | Microsoft Docs

https://technet.microsoft.com/en-us/library/dn435916(v=sql.120).aspx#limitations

Updated Aug 15, 2022
Version 4.0
  • BPODFWPonder
    Copper Contributor

    KramMS, you could put the mapping into a SP that maps the drive each time the backup runs. You can set up the file share to allow permissions for your AD accounts and assign permissions to the SQL Service and Agent accounts on the file share without needing the key.

     

    Your SP would be similar to the one below; put code in your backup script that checks the @Success output parameter and either runs the backup or fails and notifies you.

     

    CREATE procedure [ADMIN].[sp_ZDriveMap_Azure] @Success int output
    as
    begin
        -- Temporarily enable xp_cmdshell (assumes 'show advanced options' is already on)
        exec sp_configure 'xp_cmdshell', 1
        Reconfigure
        WAITFOR DELAY '00:00:02';

        create table #tmp_MapDrive (field1 varchar(255))

        -- Probe the drive: 'dir' returns a known error message when X: is not mapped
        insert into #tmp_MapDrive
        exec xp_cmdshell 'dir X:\'

        if exists (select 1 from #tmp_MapDrive where field1 = 'The system cannot find the path specified.')
        begin
            print 'Adding Drive'
            exec xp_cmdshell 'net use X: \\mybackup.file.core.usgovcloudapi.net\myfileshare /PERSISTENT:YES'
        end
        delete from #tmp_MapDrive

        -- Probe again to confirm the mapping succeeded
        insert into #tmp_MapDrive
        exec xp_cmdshell 'dir X:\'

        if exists (select 1 from #tmp_MapDrive where field1 = 'The system cannot find the path specified.')
        begin
            print 'does not exist'
            select @Success = 0
        end
        else
        begin
            print 'exist'
            select @Success = 1
        end

        exec sp_configure 'xp_cmdshell', 0
        Reconfigure
    end

     

  • KramMS
    Copper Contributor

    The persist toggle in xp_cmdshell 'net use' does not work, and the mapping needs to be reset each time SQL Server is restarted. Is there any way to solve this without just putting the key into an agent job?

  • SQLJazz
    Copper Contributor

    SaniyaSamreen 
    There is a limit of 12.8 TB for backups to a storage account container.

    So does this mean that the limit is set by the storage account and not by SQL Server?
    I always thought it was a SQL Server limitation.

  • BPODFWPonder
    Copper Contributor

    I would like to add that if you are using IaaS VM servers (SQL Server IaaS servers in my case) with Azure managed disks, you can hit the throughput limit on the disks if you are not careful and affect VM performance, possibly when using SMB Multichannel. We use the default Azure file share settings, which limit throughput to 300 MB/s; that avoids the disk throughput issues. Backups take longer, but VM performance is not affected by throughput overwhelming the managed disks.

    We also mirror the SQL backups to blob storage. Blob storage definitely has the ability to exceed the disk throughput on the VMs, but paired with the file share it is limited to 300 MB/s for both; the file share acts as a cap on disk throughput. The only gotcha is that with blob backups you will probably hit the 50K block limit on large DBs whose backups go over 12 TB, and if you mirror the SQL backups you are limited to 64 files for both the file share and the blob backup files, so you will need multiple files to avoid the 50K block limit.

    On our largest DB, I back up to the file share only (64 files) and then copy up to blob storage from the file share with AzCopy. Any one server is limited to 300 MB/s, but I use more than one server to copy, so it does not take as long as you would think to move 13 TB+ of data to blob storage. Once there, it can be managed by a storage policy to keep only so many weeks of backups, and restores from blob are very fast compared to the file share (300 MB/s without SMB Multichannel). It also means you can restore to another region without having to copy the data to that region; the blob storage is accessible and its throughput is very high.