Azure SQL VM
SQL Backup using Azure File Share
SQL Server limits the maximum backup size supported using a page blob to 1 TB. The maximum backup size supported using block blobs is limited to approximately 200 GB per blob (50,000 blocks * 4 MB MAXTRANSFERSIZE). To get past these limits, for example when a SQL backup is larger than 12 TiB, you can choose to back up to an Azure file share instead.

Standard file shares can span up to 100 TiB, but this capability is not enabled by default. If you need a file share larger than 5 TiB, you must enable the large file share feature on the storage account. Premium file shares can span up to 100 TiB without any special setting; however, premium file shares are provisioned rather than pay-as-you-go like standard file shares, so provisioning a file share much larger than what you need increases the total cost of storage. In locally redundant and zone-redundant storage accounts, Azure file shares can span up to 100 TiB; in geo-redundant and geo-zone-redundant storage accounts, Azure file shares can span only up to 5 TiB.

Prerequisites

Storage Account
Azure PowerShell Storage Module https://www.powershellgallery.com/packages/Az.Storage/3.7.0

Steps

1. Create a storage account with a premium file share, or enable the large file share feature on a standard account (a PowerShell sketch for this is shown after step 4).

2. Register for the SMB Multichannel preview with the following commands.

Connect-AzAccount

# Setting your active subscription to the one you want to register for the preview.
# Replace the <subscription-id> placeholder with your subscription id.
$context = Get-AzSubscription -SubscriptionId <your-subscription-id>
Set-AzContext $context

Register-AzProviderFeature -FeatureName AllowSMBMultichannel -ProviderNamespace Microsoft.Storage
Register-AzResourceProvider -ProviderNamespace Microsoft.Storage

You can also verify that the feature registration is complete:

Get-AzProviderFeature -FeatureName AllowSMBMultichannel -ProviderNamespace Microsoft.Storage

3. Enable SMB Multichannel. Once you have created a FileStorage account, follow the instructions to update the SMB Multichannel settings for your storage account.

Note: If the SMB Multichannel option is not visible under File share settings, or you get a "failed to update setting" error while updating the configuration, make sure that your subscription is registered and that your account is in one of the supported regions with a supported account type and replication.

4. Create a file share. You can set the maximum capacity up to 100 TiB.
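If you prefer scripting over the portal for steps 1 and 4, here is a minimal, hedged sketch using the same Az.Storage module listed in the prerequisites. The resource group, account name, share name and the 20 TiB quota are placeholder assumptions; adjust them to your environment.

# Sign in first (Connect-AzAccount). All names below are placeholders.

# Standard (GPv2) accounts only: turn on the large file share feature (this cannot be undone).
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount" -EnableLargeFileShare

# Create the share with a quota sized for the backup, e.g. 20 TiB = 20480 GiB.
# On a premium FileStorage account this quota is the provisioned (billed) size.
New-AzRmStorageShare -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -Name "sqlbackup" -QuotaGiB 20480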
5. Connect to the file share from Windows, using either Active Directory or the storage account key. Connecting to a share with the storage account key is only appropriate for admin access; mounting the Azure file share with the Active Directory identity of the user is preferred.

$connectTestResult = Test-NetConnection -ComputerName testsmbfileshare.file.core.windows.net -Port 445
if ($connectTestResult.TcpTestSucceeded) {
    # Save the password so the drive will persist on reboot
    cmd.exe /C "cmdkey /add:`"testsmbfileshare.file.core.windows.net`" /user:`"localhost\testsmbfileshare`" /pass:`"SxbRsNuwc1*******/8lk1TyUkqC+2+************==`""
    # Mount the drive
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\testsmbfileshare.file.core.windows.net\sqlbackup" -Persist
} else {
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}

Copy this script and run it in PowerShell to map the share as a network drive locally. The script first checks whether the storage account is reachable over TCP port 445, the port SMB uses. If port 445 is available, the Azure file share is persistently mounted.

Note: The script only works on Windows Server 2012 and above.

Once the script has executed, the Z drive appears as a network drive in My Computer / This PC.

6. On the SQL Server, first enable xp_cmdshell so you can configure backups to this file share.

Enable the advanced SQL configuration:

EXECUTE sp_configure 'show advanced options', 1;
GO
-- To update the currently configured value for advanced options.
RECONFIGURE;
GO
-- To enable the feature.
EXECUTE sp_configure 'xp_cmdshell', 1;
GO
-- To update the currently configured value for this feature.
RECONFIGURE;
GO

The Z drive must also be mounted inside SQL Server so that it is available for backups, so map it with the script below (use the storage account name, username and access key from step 5):

xp_cmdshell 'net use Z: \\testsmbfileshare.file.core.windows.net\sqlbackup /u:localhost\testsmbfileshare SxbRsNuwc1*******/8lk1TyUkqC+2+************=='

7. Back up the database to the file share using the command below:

BACKUP DATABASE [AdventureWorks2019] TO DISK = 'Z:\AdventureWorks2019.bak' WITH STATS = 5
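As a follow-on to step 7, and not part of the original walkthrough, here is a hedged sketch of striping the same backup across several files on the mounted share. Striping keeps each file smaller and can improve throughput; the file names are placeholders.

-- Stripe the backup across multiple files on the mounted share
BACKUP DATABASE [AdventureWorks2019]
TO  DISK = 'Z:\AdventureWorks2019_1.bak',
    DISK = 'Z:\AdventureWorks2019_2.bak',
    DISK = 'Z:\AdventureWorks2019_3.bak',
    DISK = 'Z:\AdventureWorks2019_4.bak'
WITH COMPRESSION, STATS = 5;

-- Optional sanity check that the backup set is readable; all stripes must be listed together.
RESTORE VERIFYONLY
FROM DISK = 'Z:\AdventureWorks2019_1.bak',
     DISK = 'Z:\AdventureWorks2019_2.bak',
     DISK = 'Z:\AdventureWorks2019_3.bak',
     DISK = 'Z:\AdventureWorks2019_4.bak';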
Reference (optional)

SMB file shares in Azure Files | Microsoft Docs
Create an Azure file share - Azure Files | Microsoft Docs
https://technet.microsoft.com/en-us/library/dn435916(v=sql.120).aspx#limitations

A quick way to generate Azure SQL migration assessments

Overview

Azure Migrate does not currently allow users to import SQL Server inventory and discovery information from sources other than the Azure Migrate appliance. As a result, users who cannot deploy the Azure Migrate appliance to discover their SQL Server estate for technical, business or compliance reasons, or who have already discovered their SQL Server estate using tools like RVTools and want a quick, approximate way to check readiness and the total cost savings of moving to Azure, cannot leverage the rich SQL assessment and business case capabilities that Azure Migrate offers.

To help customers quickly assess migration readiness and the cost savings of migrating to Azure, we are introducing the "Import SQL Server Discovery data as CSV" feature in Azure Migrate, which allows users to import their SQL Server inventory as a CSV file and use the discovery data to generate assessments and business cases. This feature is currently in private preview. In this blog post, we will do a deep dive on how to use this feature to generate quick assessments for your off-Azure SQL Server estate.

Here are the high-level steps for importing your off-Azure SQL Server inventory as a CSV:

1) Step-1: Identify the SQL Server instances and extract the discovery data into a CSV file.
2) Step-2: Validate and upload the inventory data to Azure Migrate.
3) Step-3: Generate and view assessments.

Prerequisites:

Import SQL Discovery data as a CSV is built on top of the new Azure Migrate experience, which is also in private preview. To use this feature, you must onboard to the private preview of the new Azure Migrate experience; here is the form to sign up: Azure Migrate Private Preview Sign-Up Survey. We will share the onboarding instructions via the contact details you provide.

Make sure you have the credentials to connect to the SQL Server instances of interest.

Make sure you have the requisite roles and privileges on each SQL Server instance to access and extract the required metadata from Dynamic Management Views. To generate a login with the required privileges, please run the scripts listed here: Configure custom login for SQL Server Discovery.

An Azure subscription. Please note you don't need to deploy an Azure Migrate appliance to use the SQL import CSV feature.

Step-1: Identify and extract the discovery data of the SQL Server instances you intend to migrate to Azure SQL into a CSV file

Identify all off-Azure SQL Server instances you intend to migrate to Azure SQL, whether they run on-premises or on other cloud platforms like Amazon Web Services (AWS) or Google Cloud Platform (GCP). To extract the discovery data, run the SQL script shown further below against each SQL Server instance you would like to import into Azure Migrate via CSV, and store the output in a CSV file.
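One possible way to run the script and capture the result set as CSV (an assumption for illustration, not part of the official guidance) is the SqlServer PowerShell module; the server name and file names below are placeholders.

# Run the discovery script and save the result set as CSV; server, script and output names are placeholders.
Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -InputFile ".\sql_discovery.sql" |
    Select-Object -Property * -ExcludeProperty RowError, RowState, Table, ItemArray, HasErrors |  # strip DataRow bookkeeping columns
    Export-Csv -Path ".\MyServer_MyInstance.csv" -NoTypeInformation

Review the generated file against the template downloadable in Step-2 before uploading. The full discovery script follows.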
/* PLEASE ENTER THE FULLY QUALIFIED DOMAIN NAME OF THE HOST IN THE LINE BELOW BEFORE RUNNING THE QUERY */
Declare @FullyQualifiedDomainName varchar(200) = '';

With ServerCores AS(
    SELECT cpu_count AS [Logical CPU Count], hyperthread_ratio AS [Hyperthread Ratio],
           cpu_count/hyperthread_ratio AS [Physical CPU Count], sqlserver_start_time
    FROM sys.dm_os_sys_info
),
Config AS(
    SELECT value_in_use AS [Max Server Memory(MB) In Use]
    FROM sys.configurations WHERE name = 'max server memory (MB)'),
UserCount AS(
    SELECT count(database_id) AS [Number User Databases] FROM sys.databases WHERE database_id > 4
),
DbSize AS(
    SELECT SUM(CAST(Size AS bigint))*8/1024 AS [Max server memory (in MB)]
    FROM sys.master_files WHERE database_id > 4
),
TempDbSize AS (
    SELECT SUM(CAST(Size AS bigint))*8/1024 AS [TempDb Size (In MB)]
    FROM sys.master_files WHERE db_name(database_id) = 'tempdb'
),
Logins AS (Select count(name) AS [NumOfLogins] from sys.syslogins),
Properties AS (
    select db.name as database_name,
           db.database_id as database_id,
           cast(db.compatibility_level as int) as database_compatibility_level,
           db.collation_name as database_collation,
           db.is_broker_enabled as is_service_broker_enabled,
           '0' as [Number of Nics],
           SERVERPROPERTY ('servername') as instance_name,
           SERVERPROPERTY ('productversion') as instance_version,
           SERVERPROPERTY ('edition') as instance_edition,
           SERVERPROPERTY ('productlevel') as instance_level,
           SERVERPROPERTY ('IsHadrEnabled') as is_hadr_enabled,
           SERVERPROPERTY ('collation') as server_collation,
           CASE WHEN SERVERPROPERTY ('IsClustered') = 1 THEN 'TRUE' ELSE 'FALSE' END as is_failover_clustered,
           COALESCE(SERVERPROPERTY ('IsIntegratedSecurityOnly'), 0) as is_integrated_security_only,
           COALESCE(SERVERPROPERTY ('IsPolyBaseInstalled'), 0) as is_polybase_installed,
           CASE WHEN 'A' = 'a' THEN 0 ELSE 1 END as is_server_case_sensitive,
           /* '' + @host_platform + '' as host_platform,*/
           db.state as database_state,
           /* db.is_auto_close_on as is_auto_close_on,'+ CASE when @check_cdc = 1 THEN 'db.is_cdc_enabled' else '0' END + ' as is_cdc_enabled,*/
           CASE when db.user_access = 0 THEN 1 ELSE 0 END as is_multi_user,
           CASE when db.recovery_model = 0 THEN 0 ELSE 1 END as is_full_recovery_model,
           db.is_read_only as is_read_only,
           case when source_database_id IS NULL THEN 0 ELSE 1 end as is_snapshot,
           case when is_published = 1 OR is_subscribed = 1 OR is_merge_published = 1 or is_distributor = 1 THEN 1 ELSE 0 end as is_replication_enabled,
           db.is_encrypted
    from sys.databases db
    where db.name not in ('master', 'tempdb', 'model', 'msdb') and is_distributor <> 1),
EachDbSize AS(
    SELECT database_name = DB_NAME(database_id),
           CAST(SUM(CAST(size AS bigint)) * 8. / 1024 AS DECIMAL(12,2)) AS [Database size (in MB)],
           convert(varchar(12),CAST((sum(case when physical_name like '%ldf' then 0 else size end)*8.0)/1024.0 AS numeric(8,2))) AS [Data file size (in MB)],
           convert(varchar(12),CAST((sum(case when physical_name like '%ldf' then size else 0 end)*8.0)/1024.0 AS numeric(8,2))) AS [Log file size (in MB)]
    FROM sys.master_files
    WHERE database_id > 4
    GROUP BY database_id)
select @FullyQualifiedDomainName as [Fully qualified domain name],
       instance_name as [Instance Name],
       p.database_name as [Database Name],
       database_state as [Database Status],
       instance_edition [SQL Edition],
       instance_version as [SQL Version],
       [Logical CPU Count],
       [Hyperthread Ratio],
       [Physical CPU Count],
       [TempDb size (in MB)],
       is_failover_clustered as [Is FCI Enabled],
       '' as [Failover cluster name],
       '' as [Failover cluster instance role],
       '' as [Failover cluster shared disk count],
       '' as [Is failover cluster multi subnet],
       [Max server memory (in MB)],
       [Number of Nics],
       instance_level as [Service pack],
       '' as [ESU status],
       'FALSE' as [IS AG enabled],
       instance_level as [Build version],
       'Online' as [Service status],
       'FALSE' as [Is database highly available],
       'FALSE' as [Is part of distributed availability group],
       [Database size (in MB)],
       [Data file size (in MB)],
       [Log file size (in MB)],
       database_compatibility_level as [Compatibility level],
       '' as [Availability group ID],
       '' as [Availability group name],
       '' as [Availability group type],
       '' as [Availability replica ID],
       '' as [Availability replica name],
       '' as [Commit mode],
       '' as [Replica type],
       '' as [Replica state],
       'FALSE' as [Is AG multi subnet],
       '' as [AG replica sync status],
       '' as [AG replica seed mode],
       '' as [AG replica read mode]
from ServerCores, Config, UserCount, DbSize, TempDbSize, Logins, Properties p
LEFT JOIN EachDbSize s on p.database_name = s.database_name
order by p.database_name

Collate the output of all SQL Server instances of interest into a single file.

Step-2: Validate and upload the CSV file

To validate and upload the generated CSV file with SQL Server discovery data:

1) Go to the Azure Migrate experience link you received from Microsoft after you signed up for the private preview of this feature. Click on "Create Project" to create a new project (if not done already).
2) Fill in the required fields in the create project wizard.
3) Click on the created project and select the "Using custom Import" option under the "Start Discovery" button.
4) On the Discover page, select "File Type" as "SQL Server inventory (CSV)".
5) (Optional) Click the "Download" button under Step 1: Download template to view and understand the template expected by Azure Migrate. You can skip this step, as the CSV file generated by the SQL script shared above conforms to the template.
6) Browse and select the inventory file in Step 2 and click Validate to validate the uploaded SQL inventory data.
7) If the uploaded file doesn't contain any warnings or errors, validation completes successfully with 0 errors and warnings. If there are any errors or warnings, Azure Migrate shows a count of the warning and error messages. To view and resolve them, click Download ErrorDetails.csv, which lists the error/warning messages for each database along with the remedial action. Please note that warnings are non-blocking: you can still go ahead and complete the import operation. Errors are blocking: you will not be allowed to import until the error messages are resolved.
8) Once the validation phase is completed, click the Import button to complete the import operation.

Step-3: Create Assessments using imported SQL Discovery Data

SQL Server instances and databases uploaded via the CSV import route are treated on par with SQL Server instances discovered by the Azure Migrate appliance. You can view the SQL Server instance properties such as version details, number of databases, support status and all other properties uploaded via the CSV file. The quality of the property details and of the generated assessments depends directly on the quality of the data uploaded via CSV.

To create SQL assessments for the uploaded SQL Server instances, go to the Azure Migrate project's resource page and click the Create Assessments button. Give the assessment a name and click Add workloads to add the imported and appliance-discovered workloads. On the following page, select the SQL Server instances that have the value "Import" in the Discovery Source column. You can also create assessments on a combination of imported and appliance-discovered workloads. Click Add to add the workloads, then click Review + Create Assessments to create the assessment.

For imported SQL Server instances, the quality of the assessment is only as good as the data uploaded via CSV, so please ensure that the CSV accurately depicts the metadata of the SQL Server instance. As with regular SQL assessments, you have all the assessment configuration options, such as the sizing criteria (performance-based vs. as on-premises), migration strategy (optimizing cost vs. migration to PaaS), comfort factor, etc. By default, Azure Migrate tries to create a performance-based assessment; if performance metrics are not available, it defaults to an as-on-premises assessment. The assessment created on the imported SQL Servers will tell you which SQL Server instances are ready for migration to Azure SQL and the recommended Azure SQL target type.

Linking and Rediscovery

Linking

If the host of an imported SQL instance has been discovered via an import of VM servers or via the appliance, the SQL Server instance is automatically associated with that host, and the instance rolls up under that VM server in the discovery view. A SQL Server instance is uniquely identified based on the FQDN and the SQL Server instance name; the same combination of attributes is used for conflict resolution.

Rediscovery and precedence rules

1) If the SQL Server instance was originally imported via CSV and is re-uploaded via CSV, the upload operation rewrites all the properties.
2) If the SQL Server instance is first imported via CSV and then discovered via the Azure Migrate appliance, and the user has provided credentials for deep discovery via the appliance, the appliance-discovered data overrides the data imported via CSV.
3) If the SQL Server instance is first imported via CSV and then discovered via the Azure Migrate appliance, and the user has not provided SQL Server credentials for deep discovery, the appliance-discovered data only adds the additional details the appliance discovered.
4) If the SQL Server instance is first discovered via the Azure Migrate appliance with deep discovery (correct SQL Server credentials provided), and the SQL Server is then imported via CSV, the data imported via CSV is discarded.
5) If the SQL Server instance is first discovered via the Azure Migrate appliance and is only inventoried (correct SQL Server credentials are not provided), and the SQL Server is later imported via CSV, the upload operation completely overwrites the appliance-discovered details.

How to onboard to Private Preview

To onboard to the preview of SQL inventory import as CSV, please fill in the sign-up form for the private preview.

What's Next

1) In the next release we will ship the capability to create a business case for SQL Servers imported via CSV.
2) We will also add support for Availability Groups and Failover Cluster Instances for SQL Server instances imported via CSV.

TSQL Scripts for Troubleshooting Common Issues with SQL Server
This blog provides T-SQL samples for troubleshooting the common SQL Server issues listed below. You can modify the parameters (database name, table name, keyword, duration, etc.) to match your environment and requirements. Test these scripts before implementing them in a PROD environment, and be aware of all potential risks of running them in PROD.

1. Performance Troubleshooting (blocking, high CPU, memory, idle, query execution)
2. Deadlock and Database Object ID mapping
3. HA (Clustered SQL Server)
4. Backup & Transaction Log Related Issue
5. Query Store (QDS)
6. Database Encryption (TDE)
7. Tool (Profiler trace and X-event)

Performance Troubleshooting
=========================
(blocking, high CPU, memory, idle, query execution)

1. List all active sessions and their queries that contain your target table name or a specific T-SQL fragment (enter your target keyword)

SELECT sqltext.TEXT, req.session_id, req.status, req.command, req.cpu_time, req.database_id, req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
where sqltext.text like '%Your Target Key Word%'

2. List all sleeping user sessions that have been idle for over 15 minutes, with their queries (you can customize the session status and idle time)

SELECT CURRENT_TIMESTAMP as currenttime, datediff(minute,last_batch,GETDATE()) as 'idletime_in_minute',
       sp.status, sp.spid, sp.login_time, sp.program_name, sp.hostprocess, sp.loginame, text
FROM sys.sysprocesses sp
CROSS APPLY sys.dm_exec_sql_text(sp.sql_handle) AS QT
where sp.status = 'sleeping'
  and datediff(minute,last_batch,GETDATE()) > 15
  and spid > 50

3. List the top 10 high-CPU queries currently running on this SQL instance

SELECT TOP (10) s.session_id, r.status, r.blocking_session_id 'Blk by', r.wait_type, wait_resource,
       r.wait_time / (1000 * 60) 'Wait M', r.cpu_time, r.logical_reads, r.reads, r.writes,
       r.total_elapsed_time / (1000 * 60) 'Elaps M',
       Substring(st.TEXT, (r.statement_start_offset / 2) + 1,
                 ((CASE r.statement_end_offset WHEN -1 THEN Datalength(st.TEXT) ELSE r.statement_end_offset END - r.statement_start_offset) / 2) + 1) AS statement_text,
       Coalesce(Quotename(Db_name(st.dbid)) + N'.' + Quotename(Object_schema_name(st.objectid, st.dbid)) + N'.' + Quotename(Object_name(st.objectid, st.dbid)), '') AS command_text,
       r.command, s.login_name, s.host_name, s.program_name, s.last_request_end_time, s.login_time, r.open_transaction_count
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
CROSS APPLY sys.Dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id != @@SPID
ORDER BY r.cpu_time desc

4. List the top 10 high-memory-usage queries currently running on this SQL instance

SELECT TOP (10) mg.session_id, mg.granted_memory_kb, mg.requested_memory_kb, mg.ideal_memory_kb,
       mg.request_time, mg.grant_time, mg.query_cost, mg.dop, st.[TEXT], qp.query_plan
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(mg.plan_handle) AS qp
ORDER BY mg.required_memory_kb DESC

5. List detailed memory usage for each memory clerk

DBCC MEMORYSTATUS
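As a hedged alternative to scanning the full DBCC MEMORYSTATUS output, the documented sys.dm_os_memory_clerks DMV can be aggregated to show the largest memory clerks at a glance (the pages_kb column name assumes SQL Server 2012 or later):

-- Top memory consumers by clerk, in MB
SELECT TOP (10)
       [type],
       name,
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type], name
ORDER BY memory_mb DESC;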
6. List memory usage for the plan cache and its maximum size based on the current settings. By removing the WHERE condition, you will get the full list of all memory cache clerks.

select name, type, buckets_count
from sys.dm_os_memory_cache_hash_tables
where name IN ('SQL Plans', 'Object Plans', 'Bound Trees', 'Extended Stored Procedures')

select name, type, pages_kb, entries_count
from sys.dm_os_memory_cache_counters
where name IN ('SQL Plans', 'Object Plans', 'Bound Trees', 'Extended Stored Procedures')

7. List the progressive (on-going) execution plan for a specific session (starting with SQL Server 2016)

For SQL Server 2016 and 2017, first run the T-SQL below in the query session whose execution plan you wish to extract later:

set statistics profile on

Run the query below to extract the on-going execution plan for your target session (put the SPID between the brackets):

SELECT * FROM sys.dm_exec_query_statistics_xml(59);

8. List all head blocker queries currently detected on this SQL instance

declare @blocker varchar(100), @sql varchar(100)
print convert(varchar(20), getdate(), 120)
select distinct blocked into #blocker from sysprocesses where blocked <> 0
DECLARE blocker CURSOR FOR
    select spid from sysprocesses where spid in (select * from #blocker) and blocked = 0
OPEN blocker
FETCH NEXT FROM blocker INTO @blocker
WHILE @@FETCH_STATUS = 0
BEGIN
    set @sql = 'DBCC inputbuffer(' + @blocker + ')'
    execute (@sql)
    set @sql = 'select * from sysprocesses where spid=' + @blocker
    execute (@sql)
    FETCH NEXT FROM blocker INTO @blocker
END
CLOSE blocker
DEALLOCATE blocker
drop table #blocker

9. Kill all sleeping sessions that have been idle for over 1 hour (you can customize the idle duration)

DECLARE @user_spid INT
DECLARE CurSPID CURSOR FAST_FORWARD
FOR
SELECT SPID
FROM master.dbo.sysprocesses (NOLOCK)
WHERE spid > 50
  AND status = 'sleeping'                          -- only sleeping threads
  AND DATEDIFF(HOUR, last_batch, GETDATE()) >= 1   -- thread sleeping for 1 hour or more
  AND spid <> @@spid                               -- ignore current spid
OPEN CurSPID
FETCH NEXT FROM CurSPID INTO @user_spid
WHILE (@@FETCH_STATUS = 0)
BEGIN
    PRINT 'Killing ' + CONVERT(VARCHAR, @user_spid)
    EXEC('KILL ' + CONVERT(VARCHAR, @user_spid))
    FETCH NEXT FROM CurSPID INTO @user_spid
END
CLOSE CurSPID
DEALLOCATE CurSPID
GO

10. Kill all blocking headers. This script continuously scans every 5 seconds for head blocker sessions and kills the ones that are sleeping.

Use master
go
while 1=1
Begin
    declare @blocker varchar(100), @sql varchar(100)
    print convert(varchar(20), getdate(), 120)
    select distinct blocked into #blocker from sysprocesses where blocked <> 0
    DECLARE blocker CURSOR FOR
        select spid from sysprocesses where spid in (select * from #blocker) and status = 'sleeping'
    OPEN blocker
    FETCH NEXT FROM blocker INTO @blocker
    WHILE @@FETCH_STATUS = 0
    BEGIN
        set @sql = 'DBCC inputbuffer(' + @blocker + ')'
        execute (@sql)
        set @sql = 'kill ' + @blocker
        execute (@sql)
        FETCH NEXT FROM blocker INTO @blocker
    END
    CLOSE blocker
    DEALLOCATE blocker
    drop table #blocker
    waitfor delay '0:0:05'
End

Deadlock and Database Object ID mapping
===================================

Below are three examples of how to map the object referenced in a wait resource to its source database, index or schema.
1. KEY wait resource

waitresource=KEY: 6:12345678990 (987654321a9b)
database_id = 6
hobt_id = 12345678990
hash value = (987654321a9b)

Use the T-SQL below to check the database name based on the database ID:

select db_name(6)

Use the T-SQL below to check the schema, object and index details related to this key:

USE DatabaseName
GO
SELECT sc.name as schema_name, so.name as object_name, si.name as index_name
FROM sys.partitions AS p
JOIN sys.objects as so on p.object_id = so.object_id
JOIN sys.indexes as si on p.index_id = si.index_id and p.object_id = si.object_id
JOIN sys.schemas AS sc on so.schema_id = sc.schema_id
WHERE hobt_id = 12345678990

2. OBJECT wait resource

waitresource=OBJECT: 6:1234567890:4
database_id = 6
object_id = 1234567890

Use the T-SQL below to check the object name based on the wait resource details:

select OBJECT_NAME(1234567890, 6)

3. PAGE wait resource

waitresource="PAGE: 6:3:70133" = Database_Id : FileId : PageNumber
database_id = 6
data_file_id = 3
page_number = 70133
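To map the page back to its owning table and index, on SQL Server 2019 and later the documented sys.dm_db_page_info function can be used; on older versions the undocumented DBCC PAGE command exposes the same metadata. A minimal sketch, reusing the example IDs above:

-- SQL Server 2019 and later: resolve the page directly to schema/object/index
SELECT OBJECT_SCHEMA_NAME(object_id, 6) AS schema_name,
       OBJECT_NAME(object_id, 6)        AS object_name,
       index_id,
       page_type_desc
FROM sys.dm_db_page_info(6, 3, 70133, 'DETAILED');

-- Older versions: the undocumented DBCC PAGE shows "Metadata: ObjectId" in its output
DBCC PAGE (6, 3, 70133, 0) WITH TABLERESULTS;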
HA (Clustered SQL Server)
===================

List the latest error encountered by the local AG replica for connection timeouts

select r.replica_server_name, r.endpoint_url, rs.connected_state_desc,
       rs.last_connect_error_description, rs.last_connect_error_number, rs.last_connect_error_timestamp
from sys.dm_hadr_availability_replica_states rs
join sys.availability_replicas r on rs.replica_id = r.replica_id
where rs.is_local = 1

List the current redo rate, redo queue size, log send rate and log send queue size for the current AG replica

SELECT CURRENT_TIMESTAMP as currenttime, drs.last_commit_time, ar.replica_server_name, adc.database_name,
       ag.name AS ag_name, drs.is_local, drs.is_primary_replica,
       drs.log_send_queue_size, drs.log_send_rate, drs.redo_queue_size, drs.redo_rate
FROM sys.dm_hadr_database_replica_states AS drs
INNER JOIN sys.availability_databases_cluster AS adc ON drs.group_id = adc.group_id AND drs.group_database_id = adc.group_database_id
INNER JOIN sys.availability_groups AS ag ON ag.group_id = drs.group_id
INNER JOIN sys.availability_replicas AS ar ON drs.group_id = ar.group_id AND drs.replica_id = ar.replica_id

Restart the endpoint

Check the SQL AG endpoint name:

SELECT * FROM sys.endpoints

Restart the AG endpoint on this local replica:

ALTER ENDPOINT <Your AG Endpoint Name> STATE=STOPPED
ALTER ENDPOINT <Your AG Endpoint Name> STATE=STARTED

Backup & Transaction Log Related Issue
================================

List all backup history and backup file details for the past 7 days (you can modify the date range)

SELECT CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
       msdb.dbo.backupset.database_name,
       msdb.dbo.backupset.backup_start_date,
       msdb.dbo.backupset.backup_finish_date,
       msdb.dbo.backupset.expiration_date,
       CASE msdb.dbo.backupset.type WHEN 'D' THEN 'Database' WHEN 'L' THEN 'Log' END AS backup_type,
       msdb.dbo.backupset.backup_size,
       msdb.dbo.backupmediafamily.logical_device_name,
       msdb.dbo.backupmediafamily.physical_device_name,
       msdb.dbo.backupset.name AS backupset_name,
       msdb.dbo.backupset.description
FROM msdb.dbo.backupmediafamily
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 7)
ORDER BY msdb.dbo.backupset.database_name, msdb.dbo.backupset.backup_finish_date

List every database's data and log file paths and logical names

USE master;
SELECT name 'Logical Name', physical_name 'File Location' FROM sys.master_files;

List each database's transaction log size usage and space

DBCC SQLPERF(LOGSPACE)

List VLF size and counts for all databases, as well as the detailed sizes for your target database

select * from sys.dm_db_log_info(5); /* enter your DB ID here to get detailed sizes for the existing VLFs in this database */

SELECT [name], s.database_id,
       COUNT(l.database_id) AS 'VLF Count',
       SUM(vlf_size_mb) AS 'VLF Size (MB)',
       SUM(CAST(vlf_active AS INT)) AS 'Active VLF',
       SUM(vlf_active*vlf_size_mb) AS 'Active VLF Size (MB)',
       COUNT(l.database_id)-SUM(CAST(vlf_active AS INT)) AS 'In-active VLF',
       SUM(vlf_size_mb)-SUM(vlf_active*vlf_size_mb) AS 'In-active VLF Size (MB)'
FROM sys.databases s
CROSS APPLY sys.dm_db_log_info(s.database_id) l
GROUP BY [name], s.database_id
ORDER BY 'VLF Count' DESC
GO

Before shrinking a transaction log file, check the log_reuse_wait_desc type for all databases

select name, database_id, log_reuse_wait, log_reuse_wait_desc from sys.databases

Please refer to this link for how to address each log_reuse_wait_desc type when it is not "NOTHING":
https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-databases-transact-sql?view=sql-server-ver15

Database Corruption Issue

Force a repair of the database with the REPAIR_ALLOW_DATA_LOSS option (data loss is expected and this can sometimes cause more damage; refrain from using this method except as a last resort). For more details on fixing database corruption, please refer to this link: https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql?view=sql-server-ver15

Alter database YourDBName set single_user with rollback immediate;
DBCC CHECKDB('YourDBName', REPAIR_ALLOW_DATA_LOSS);
Alter database YourDBName set multi_user with rollback immediate;

Query Store (QDS)
=================

1. List a database's Query Store status

select * from sys.database_query_store_options

2. List all databases that have Query Store configured

select name as 'DATABASE NAME',
       CASE is_query_store_on when 1 then 'ENABLED' else 'OTHER' END AS 'QUERY STORE STATE'
from sys.databases
where is_query_store_on = 1
order by 1;

Database Encryption (TDE)
======================

Check whether the DMK exists in the master database

USE master
GO
SELECT * FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##'

Check whether the certificate is created in the master database (a certificate is equivalent to an asymmetric key)

USE master
GO
select * from sys.certificates

Check whether the database is encrypted or encryption is in progress (encryption_state = 3 encrypted; = 2 in progress)

USE master
GO
SELECT db_name(database_id) [TDE Encrypted DB Name], c.name as CertName, encryptor_thumbprint, dek.*
FROM sys.dm_database_encryption_keys dek
INNER JOIN sys.certificates c on dek.encryptor_thumbprint = c.thumbprint

Check whether the DMK in the master database is now encrypted by the SMK (is_master_key_encrypted_by_server = 1)

select is_master_key_encrypted_by_server, * from sys.databases where database_id = db_id(N'master')

Tool (Profiler trace and X-event)
=======================

Import a SQL Profiler trace into SQL Server database tables

USE DatabaseName
GO
select * into Sample
from fn_trace_gettable('c:\trace\YourFolderToStoreTheTrace\YourTraceFile.trc', default)
where eventclass in (10, 12)

For the event class IDs, check the list in this link:
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-trace-setevent-transact-sql?view=sql-server-ver15
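If you want to inspect Extended Events output without external tooling, here is a hedged sketch using the built-in sys.fn_xe_file_target_read_file function; the path, target table and event field names are placeholder assumptions that depend on which events you captured.

-- Load the raw events into a table; each row carries the event payload as XML
SELECT CAST(event_data AS XML) AS event_xml, file_name, file_offset
INTO dbo.xe_raw
FROM sys.fn_xe_file_target_read_file('C:\trace\YourXeventFolder\*.xel', NULL, NULL, NULL);

-- Example shred for a wait_info capture; adjust the data element names to your event fields
SELECT event_xml.value('(event/@name)[1]', 'nvarchar(128)') AS event_name,
       event_xml.value('(event/data[@name="wait_type"]/text)[1]', 'nvarchar(128)') AS wait_type,
       event_xml.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration
FROM dbo.xe_raw;

XELoader, described next, performs similar shredding and aggregation for you.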
Use XELoader to process Extended Events files and import the data into SQL Server database tables for analysis. XELoader can be downloaded from this open-source link: https://github.com/suresh-kandoth/XELoader

Use a CMD script to load the Extended Events files into a SQL Server database. Sample script:

C:\XELoader\6.2>XELoader.exe -D"C:\Users\yixin\Desktop\xeloader\xevent" -SYourServerName\InstanceName -dYourDBName

Use a T-SQL query to check the aggregated results. Sample T-SQL:

SELECT sum(c_duration) as SUM_DURATION, sum(c_signal_duration) as SUM_SIGNAL_DURATION, c_wait_type
from [xel].[wait_info]
group by c_wait_type
order by SUM_DURATION desc

DISCLAIMER: THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.