# Setting up SQL Managed Instance Link to an Availability Group
On a recent case, a customer was trying to set up SQL Managed Instance Link to partner with an on-premises Always On Availability Group (AG). Following the public documentation will work, but the link will only be active against the primary node: a failover will cause the database on the managed instance side to stop syncing. This post uses steps from the following documents:

- Prepare your environment for a link - Azure SQL Managed Instance
- Configure link with scripts - Azure SQL Managed Instance

The steps to set this up are:

1. Create a database master key on your AG nodes
2. Enable trace flags on your AG nodes (optional)
3. Test network connectivity between your SQL Managed Instance and the AG
4. Create certificates on your AG nodes
5. Import AG certificate public keys to your SQL Managed Instance
6. Import the certificate public key of your SQL Managed Instance to your AG nodes
7. Import Azure-trusted root certificate authority keys to your AG nodes
8. Alter the mirroring endpoint on your AG nodes
9. Create a distributed availability group on your AG
10. Set up the Managed Instance link

## Create a database master key

You first need to create a master encryption key on all nodes of your AG if it does not already exist. You can check whether it exists by running this query:

```sql
-- Run on SQL Server
USE master;
GO
SELECT * FROM sys.symmetric_keys WHERE name LIKE '%DatabaseMasterKey%';
```

If no results are returned, run the query below and keep note of the password in a confidential and secure place:

```sql
-- Run on SQL Server
-- Create a master key
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>';
```

## Enabling trace flags on your AG nodes (optional)

To optimise the performance of your link, we recommend enabling the trace flags T1800 and T9567 on all of your nodes. Instructions on this are here; applying them requires a restart of the service.

## Testing network connectivity between your SQL Managed Instance and the AG

Instructions on how to run the connectivity tests can be found here; the tests can be done through SSMS or T-SQL. For these tests, your SQL Managed Instance will need to be able to connect to your mirroring endpoint using the IP of your availability group. All nodes of your AG will need to be able to connect to ports 5022 and 11000-11999 on the subnet of your SQL Managed Instance.

## Create certificates on your AG nodes

On each node, run the query below to create a certificate. You can alter the @cert_expiry_date variable to a date that suits you.

```sql
-- Create the SQL Server certificate for the instance link
USE MASTER
-- Customize SQL Server certificate expiration date by adjusting the date below
DECLARE @cert_expiry_date AS varchar(max)='03/30/2025'

-- Build the query to generate the certificate
DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
DECLARE @sqlserver_certificate_subject NVARCHAR(MAX) = N'Certificate for ' + @sqlserver_certificate_name
DECLARE @create_sqlserver_certificate_command NVARCHAR(MAX) = N'CREATE CERTIFICATE [' + @sqlserver_certificate_name + '] ' + char (13) +
'    WITH SUBJECT = ''' + @sqlserver_certificate_subject + ''',' + char (13) +
'    EXPIRY_DATE = ''' + @cert_expiry_date + '''' + char (13)
IF NOT EXISTS (SELECT name from sys.certificates WHERE name = @sqlserver_certificate_name)
BEGIN
    PRINT (@create_sqlserver_certificate_command)
    -- Execute the query to create SQL Server certificate for the instance link
    EXEC sp_executesql @stmt = @create_sqlserver_certificate_command
END
ELSE
    PRINT 'Certificate ' + @sqlserver_certificate_name + ' already exists.'
GO
```
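Before moving on, you can optionally confirm the certificate now exists on each node. This check is not part of the official steps, just a quick sanity test using the same naming convention as the script above:

```sql
-- Optional sanity check: list the endpoint certificate created above and its expiry date.
USE master;
SELECT name, subject, expiry_date
FROM sys.certificates
WHERE name = N'Cert_' + @@servername + N'_endpoint';
```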
## Import AG certificate public keys to your SQL Managed Instance

The public keys of your AG certificates need to be imported into your SQL Managed Instance. Run this query on each node to get the required data:

```sql
-- Run on SQL Server
-- Show the name and the public key of generated SQL Server certificate
USE MASTER
GO
DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
DECLARE @PUBLICKEYENC VARBINARY(MAX) = CERTENCODED(CERT_ID(@sqlserver_certificate_name));
SELECT @sqlserver_certificate_name as 'SQLServerCertName'
SELECT @PUBLICKEYENC AS SQLServerPublicKey;
```

The result contains the certificate name and its public key blob. You can now import these into your SQL Managed Instance using Azure Cloud Shell. Log in using this script, replacing <SubscriptionID> with your subscription ID:

```powershell
# Run in Azure Cloud Shell (select PowerShell console)
# Enter your Azure subscription ID
$SubscriptionID = "<SubscriptionID>"
# Login to Azure and select subscription ID
if ((Get-AzContext ) -eq $null)
{
    echo "Logging to Azure subscription"
    Login-AzAccount
}
Select-AzSubscription -SubscriptionName $SubscriptionID
```

Then run this script, replacing the values of $CertificateName, $PublicKeyEncoded and $ManagedInstanceName:

```powershell
# Run in Azure Cloud Shell (select PowerShell console)
# ===============================================================================
# POWERSHELL SCRIPT TO IMPORT SQL SERVER PUBLIC CERTIFICATE TO SQL MANAGED INSTANCE
# ===== Enter user variables here ====
# Enter the name for the server SQLServerCertName certificate – for example, "Cert_sqlserver1_endpoint"
$CertificateName = "<SQLServerCertName>"
# Insert the certificate public key blob that you got from SQL Server – for example, "0x1234567..."
$PublicKeyEncoded = "<SQLServerPublicKey>"
# Enter your managed instance short name – for example, "sqlmi"
$ManagedInstanceName = "<ManagedInstanceName>"
# ==== Do not customize the below cmdlets====
# Find out the resource group name
$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
# Upload the public key of the authentication certificate from SQL Server to Azure.
New-AzSqlInstanceServerTrustCertificate -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName -Name $CertificateName -PublicKey $PublicKeyEncoded
```

Repeat this step for all AG nodes.

## Import the certificate public key of your SQL Managed Instance to your AG nodes

Using the same Azure Cloud Shell, run this script to get the public key of your SQL Managed Instance certificate, replacing the value of the variable $ManagedInstanceName:

```powershell
# Run in Azure Cloud Shell (select PowerShell console)
# ===============================================================================
# POWERSHELL SCRIPT TO EXPORT MANAGED INSTANCE PUBLIC CERTIFICATE
# ===== Enter user variables here ====
# Enter your managed instance short name – for example, "sqlmi"
$ManagedInstanceName = "<ManagedInstanceName>"
# ==== Do not customize the following cmdlet ====
# Find out the resource group name
$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
# Fetch the public key of the authentication certificate from Managed Instance. Outputs a binary key in the property PublicKey.
Get-AzSqlInstanceEndpointCertificate -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName -EndpointType "DATABASE_MIRRORING" | out-string
```

On all nodes of your AG you can then run this query, replacing <PublicKey> with the value of the PublicKey output from the previous step:

```sql
-- Run on SQL Server
USE MASTER
CREATE CERTIFICATE [<ManagedInstanceFQDN>]
FROM BINARY = <PublicKey>
```

## Import Azure-trusted root certificate authority keys to your AG nodes

Use the steps here to import the Microsoft PKI root-authority certificate and DigiCert PKI root-authority certificate onto all nodes of your AG.

## Alter the mirroring endpoint on your AG nodes

The mirroring endpoint on your AG nodes will also need to be updated to allow your Managed Instance to authenticate with your AG using the newly created certificate. This can be done by navigating to Server Objects > Endpoints > Database Mirroring. Right-click on the endpoint and select Script Endpoint as > CREATE To > New Query Editor Window. You will need to add this command under AUTHENTICATION, changing the cert_name value, and change CREATE ENDPOINT to ALTER ENDPOINT:

```sql
CERTIFICATE [cert_name]
```
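For illustration, after scripting out the endpoint and applying the changes described above, the statement you execute typically looks like the sketch below. The endpoint name, port and certificate name here are examples from a default Always On setup; keep the values your own scripted CREATE statement produced:

```sql
-- Example only: endpoint and certificate names vary per environment.
-- WINDOWS NEGOTIATE CERTIFICATE allows both Windows authentication (between
-- AG nodes) and certificate authentication (for the managed instance).
ALTER ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (
        ROLE = ALL,
        AUTHENTICATION = WINDOWS NEGOTIATE CERTIFICATE [Cert_MyServer_endpoint],
        ENCRYPTION = REQUIRED ALGORITHM AES
    );
```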
Once updated, execute the query.

## Create a distributed availability group on your AG

Use this script to set up the distributed availability group, updating these values:

- `<DAGName>` - your choice of name
- `<AGNameOnSQLServer>` - the name of the AG already created
- `<SQLServerIP>:<EndpointPort>` - the IP of your AG listener and the endpoint listener port
- `<AGNameOnSQLMI>` - your choice of name
- `<ManagedInstanceFQDN>` - the FQDN of your instance
- `<ManagedInstanceName>` - just the instance name

```sql
-- Run on SQL Server
-- Create a distributed availability group for the availability group and database
-- ManagedInstanceName example: 'sqlmi1'
-- ManagedInstanceFQDN example: 'sqlmi1.73d19f36a420a.database.windows.net'
USE MASTER
CREATE AVAILABILITY GROUP [<DAGName>]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
N'<AGNameOnSQLServer>' WITH
(
    LISTENER_URL = 'TCP://<SQLServerIP>:<EndpointPort>',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC,
    SESSION_TIMEOUT = 20
),
N'<AGNameOnSQLMI>' WITH
(
    LISTENER_URL = 'tcp://<ManagedInstanceFQDN>:5022;Server=[<ManagedInstanceName>]',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
);
GO
```
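Once the distributed availability group is created, a quick way to verify it (shown here as a sketch using standard Always On DMVs) is to query the AG metadata on the global primary:

```sql
-- Check the distributed AG and the health of its participants.
-- For a distributed AG, replica_server_name typically contains the AG names
-- rather than server names.
SELECT ag.name AS dag_name,
       ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar
    ON ag.group_id = ar.group_id
LEFT JOIN sys.dm_hadr_availability_replica_states ars
    ON ar.group_id = ars.group_id AND ar.replica_id = ars.replica_id
WHERE ag.is_distributed = 1;
```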
## Set up the Managed Instance link

The final step is to set up the link, which you can do by running the script here. Please ensure that $SQLServerIP is the IP of your AG listener and not the IP of the primary node. The script in the link includes updated parameters from Az.Sql 6.0.0, a version that is not currently available in Azure Cloud Shell. If you are using Azure Cloud Shell, replace the final section of the command with the below:

```powershell
New-AzSqlInstanceLink -InstanceName $ManagedInstanceName -Name $DAGName -PrimaryAvailabilityGroupName $AGNameOnSQLServer -ResourceGroupName $ResourceGroup -SecondaryAvailabilityGroupName $AGNameOnSQLMI -SourceEndpoint $SourceIP -TargetDatabase $DatabaseName
```

Once this is executed, your MI Link should be up and running, and failing over to another node on your AG will not stop your MI Link from syncing.

## Disclaimer

Please note that products and options presented in this article are subject to change. This article reflects MI Link and the documentation in January 2025.

I hope this article was helpful for you; please feel free to share your feedback in the comments section.

# Leveraging GitHub Copilot for T-SQL Code Conversion: A Deep Dive into Code Explainability

Converting database code between different relational database management systems (RDBMS) is an essential part of database migration, especially when moving from Oracle's PL/SQL to SQL Server's T-SQL. Both Oracle and SQL Server have unique syntax, functions, and programming conventions that require significant adjustments when converting complex SQL queries, stored procedures, or functions. While manual conversion can be a painstaking process, GitHub Copilot, an AI-powered code assistant, can ease this burden by offering real-time suggestions and automating many aspects of the conversion.

In the first part of this two-part blog, we explore how GitHub Copilot can be a powerful tool for code conversion, helping developers quickly adapt code from one language or framework to another. By providing context-aware suggestions and completions, Copilot simplifies the process of rewriting and refactoring code. In the second part, we dive deeper into Copilot's explainability feature, showcasing how it enhances the code conversion process from Oracle to Azure SQL.

Code explainability in GitHub Copilot refers to the tool's ability to offer suggestions and comments that help developers understand what a given piece of code is doing. When dealing with database migrations, such as converting Oracle SQL code to Azure SQL (SQL Server), GitHub Copilot's ability to explain, suggest, and refactor code can ease the transition. It can assist in making the conversion process smoother, more efficient, and less error-prone by explaining the logic behind Oracle-specific queries and suggesting corresponding changes in Azure SQL's syntax.

We'll go through multiple examples, analyze the differences between the two languages, and show how GitHub Copilot handles these challenges.

## Understanding the Key Differences: PL/SQL vs T-SQL

Before jumping into examples, it's important to understand a few of the fundamental differences between PL/SQL (Oracle) and T-SQL (SQL Server):

- Syntax: While both are procedural SQL dialects, there are key differences in the way they handle variables, control structures, and flow control.
- Functions: Each platform has its own set of built-in functions (e.g., string manipulation, date handling, etc.), and these need to be mapped correctly during conversion.
- Error Handling: Error handling and exception management differ between the two, with PL/SQL using EXCEPTION blocks and T-SQL using TRY...CATCH (a minimal sketch follows this list).
- Cursor Handling: While both support cursors for iterating through results, their syntax differs.
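As a minimal illustration of the error-handling difference (assuming nothing about your schema), here is how a PL/SQL EXCEPTION block maps onto T-SQL's TRY...CATCH, with ERROR_NUMBER()/ERROR_MESSAGE() playing the role of SQLCODE/SQLERRM:

```sql
-- PL/SQL equivalent for comparison:
--   BEGIN ... EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE(SQLERRM); END;
BEGIN TRY
    SELECT 1 / 0 AS will_fail;  -- force a divide-by-zero error
END TRY
BEGIN CATCH
    PRINT 'Error ' + CAST(ERROR_NUMBER() AS NVARCHAR(10)) + ': ' + ERROR_MESSAGE();
END CATCH;
```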
## Example 1: Complex Stored Procedure Conversion

Let's start with a complex example that involves handling parameters, cursors, and error handling: a PL/SQL stored procedure in Oracle that processes employee records and outputs their details.

### Oracle PL/SQL Example

In Oracle, we might have a procedure that uses a cursor to fetch employee records and print them. The procedure also includes error handling for potential issues during execution.

```sql
CREATE OR REPLACE PROCEDURE GetEmployeeDetails (emp_id IN NUMBER) IS
    CURSOR emp_cursor IS
        SELECT employee_id, first_name, last_name, hire_date
        FROM employees
        WHERE employee_id = emp_id;
    emp_record emp_cursor%ROWTYPE;
    emp_full_name VARCHAR2(100);
BEGIN
    OPEN emp_cursor;
    LOOP
        FETCH emp_cursor INTO emp_record;
        EXIT WHEN emp_cursor%NOTFOUND;
        -- Concatenate first and last name
        emp_full_name := emp_record.first_name || ' ' || emp_record.last_name;
        -- Output employee details
        DBMS_OUTPUT.PUT_LINE('Employee ID: ' || emp_record.employee_id);
        DBMS_OUTPUT.PUT_LINE('Full Name: ' || emp_full_name);
        DBMS_OUTPUT.PUT_LINE('Hire Date: ' || TO_CHAR(emp_record.hire_date, 'YYYY-MM-DD'));
    END LOOP;
    CLOSE emp_cursor;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No employee found with the given ID.');
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('An error occurred.');
END;
```

Key points in Oracle PL/SQL:

- Cursor: PL/SQL uses CURSOR for iterating through rows.
- Concatenation: Oracle uses || for string concatenation.
- Error Handling: PL/SQL uses EXCEPTION blocks for handling errors such as NO_DATA_FOUND.

### Converting to SQL Server T-SQL with GitHub Copilot

When you type the PL/SQL code into an IDE with GitHub Copilot enabled (e.g., Visual Studio Code), Copilot will start suggesting the equivalent SQL Server T-SQL code. Here's how Copilot converts the procedure.
### Copilot's Suggested SQL Server T-SQL Code

```sql
CREATE PROCEDURE GetEmployeeDetails (@emp_id INT)
AS
BEGIN
    DECLARE @first_name NVARCHAR(50), @last_name NVARCHAR(50);
    DECLARE @hire_date DATE;
    DECLARE @emp_full_name NVARCHAR(100);

    BEGIN TRY
        DECLARE emp_cursor CURSOR FOR
            SELECT employee_id, first_name, last_name, hire_date
            FROM employees
            WHERE employee_id = @emp_id;

        OPEN emp_cursor;
        FETCH NEXT FROM emp_cursor INTO @emp_id, @first_name, @last_name, @hire_date;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Concatenate first and last name
            SET @emp_full_name = @first_name + ' ' + @last_name;

            -- Output employee details
            PRINT 'Employee ID: ' + CAST(@emp_id AS NVARCHAR);
            PRINT 'Full Name: ' + @emp_full_name;
            PRINT 'Hire Date: ' + CONVERT(NVARCHAR, @hire_date, 23);

            FETCH NEXT FROM emp_cursor INTO @emp_id, @first_name, @last_name, @hire_date;
        END

        CLOSE emp_cursor;
        DEALLOCATE emp_cursor;
    END TRY
    BEGIN CATCH
        PRINT 'An error occurred.';
    END CATCH
END;
```

Key points in T-SQL:

- Cursor: SQL Server uses DECLARE CURSOR to declare a cursor, similar to PL/SQL but with different syntax for opening and fetching.
- Concatenation: SQL Server uses + for string concatenation.
- Error Handling: SQL Server uses TRY...CATCH for error handling, unlike PL/SQL's EXCEPTION block.

GitHub Copilot provides a T-SQL equivalent for an Oracle PL/SQL stored procedure.

### Explanation of the Conversion Process

- Cursor Declaration and Fetching: PL/SQL uses CURSOR with OPEN, FETCH, and CLOSE to process rows. T-SQL uses DECLARE CURSOR to define the cursor and fetches rows with FETCH NEXT. Both approaches are similar, but T-SQL requires an explicit DEALLOCATE to release the cursor.
- Error Handling: PL/SQL uses EXCEPTION blocks to handle errors like NO_DATA_FOUND. T-SQL uses TRY...CATCH blocks to capture and handle errors, a major difference in error handling between the two systems.
- String Concatenation: PL/SQL uses || for concatenating strings, while T-SQL uses +. This is a simple yet crucial change when converting between the two dialects.

## Example 2: Complex Query with Date Handling

Let's now look at a query that involves date handling and conditional logic. These are often tricky when converting between PL/SQL and T-SQL due to differences in how each platform handles dates.

### Oracle PL/SQL Query Example

```sql
SELECT employee_id, first_name, hire_date,
       CASE
           WHEN hire_date < TO_DATE('2000-01-01', 'YYYY-MM-DD') THEN 'Veteran'
           ELSE 'New Hire'
       END AS employee_status
FROM employees
WHERE hire_date BETWEEN TO_DATE('1990-01-01', 'YYYY-MM-DD') AND SYSDATE;
```

Key points in PL/SQL:

- TO_DATE: Used to convert strings into dates.
- SYSDATE: Returns the current system date.
- CASE: Used for conditional logic to categorize employees as either "Veteran" or "New Hire".

### Copilot's Suggested SQL Server T-SQL Code

```sql
SELECT employee_id, first_name, hire_date,
       CASE
           WHEN hire_date < CONVERT(DATE, '2000-01-01', 23) THEN 'Veteran'
           ELSE 'New Hire'
       END AS employee_status
FROM employees
WHERE hire_date BETWEEN CONVERT(DATE, '1990-01-01', 23) AND GETDATE();
```

Key points in T-SQL:

- CONVERT: SQL Server uses CONVERT for date formatting, similar to Oracle's TO_DATE, but with different syntax and style codes.
- GETDATE(): Equivalent to Oracle's SYSDATE, returning the current system date and time.

In conclusion, GitHub Copilot streamlines code conversion by offering both quick suggestions and detailed explanations. The first part highlighted its ability to assist with code transitions, while the second part focused on how its explainability feature enhances the Oracle to Azure SQL migration, providing guided, efficient, and error-free conversion.
This combination accelerates development and reduces potential issues during migration.

# A quick way to generate Azure SQL migration assessments
## Overview

Azure Migrate doesn't allow users to import SQL Server inventory and discovery information from sources other than the Azure Migrate appliance as of today. As a result, users who cannot deploy the Azure Migrate appliance to discover their SQL Server estate for technical, business, or compliance reasons, or who have already discovered their SQL Server estate using tools like RVTools and want a quick, approximate way to check readiness and total cost savings by moving to Azure, cannot leverage the rich SQL assessment and business case capabilities that Azure Migrate offers.

To help customers quickly get migration readiness and cost savings by migrating to Azure, we are introducing the "Import SQL Server discovery data as CSV" feature in Azure Migrate, which allows users to import the SQL Server inventory as a CSV file and use the discovery data to generate assessments and business cases. This feature is launched in private preview. In this blog post, we will do a deep dive on how to use this feature to generate quick assessments for your off-Azure SQL Server estate.

Here are the high-level steps for importing your off-Azure SQL Server inventory as a CSV:

1. Step-1: Identify the SQL Server Instances and extract the discovery data into a CSV file.
2. Step-2: Validate and upload the inventory data to Azure Migrate.
3. Step-3: Generate and view assessments.

## Prerequisites

- The "Import SQL discovery data as CSV" feature is built on top of the new Azure Migrate experience, which is also in the private preview phase. To use it, you must onboard to the private preview of the new Azure Migrate experience; here is the form to sign up: Azure Migrate Private Preview Sign-Up Survey. We will share the instructions to onboard via the contact details you provide.
- Make sure you have the credentials to connect to the SQL Server Instances of interest.
- Make sure you have the requisite roles and privileges on each SQL Server Instance to access and extract the required metadata from Dynamic Management Views. To generate a login with the required privileges, please run the scripts listed here: Configure custom login for SQL Server Discovery (a rough sketch of what such a login looks like follows this list).
- An Azure subscription. Please note you don't need to deploy an Azure Migrate appliance to use the SQL import CSV feature.
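For orientation only (the linked document is authoritative and may grant a different permission set), a discovery login of this kind usually boils down to server-wide, read-only metadata access:

```sql
-- Hypothetical example: the login name and password are placeholders.
USE master;
CREATE LOGIN [migrate_assessment] WITH PASSWORD = '<strong_password>';
GRANT VIEW SERVER STATE TO [migrate_assessment];    -- DMVs such as sys.dm_os_sys_info
GRANT VIEW ANY DEFINITION TO [migrate_assessment];  -- metadata in sys.databases, sys.master_files
```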
## Step-1: Identify the SQL Server Instances and extract the discovery data into a CSV file

Identify all off-Azure SQL Server Instances you intend to migrate to Azure SQL from either on-premises infrastructure or other cloud platforms like Amazon Web Services (AWS) or Google Cloud Platform (GCP). To extract the discovery data, run the SQL script below against each SQL Server Instance you would like to import into Azure Migrate via CSV and store the output in a CSV file.

```sql
/* PLEASE ENTER THE FULLY QUALIFIED DOMAIN NAME OF THE HOST IN THE LINE BELOW BEFORE RUNNING THE QUERY */
Declare @FullyQualifiedDomainName varchar(200) = '';

With ServerCores AS(
    SELECT cpu_count AS [Logical CPU Count],
        hyperthread_ratio AS [Hyperthread Ratio],
        cpu_count/hyperthread_ratio AS [Physical CPU Count],
        sqlserver_start_time
    FROM sys.dm_os_sys_info
),
Config AS(
    SELECT value_in_use AS [Max Server Memory(MB) In Use]
    FROM sys.configurations
    WHERE name = 'max server memory (MB)'),
UserCount AS(
    SELECT count(database_id) AS [Number User Databases]
    FROM sys.databases
    WHERE database_id > 4
),
DbSize AS(
    SELECT SUM(CAST(Size AS bigint))*8/1024 AS [Max server memory (in MB)]
    FROM sys.master_files
    WHERE database_id > 4
),
TempDbSize AS (
    SELECT SUM(CAST(Size AS bigint))*8/1024 AS [TempDb Size (In MB)]
    FROM sys.master_files
    WHERE db_name(database_id) = 'tempdb'
),
Logins AS (Select count(name) AS [NumOfLogins] from sys.syslogins),
Properties AS (
    select db.name as database_name,
        db.database_id as database_id,
        cast(db.compatibility_level as int) as database_compatibility_level,
        db.collation_name as database_collation,
        db.is_broker_enabled as is_service_broker_enabled,
        '0' as [Number of Nics],
        SERVERPROPERTY ('servername') as instance_name,
        SERVERPROPERTY ('productversion') as instance_version,
        SERVERPROPERTY ('edition') as instance_edition,
        SERVERPROPERTY ('productlevel') as instance_level,
        SERVERPROPERTY ('IsHadrEnabled') as is_hadr_enabled,
        SERVERPROPERTY ('collation') as server_collation,
        CASE WHEN SERVERPROPERTY ('IsClustered') = 1 THEN 'TRUE' ELSE 'FALSE' END as is_failover_clustered,
        COALESCE(SERVERPROPERTY ('IsIntegratedSecurityOnly'), 0) as is_integrated_security_only,
        COALESCE(SERVERPROPERTY ('IsPolyBaseInstalled'), 0) as is_polybase_installed,
        CASE WHEN 'A' = 'a' THEN 0 ELSE 1 END as is_server_case_sensitive,
        /* '' + @host_platform + '' as host_platform,*/
        db.state as database_state,
        /* db.is_auto_close_on as is_auto_close_on,'+ CASE when @check_cdc = 1 THEN 'db.is_cdc_enabled' else '0' END + ' as is_cdc_enabled,*/
        CASE when db.user_access = 0 THEN 1 ELSE 0 END as is_multi_user,
        CASE when db.recovery_model = 0 THEN 0 ELSE 1 END as is_full_recovery_model,
        db.is_read_only as is_read_only,
        case when source_database_id IS NULL THEN 0 ELSE 1 end as is_snapshot,
        case when is_published = 1 OR is_subscribed = 1 OR is_merge_published = 1 or is_distributor = 1 THEN 1 ELSE 0 end as is_replication_enabled,
        db.is_encrypted
    from sys.databases db
    where db.name not in ('master', 'tempdb', 'model', 'msdb') and is_distributor <> 1),
EachDbSize AS(
    SELECT database_name = DB_NAME(database_id),
        CAST(SUM(CAST(size AS bigint)) * 8. / 1024 AS DECIMAL(12,2)) AS [Database size (in MB)],
        convert(varchar(12),CAST((sum(case when physical_name like '%ldf' then 0 else size end)*8.0)/1024.0 AS numeric(8,2))) AS [Data file size (in MB)],
        convert(varchar(12),CAST((sum(case when physical_name like '%ldf' then size else 0 end)*8.0)/1024.0 AS numeric(8,2))) AS [Log file size (in MB)]
    FROM sys.master_files
    WHERE database_id > 4
    GROUP BY database_id)
select @FullyQualifiedDomainName as [Fully qualified domain name],
    instance_name as [Instance Name],
    p.database_name as [Database Name],
    database_state as [Database Status],
    instance_edition [SQL Edition],
    instance_version as [SQL Version],
    [Logical CPU Count],
    [Hyperthread Ratio],
    [Physical CPU Count],
    [TempDb size (in MB)],
    is_failover_clustered as [Is FCI Enabled],
    '' as [Failover cluster name],
    '' as [Failover cluster instance role],
    '' as [Failover cluster shared disk count],
    '' as [Is failover cluster multi subnet],
    [Max server memory (in MB)],
    [Number of Nics],
    instance_level as [Service pack],
    '' as [ESU status],
    'FALSE' as [IS AG enabled],
    instance_level as [Build version],
    'Online' as [Service status],
    'FALSE' as [Is database highly available],
    'FALSE' as [Is part of distributed availability group],
    [Database size (in MB)],
    [Data file size (in MB)],
    [Log file size (in MB)],
    database_compatibility_level as [Compatibility level],
    '' as [Availability group ID],
    '' as [Availability group name],
    '' as [Availability group type],
    '' as [Availability replica ID],
    '' as [Availability replica name],
    '' as [Commit mode],
    '' as [Replica type],
    '' as [Replica state],
    'FALSE' as [Is AG multi subnet],
    '' as [AG replica sync status],
    '' as [AG replica seed mode],
    '' as [AG replica read mode]
from ServerCores, Config, UserCount, DbSize, TempDbSize, Logins, Properties p
LEFT JOIN EachDbSize s on p.database_name = s.database_name
order by p.database_name
```

Collate the output of all SQL Server Instances of interest into a single file.

## Step-2: Validate and upload the CSV file

To validate and upload the CSV file with SQL Server discovery data:

1. Go to the Azure Migrate experience link you received from Microsoft after you signed up for the private preview of this feature. Click on "Create Project" to create a new project (if not done already).
2. Fill in the required fields in the create project wizard.
3. Click on the created project and select the "Using custom Import" option under the "Start Discovery" button.
4. In the Discover page, select "File Type" as "SQL Server inventory (CSV)".
5. (Optional) Click on the "Download" button under Step 1: Download template to view and understand the template expected by Azure Migrate. You can skip this step, as the CSV file generated by the SQL script shared above conforms to the template.
6. Browse and select the inventory file in Step 2 and click on Validate to validate the uploaded SQL inventory data.
7. If the uploaded file doesn't contain any warnings or error messages, validation completes with 0 errors and warnings. If there are any errors or warnings, Azure Migrate shows a count of warning and error messages. To view and resolve them, click on Download ErrorDetails.csv, which lists the error/warning messages corresponding to each database along with the remedial action. Please note that warnings are non-blocking: you can still go ahead and complete the import operation. Errors, however, are blocking in nature:
you will not be allowed to import unless the error messages are resolved.
8. Once the validation phase is completed, click on the Import button to complete the import operation.

## Step-3: Create assessments using imported SQL discovery data

SQL Server Instances and databases uploaded via the CSV import route are treated on par with SQL Server instances discovered by the Azure Migrate appliance. You can view SQL Server Instance properties like version details, number of databases, support status, and all other properties uploaded via the CSV file. The quality of the property details and the assessments generated is directly dependent on the quality of the data uploaded via CSV.

To create SQL assessments for the uploaded SQL Server Instances, go to the Azure Migrate project's resource page and click on the Create Assessments button. Give the assessment a name and click on Add workloads to add the imported and appliance-discovered workloads. In the following page, select the SQL Server instances that have the value "Import" in the Discovery Source column. You can also create assessments on a combination of imported and appliance-discovered workloads. Click the Add button to add the workloads and click on Review + Create Assessments to create the assessment.

For imported SQL Server instances, the quality of assessments is only as good as the data uploaded via CSV, so please ensure that the data accurately depicts the metadata of the SQL Server instance. Like regular SQL assessments, you will have all the assessment configuration options, such as sizing criteria (performance-based vs. as on-premises), migration strategy (optimizing cost vs. migrating to PaaS), comfort factor, etc. By default, Azure Migrate will try to create a performance-based assessment; in case the performance metrics are not available, it defaults to an as-on-premises assessment. An assessment created on the imported SQL Servers will help you know which SQL Server Instances are ready for migration to Azure SQL and the recommended Azure SQL target type.

## Linking and rediscovery

Linking: If the host where the imported SQL Instance is currently hosted is discovered via import of VM servers or via the appliance, the SQL Server Instance will be automatically associated with that host and rolled up under that VM server in the discovery view. The SQL Server instance is uniquely identified based on the FQDN and the SQL Server Instance name; this combination of attributes is also used for conflict resolution.

Rediscovery and precedence rules:

1. If the SQL Server instance is originally imported via CSV and is re-uploaded via CSV, the upload operation rewrites all the properties.
2. If the SQL Server instance is first imported via CSV and then discovered via the Azure Migrate appliance, and the user has provided credentials for deep discovery, the appliance-discovered data overrides the data imported via CSV.
3. If the SQL Server instance is first imported via CSV and then discovered via the Azure Migrate appliance without SQL Server credentials for deep discovery, the appliance-discovered data only adds the additional details the appliance discovered.
4. If the SQL Server Instance is first discovered via the Azure Migrate appliance and is deep discovered (correct SQL Server credentials provided), data later imported via CSV is discarded.
5. If the SQL Server Instance is first discovered via the Azure Migrate appliance and is only inventoried (correct SQL Server credentials not provided), a later CSV import completely overwrites the appliance-discovered details.

## How to onboard to the private preview

To onboard to the preview of SQL inventory import as CSV, please fill in the Signup form for Private preview.

## What's next

1. In the next release, we will ship the capability to create a business case for SQL Servers imported via CSV.
2. We will also build support for Availability Groups and Failover Cluster Instances for the SQL Server instances imported via CSV.

# SQL MI Restore with valid SAS key failing with Operating System Error 86
We will provide details about a workaround for resolving the "Operating System Error 86" encountered during database restoration from Azure Storage on SQL Managed Instance. In this scenario, the storage account was not behind a firewall and the credential already had a valid SAS token.
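For context, a native restore from a SAS-backed container is normally set up like the following sketch (all names and the token are placeholders; the credential name must match the container URL exactly):

```sql
-- Container-scoped credential using a Shared Access Signature.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading question mark>';

-- Native restore from URL on the managed instance.
RESTORE DATABASE [MyDatabase]
FROM URL = 'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase.bak';
```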
# Azure SQL Database idle sessions are killed after about 30 minutes when "Proxy" connection policy

Let's see how the connection policy affects idle sessions on our Azure SQL Database connections. As we will see, the Azure SQL gateways kill idle sessions after 30 minutes when the "Proxy" connection policy is set.
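To see which of your own sessions would be affected, one possible check (a sketch using a standard DMV; the idle-time arithmetic assumes the server clock is UTC, as it is in Azure SQL Database) is:

```sql
-- Approximate idle time per user session.
SELECT session_id,
       login_name,
       status,
       last_request_end_time,
       DATEDIFF(MINUTE, last_request_end_time, GETUTCDATE()) AS idle_minutes
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY idle_minutes DESC;
```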
# How to Change Collation of an Azure SQL Managed Instance When There Are Dependency Errors

In this article, I will explain how to change the collation for your Azure SQL Managed Instance when the following scenario is encountered. You try to change the database collation with the following T-SQL:

```sql
USE [master]
GO
ALTER DATABASE [testdb] COLLATE SQL_Latin1_General_CP1_CS_AS
GO
```

And it fails with the following message:

```
Msg 5075, Level 16, State 1, Line 24
The column 'tablex.colx' is dependent on database collation. The database collation cannot be changed if a schema-bound object depends on it. Remove the dependencies on the database collation and then retry the operation.
```

## Steps in brief

1. Export the database to a bacpac using SqlPackage.
2. Modify the collation by editing the model.xml file.
3. Import the database using SqlPackage, overriding the model.xml path.
4. Change the database name or modify the app connection string to use the new database.

## Considerations

- Schedule a maintenance window for your application during the process and stop the workload to prevent data loss.
- Ensure you have the latest SqlPackage version: Download and install SqlPackage - SQL Server | Microsoft Learn
- Review the different connection strings depending on the connection method outlined in these examples: SqlPackage Export - SQL Server | Microsoft Learn
- This article applies to Azure SQL DB and Azure SQL Managed Instance.

## Steps in detail

1. Start the maintenance window for your application.
2. Export the database using SqlPackage:

```
sqlpackage.exe /Action:Export /ssn:tcp:<ServerName>.database.windows.net,1433 /sdn:<DatabaseName> /su:<UserName> /sp:<Password> /tf:<TargetFile>
```

3. Rename the *.bacpac file to *.zip so you can open it with the file explorer.
4. Copy the model.xml to a local folder, for example "C:\Temp\model.xml".
5. Edit "C:\Temp\model.xml" with the desired collation and save the file. For example:

```xml
<Property Name="Collation" Value="Thai_CI_AS" />
```

6. Rename the file extension modified in step 3 from *.zip back to *.bacpac.
7. Run the import using sqlpackage.exe. Use the flag /ModelFilePath:C:\Temp\model.xml to override the model.xml in the .bacpac file:

```
sqlpackage.exe /Action:Import /tsn:<server>.database.windows.net /tdn:<database> /tu:<user> /tp:<password> /sf:"C:\Temp\database.bacpac" /ModelFilePath:<path>model.xml
```

8. When the import operation is completed, change the database name, or modify the application connection string to use the new database name. You can also check the new collation setting in the database properties using SSMS, or with the short T-SQL check shown after these steps.
9. Stop the maintenance window for your application and run the workload. Ensure that the application is working as expected.
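If you prefer T-SQL over the SSMS properties dialog, the same collation check can be done with DATABASEPROPERTYEX (the database name below is the example used in this article):

```sql
-- Returns the database collation after the import.
SELECT DATABASEPROPERTYEX('testdb', 'Collation') AS DatabaseCollation;
```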
# Export Azure SQL Database | Advanced Scenarios

## Introduction

Exporting an Azure SQL database is a common request from Azure SQL DB customers. In this article, we are going to list some advanced scenarios and show how the export can be achieved through various tools, including the Azure portal, Azure CLI, and PowerShell. In addition, this article provides alternative methods for environments that use private endpoints and deny public access.

## Scenarios

In this section, we go through the scenarios and provide a thoughtful insight on each one.

Note: Import/Export using Private Link is now in public preview; more information is in the blog article: Import Export using Private Link now in Preview - Microsoft Tech Community

### Export via Azure Portal to Storage Account

This can be a seamless solution to do the database export when the SQL server allows public access (the "Deny public access" option untoggled on the SQL DB Azure portal); otherwise you might get an error like:

```
An unexpected error was returned by the SQL engine while preparing the operation inputs. Error: 47073, State: 1.
```

To overcome such an error, you can TEMPORARILY set "Deny public access" to NO during the export operation.

Note: You don't need to worry; if you set "Deny public access" to "No", it doesn't mean that everyone will be able to connect from outside; you can still restrict the access using the database firewall. You can find more information at: Connectivity settings for Azure SQL Database and Azure Synapse Analytics - Azure SQL Database and Azure Synapse Analytics | Microsoft Docs

### Export via REST API

You can use the Export REST API to export the database. This can be done programmatically or from tools like Postman; you can also try it from the Azure documentation using the "Try it" button. More information can be found at: Databases - Export - REST API (Azure SQL Database) | Microsoft Docs

Here is an example using Postman. Request body:

```json
{
  "storageKeyType": "StorageAccessKey",
  "storageKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==",
  "storageUri": "https://xxxxxxxxxxxxxxxxx.blob.core.windows.net/testc",
  "administratorLogin": "xxxxxxxxxxxx",
  "administratorLoginPassword": "xxxxxxxxxxxxxx",
  "authenticationType": "Sql",
  "networkIsolation": {
    "sqlServerResourceId": "/subscriptions/xxxxxxxxxxxxx/resourceGroups/customer/providers/Microsoft.Sql/servers/xxxxxxxxx",
    "storageAccountResourceId": "/subscriptions/xxxxxxxxxxx/resourceGroups/customer/providers/Microsoft.Storage/storageAccounts/xxxxxxxxx"
  }
}
```

The below error may occur if "Deny public access" is enabled; the solution is to enable public access temporarily:

```json
{"error":{"code":"ResourceNotFound","message":"The specified resource 'https://management.northeurope.control.database.windows.net/modules/AzureResourceManager.dsts/subscriptions/<yoursubscriptionid>/resourceGroups/customer/providers/Microsoft.Sql/servers/<servername>/databases/<dbname>/export?api-version=2021-02-01-preview ' was not found."}}
```

Note: regarding the networkIsolation setting, this feature is currently under development and not ready for public consumption. More information can be found at:

- New-AzSqlDatabaseExport with network isolation · Discussion #13937 · Azure/azure-powershell · GitHub
- Error when calling New-AzSqlDatabaseExport with UseNetworkIsolation on $true · Issue #13964 · Azure/azure-powershell · GitHub

### Export via SQLPackage

This can be a best-bet solution for many scenarios, both to overcome limitations on database size and to export a SQL DB via private endpoint from a VM running in the same VNET.

Note: you can export to a local disk or an Azure file share, but you cannot use Azure Blob; details can be found at Lesson Learned #25: Export/Import Azure SQL Database using Azure File Service? - Microsoft Tech Community

Therefore, you can export the .bacpac locally or to a file share on a VM in the same VNET as the private endpoint of the SQL server using SQLPackage.exe/SSMS, then copy the bacpac to Azure Blob (if required). For example: Using SQLPackage to import or export SQL Server and Azure SQL DB - Microsoft Tech Community
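As a concrete sketch (reusing the same SqlPackage export syntax shown in the collation article above; server, database, and path values are placeholders), an export run from such a VM to a local folder looks like:

```
sqlpackage.exe /Action:Export /ssn:tcp:<ServerName>.database.windows.net,1433 /sdn:<DatabaseName> /su:<UserName> /sp:<Password> /tf:C:\Temp\<DatabaseName>.bacpac
```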
### Export via SQL Server Management Studio

You can export using SSMS from a VM running in the same VNET as the private endpoint of the SQL server, from SQL to blob storage or a file share. Use the SSMS "Export data-tier application" wizard to export the Azure SQL database to a .bacpac file; the .bacpac can be stored in Azure blob storage or a file share.

1. Right-click the database on the logical SQL server in SSMS > Tasks > select the "Export data-tier application" wizard.
2. Select the location to store the bacpac file.
3. You can select a subset of the tables in the export settings on the Advanced tab. Click Next to view the summary of the export.
4. Once you click Finish, and upon completion of the process, you will be able to view the bacpac file in the specified destination.

More information at the blog: Using data-tier applications (BACPAC) to migrate a database from Managed Instance to SQL Server - Microsoft Tech Community

### Export via PowerShell / CLI

The New-AzSqlDatabaseExport cmdlet can be used to submit an export database request to the Azure SQL Database service. Note that you have to enable public access to export the database via this method; with "Deny public access" set to YES, you may encounter an error.

PowerShell command to export the database:

```powershell
New-AzSqlDatabaseExport -ResourceGroupName "customer" -ServerName "<your server name>" -DatabaseName "<your db name>" -StorageKeyType "StorageAccessKey" -StorageKey "<your storage access key>" -StorageUri "https://xxxxxxxxxxxxxxxxx.blob.core.windows.net/testc/database01.bacpac" -AdministratorLogin "<your login name>"
```

To check the status of the export request, use the Get-AzSqlDatabaseImportExportStatus cmdlet:

```powershell
Get-AzSqlDatabaseImportExportStatus -OperationStatusLink https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx/providers/Microsoft.Sql/locations/northeurope/importExportOperationResults/xxxxxxx-xxxxxxx?api-version=2021-02-01-preview
```

Use the Database Operations - Cancel API or the PowerShell Stop-AzSqlDatabaseActivity command to cancel an export request:

```powershell
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
```

- Please make a note of some of the considerations when using the PowerShell method.
- Also check Azure SQL CLI at: az sql db | Microsoft Docs
- Check out: How to cancel Azure SQL Database Import or Export operation - Microsoft Tech Community

### Database Copy

You can use database copy from the Azure portal to copy the database to a different server, then perform the export to Azure Blob from the copy; later on you can clean up the copied database.
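A minimal sketch of that flow in T-SQL (database and server names are placeholders; CREATE DATABASE ... AS COPY OF is the documented Azure SQL database-copy syntax):

```sql
-- Run on the master database of the target logical server.
CREATE DATABASE [mydb_copy] AS COPY OF [source-server].[mydb];
-- Export [mydb_copy] to a bacpac with your preferred method, then clean up:
-- DROP DATABASE [mydb_copy];
```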
The database export can also be done via automation; more information can be found at:

- Blog: How to automate Export Azure SQL DB to blob storage use Automation account - Microsoft Tech Community
- Video: SQL Insider Series: Exporting Azure SQL DB BACPAC file to Azure with Azure Automation | Data Exposed - YouTube

## Additional References

- Export a database to a BACPAC file - Azure SQL Database & Azure SQL Managed Instance | Microsoft Docs
- Using Azure Import/Export to transfer data to and from Azure Storage | Microsoft Docs
- Configure Azure Storage firewalls and virtual networks | Microsoft Docs
- Connectivity settings for Azure SQL Database and Azure Synapse Analytics - Azure SQL Database and Azure Synapse Analytics | Microsoft Docs
- Automate native database backup of Azure SQL Managed instance to Azure blob storage - Microsoft Tech Community

## Disclaimer

Please note that products and options presented in this article are subject to change. This article reflects the database export options available for Azure SQL Database in February 2022.

## Closing remarks

We hope you find this article helpful. If you have any feedback, please do not hesitate to provide it in the comment section below.

Abhishek Shaha (Author)
Ahmed Mahmoud (Co-Author)