Azure SQL Database
Lesson Learned #519: Reusing Connections in Azure SQL DB: How Connection Pooling Caches Your Session
A few days ago, I was working on a case where a customer reported unexpected behavior in their application: even after switching the connection policy from Proxy to Redirect, connections were still using Proxy mode. After investigating, we found that the customer was using connection pooling, which caches connections for reuse. Even after the connection policy changed, existing connections continued using Proxy mode because they had already been established with that setting; the new policy applies only to newly created connections, not to ones reused from the pool. To confirm this, we ran a test using .NET and Microsoft.Data.SqlClient to analyze how the connection pool behaves and whether connections actually switch to Redirect mode when the policy changes.

How Connection Pooling Works

Connection pooling is designed to reuse existing database connections instead of creating a new one for every request. This improves performance by reducing latency and avoiding unnecessary authentication handshakes. However, once a connection is established, it is cached with its original settings, including:

- Connection policy (Proxy or Redirect)
- Authentication mode
- Connection encryption settings

This means that if you change the connection policy but reuse a pooled connection, it will retain its original mode. The only way to apply the new policy is to create a new physical connection that does not come from the pool.

Testing Connection Pooling Behavior

To test the connection pooling behavior, I wrote a small C# program that opens a connection, reports the port in use, and closes the connection, repeating this process 10,000 times. The idea was to track active connections and check whether the port and connection policy changed after modifying the connection policy. Initially, I attempted to use netstat -ano to track active connections and monitor the local port used by each session.
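As an aside, the caching behavior itself can be illustrated without a database at all. Below is a deliberately simplified toy pool in Python; it is not how Microsoft.Data.SqlClient implements pooling, just a sketch of why a reused connection keeps the policy it was created with:

```python
# Toy illustration of why pooled connections keep their original settings.
# Simplified sketch only -- not how Microsoft.Data.SqlClient actually works.

class PhysicalConnection:
    def __init__(self, policy):
        # The policy is fixed when the physical connection is established.
        self.policy = policy

class ConnectionPool:
    def __init__(self):
        self._idle = []

    def open(self, requested_policy):
        # Reuse an idle connection if one exists, regardless of the policy
        # requested now; otherwise establish a new physical connection.
        if self._idle:
            return self._idle.pop()
        return PhysicalConnection(requested_policy)

    def close(self, conn):
        # "Closing" just returns the connection to the pool, settings intact.
        self._idle.append(conn)

pool = ConnectionPool()
conn = pool.open("Proxy")      # first connection established with Proxy
pool.close(conn)               # returned to the pool

conn2 = pool.open("Redirect")  # policy changed, but the pooled connection wins
print(conn2.policy)            # still "Proxy"

fresh = ConnectionPool()       # a fresh pool (e.g. after clearing the pool)
print(fresh.open("Redirect").policy)  # "Redirect"
```

In practice, pooled connections can be discarded by calling SqlConnection.ClearAllPools() (or SqlConnection.ClearPool for a single connection), by setting Pooling=False while testing, or by restarting the application, so that new physical connections pick up the Redirect policy.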
Unfortunately, in Azure SQL Database, local port information is not reported, making it difficult to confirm whether a connection was truly being reused at the OS level. Despite this limitation, by analyzing the session behavior and connection reuse patterns, we were able to reach a clear conclusion.

```csharp
using System;
using System.Data;
using System.Diagnostics;
using System.Text.RegularExpressions;
using System.Threading;
using Microsoft.Data.SqlClient;

namespace InfoConn
{
    class Program
    {
        static void Main()
        {
            string connectionStringProxy = "Server=tcp:servername.database.windows.net,1433;Database=db1;User Id=user1;Password=..;Pooling=True;";

            Console.WriteLine("Starting Connection Pooling Test");

            for (int i = 0; i < 10000; i++)
            {
                using (SqlConnection conn = new SqlConnection(connectionStringProxy))
                {
                    conn.Open();
                    ShowConnectionDetails(conn, i);
                }
                Thread.Sleep(5000);
            }

            Console.WriteLine("Test complete.");
        }

        static void ShowConnectionDetails(SqlConnection conn, int attempt)
        {
            string query = "SELECT session_id, client_net_address, local_net_address, auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID;";
            using (SqlCommand cmd = new SqlCommand(query, conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine($"[Attempt {attempt + 1}] Session ID: {reader["session_id"]}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Client IP: {reader["client_net_address"]}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Local IP: {reader["local_net_address"]}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Auth Scheme: {reader["auth_scheme"]}");
                }
            }
            RetrievePortInformation(attempt);
        }

        static void RetrievePortInformation(int attempt)
        {
            try
            {
                int currentProcessId = Process.GetCurrentProcess().Id;
                Console.WriteLine($"[Attempt {attempt + 1}] PID: {currentProcessId}");

                string netstatOutput = RunNetstatCommand();
                var match = Regex.Match(netstatOutput, $@"\s*TCP\s*(\S+):(\d+)\s*(\S+):(\d+)\s*ESTABLISHED\s*{currentProcessId}");
                if (match.Success)
                {
                    string localAddress = match.Groups[1].Value;
                    string localPort = match.Groups[2].Value;
                    string remoteAddress = match.Groups[3].Value;
                    string remotePort = match.Groups[4].Value;
                    Console.WriteLine($"[Attempt {attempt + 1}] Local IP: {localAddress}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Local Port: {localPort}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Remote IP: {remoteAddress}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Remote Port: {remotePort}");
                }
                else
                {
                    Console.WriteLine($"[Attempt {attempt + 1}] No active TCP connection found in netstat.");
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"[Attempt {attempt + 1}] Error retrieving port info: {ex.Message}");
            }
        }

        static string RunNetstatCommand()
        {
            using (Process netstatProcess = new Process())
            {
                netstatProcess.StartInfo.FileName = "netstat";
                netstatProcess.StartInfo.Arguments = "-ano";
                netstatProcess.StartInfo.RedirectStandardOutput = true;
                netstatProcess.StartInfo.UseShellExecute = false;
                netstatProcess.StartInfo.CreateNoWindow = true;
                netstatProcess.Start();
                string output = netstatProcess.StandardOutput.ReadToEnd();
                netstatProcess.WaitForExit();
                return output;
            }
        }
    }
}
```

Improving Azure SQL Database reliability with accelerated database recovery in tempdb
We are pleased to announce that in Azure SQL Database, accelerated database recovery is now enabled in the tempdb database to bring instant transaction rollback and aggressive log truncation for transactions in tempdb. The same improvement is coming to SQL Server and Azure SQL Managed Instance.

Introducing Azure Database Fleet Manager
Azure SQL Database is powering some of the largest SaaS solutions on the planet, serving millions of tenants. It does that through a combination of multiple multi-tenancy models that support different requirements and use cases, from a few users to millions. In this article, we present a new Azure Database platform capability (in private preview) that helps SaaS ISVs and solution providers building multi-tenant data apps by removing the complexity of managing fleets of hundreds of thousands of databases, letting them focus on finding their optimal price-performance ratio.

Seamless end-to-end SQL Server migration to Azure with Azure Arc
Migrating your on-premises SQL Server to Azure used to require multiple tools and involve several disconnected steps. We have addressed these challenges with an integrated, all-in-one migration experience for Arc-enabled SQL Servers. Our new solution eliminates the need for additional software or tools, requiring only Arc-enablement of your SQL Server to complete the entire end-to-end migration journey. We refer to this experience as a journey because the migration process can span several days or even weeks. Our solution manages every step along the way, allowing you the flexibility to pick up where you left off at any time.

About the solution

The Arc-enabled migration integrates all steps of the migration journey into a single, simple-to-use experience. The solution starts by providing an overview of the benefits of Azure SQL services and modernizing your SQL Server in Azure. It offers continuous automated assessments of your SQL Server databases, providing recommendations for migration to various Azure SQL destinations. Based on these recommendations, an appropriate Azure SQL destination is suggested, tailored to your workload needs. Thereafter, you can choose to provision the recommended Azure SQL service in Azure and start the migration process. Throughout the process, you can monitor the ongoing migrations, evaluate data replicated in Azure, and control the cutover point to Azure according to your business requirements.

Figure 1: Integrated Arc-enabled end-to-end migration experience.

Note: Functionality, look, and feel of preview product experiences are subject to change. This release is limited to migrating SQL Server databases to Azure SQL Managed Instance only, using the link feature as one of the best-performing minimum-downtime migration solutions. It does not provide other migration options or destinations at this time.

Hands-on

We love hearing back from our customers!
Your participation in the private preview and working with the product group can influence the product roadmap. If you're interested in evaluating your SQL Server workloads for migration to Azure or are ready to migrate, please fill out our application form to request an invitation to the private preview: https://aka.ms/arc-migrations-preview Our product team will select candidates on an ongoing basis based on onboarding capacity.

Additional resources

Migration overview from SQL Server to Azure (SQL Server enabled by Azure Arc)

Azure Data Studio Retirement
We’re announcing the upcoming retirement of Azure Data Studio (ADS) on February 6, 2025, as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

Why Retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here’s why:

- Focus on innovation: VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.
- Streamlined tools: Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

Why Transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We’re committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here’s how:

Modern development environment: VS Code is a lightweight, extensible, and community-supported code editor trusted by millions of developers. It provides:

- Regular updates.
- An active extension marketplace.
- A seamless cross-platform experience for Windows, macOS, and Linux.
Comprehensive SQL features: With the MSSQL extension in VS Code, you can:

- Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
- Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
- Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
- Streamline development with scripting, object modifications, and a unified SQL experience.
- Optimize performance with an enhanced Query Results Pane and execution plans.
- Integrate with DevOps and CI/CD pipelines using SQL Database Projects.

Stay tuned for upcoming features; we’re continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!

Streamlined workflow: VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

Transitioning to Visual Studio Code: What You Need to Know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here’s why you can feel confident about this transition:

No loss of functionality: If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you’ll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.

Familiar features, enhanced experience: VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.
Cross-platform and extensible: Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

Continued Support

With the Azure Data Studio retirement, we’re committed to supporting you during this transition:

- Documentation: Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
- Community Support: Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
- GitHub Issues: If you encounter any issues, submit a request or report bugs on the MSSQL extension’s GitHub repository.
- Microsoft Support: For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you’re equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We’re excited to see what you build with VS Code!

Availability metric for Azure SQL DB is now generally available
Azure SQL Database is the modern cloud-based relational database service to power a wide variety of applications, including mission-critical, resource-intensive, and the latest generative AI applications. Azure SQL Database provides an industry-leading availability SLA of 99.99%. We know customers want to monitor the availability of critical Azure services like Azure SQL Database in a granular, consistent way and in near real time with high-quality data. We are pleased to announce the general availability of the Availability metric for Azure SQL DB, enabling you to monitor SLA-compliant availability. This Azure Monitor metric is emitted at a one-minute frequency, with up to 93 days of history. Typically, the latency to display availability is less than three minutes. You can visualize the metric in Azure Monitor and set up alerts too.

Availability is determined based on the database being operational for connections. A minute is considered downtime (unavailable) for a given database if all continuous attempts by the customer to establish a connection to the database within that minute fail. If there is intermittent unavailability, the duration of continuous unavailability must exceed the one-minute threshold to be considered downtime.

Availability metric data is applicable for a database in the DTU or vCore purchasing model and in all the service tiers (Basic, Standard, Premium, General Purpose, Business Critical, and Hyperscale). Both singleton and elastic pool deployments are supported. You can monitor the metric by adding the Availability metric in the portal. If your database is configured to send metrics to Log Analytics under "Diagnostic settings", you can also query the Availability metric data there. For comprehensive details on the Availability metric, such as the logic used for computing availability, please refer to the documentation. To learn more about Azure SQL Database service level agreements, refer to the SLA.
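The per-minute rule described above (a minute counts as downtime only when every connection attempt within it fails, and as available otherwise, including when there is no activity) can be expressed as a small function. This is an illustrative sketch of the documented logic, not the service's actual implementation:

```python
def minute_availability(attempt_results):
    """Availability value for one wall-clock minute.

    attempt_results: list of booleans, one per connection attempt observed
    in that minute (True = connection succeeded, False = failed).
    Returns 100 if the minute counts as available, 0 if it is downtime.
    """
    if not attempt_results:
        # No connection activity observed: the metric reports 100%.
        return 100
    # The minute is downtime only when ALL attempts in it failed.
    return 0 if not any(attempt_results) else 100

print(minute_availability([True, False, True]))  # 100: intermittent failures
print(minute_availability([False, False]))       # 0: every attempt failed
print(minute_availability([]))                   # 100: no activity observed
```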
Frequently Asked Questions

What is the "Availability" metric?
It is a metric available in the Azure portal to track SLA-compliant availability at an individual SQL database level.

What does SLA-compliant mean?
Availability is determined based on the database being operational for connections. A minute is considered unavailable for a given database if all continuous attempts by the customer to establish a connection to the database within the minute fail. In the case of intermittent unavailability, the duration of continuous unavailability must exceed the one-minute threshold to be considered downtime.

How do you view the metric in the portal?
In the Azure portal, select your SQL database and under "Monitoring", select "Metrics". In the "Metric" dropdown, choose "Availability".

What is the granularity of the Availability metric?
One minute.

What are the possible values for the Availability metric for each one-minute data point?
Possible values are either 100% or 0.

Can I use Azure Monitor to centrally monitor and configure alerts for the Availability metric?
Yes.

How is the downtime minute boundary aligned?
It is aligned to wall-clock time, for example 9:00 or 9:01.

Is the Availability metric also presented for the read-only replica of a Business Critical tier database?
No.

Is the Availability metric also presented for Hyperscale named replicas?
Yes.

Does the Availability metric support private link connections?
Yes.

Does the Availability metric support both proxy and redirect connection policies?
Yes, both are supported.

What data does the Availability metric show in cases of an outage?
When no connection activity to the database is observed, the Availability metric will show 100%. Availability shows 0 when only failed connections are observed for the entire minute.
Depending on the type of outage, connection activity may or may not be observed.

Exciting new T-SQL features: Regex support, Fuzzy matching, and bigint support in DATEADD – preview
We are thrilled to announce the public preview of three powerful T-SQL features that will elevate your SQL queries: Regular Expressions (Regex) support, Fuzzy string-matching, and bigint support in DATEADD. These enhancements are now available in Azure SQL Database and SQL database in Microsoft Fabric, offering developers more flexibility, efficiency, and precision in their data operations. They will also help simplify complex operations that previously required workarounds or external processing. In this post, we’ll explore the capabilities of each feature, provide practical examples, and explain how they will transform the way you write T-SQL queries.

Regular Expressions (Regex) support in T-SQL

Regular expressions are a powerful tool for pattern matching and data extraction. They allow you to search, validate, and manipulate text based on specific patterns. With native Regex support in T-SQL, you can now use POSIX-compliant regular expressions to perform sophisticated string operations directly within your SQL queries.

Advantages of Regex-based pattern matching:

- Flexibility: Easily create complex search patterns to match specific criteria.
- Efficiency: Perform advanced string operations in a single query, reducing the need for multiple steps.
- Data validation: Ensure data meets specific formatting requirements.
- Data transformation: Standardize and clean your data by applying Regex patterns.

Here is the list of Regex functions included in this release:

- REGEXP_LIKE: Returns TRUE if a string matches a regular expression pattern, or FALSE otherwise.
- REGEXP_COUNT: Returns the number of times a regular expression pattern matches in a string.
- REGEXP_INSTR: Returns the starting or ending position, based on the specified option, of the given occurrence of a regular expression pattern in a string.
- REGEXP_REPLACE: Returns a modified string in which occurrences of the regular expression pattern are replaced with a replacement string.
- REGEXP_SUBSTR: Returns a substring that matches a regular expression pattern from a string.

If your current database compatibility level is below 170 and you have a use case where you want to leverage the REGEXP_LIKE function but need additional time to perform the complete analysis and testing before switching the compatibility level, you can use the REGEXP_COUNT or REGEXP_INSTR functions as an interim solution.

Example:

```sql
SELECT [Name], Email
FROM (VALUES
    ('John Doe', 'john@contoso.com'),
    ('Alice Smith', 'alice@fabrikam.com'),
    ('Bob Johnson', 'bob@fabrikam.net'),
    ('Charlie Brown', 'charlie@contoso.co.in')) AS e (Name, Email)
WHERE REGEXP_COUNT(e.Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$') > 0
--OR REGEXP_INSTR(e.Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$') > 0;
```

To learn more about the feature, please visit:
Learn page – Regular expressions
Private preview blog – https://aka.ms/regex-prpr-blog

Fuzzy String-Matching

Real-world data often contains inconsistencies, such as typos, spelling variations, or minor errors. Fuzzy string-matching functions help you identify and match similar strings, making it easier to match, clean, and analyze your data. These functions include EDIT_DISTANCE, EDIT_DISTANCE_SIMILARITY, JARO_WINKLER_DISTANCE, and JARO_WINKLER_SIMILARITY.

Advantages of fuzzy match:

- Handle data inconsistencies: Enhance the string search experience by retrieving similar and approximate matches, addressing spelling variations, typos, missing letters, swapped characters, etc.
- Improved data quality: Enhance data deduplication and entity resolution tasks by accurately matching similar records, reducing the need for manual data cleansing.
- Flexible matching: Use different algorithms to suit various use cases and data characteristics.
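These fuzzy-match functions are based on classic string-similarity algorithms. As a point of reference, here is a minimal Levenshtein edit-distance implementation in Python, illustrating the algorithm that EDIT_DISTANCE computes (an illustration only, not SQL Server's internal code):

```python
def edit_distance(s, t):
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, or substitutions turning s into t."""
    m, n = len(s), len(t)
    # prev[j] holds the distance between s[:i-1] and t[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete from s
                          curr[j - 1] + 1,     # insert into s
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[n]

# Pairs taken from the sample data in this article:
print(edit_distance("Definitely", "Definately"))  # 1 (one substitution)
print(edit_distance("February", "Febuary"))       # 1 (one deletion)
print(edit_distance("kitten", "sitting"))         # 3
```

A small distance means the strings are close; the similarity variants of these functions normalize such distances into a score, which is often easier to threshold when matching records.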
Here is the list of fuzzy match functions being introduced:

- EDIT_DISTANCE: Calculates the number of characters that need to be substituted, inserted, or deleted to transform one string into another.
- EDIT_DISTANCE_SIMILARITY: Returns a similarity score based on the edit distance.
- JARO_WINKLER_DISTANCE: Calculates the distance between two strings using the Jaro-Winkler algorithm.
- JARO_WINKLER_SIMILARITY: Returns a similarity score using the Jaro-Winkler algorithm.

Example: Here is an example query with all the functions:

```sql
SELECT t.source_string, t.target_string,
    EDIT_DISTANCE(t.source_string, t.target_string) AS ED_Distance,
    JARO_WINKLER_DISTANCE(t.source_string, t.target_string) AS JW_Distance,
    EDIT_DISTANCE_SIMILARITY(t.source_string, t.target_string) AS ED_Similarity,
    CAST(JARO_WINKLER_SIMILARITY(t.source_string, t.target_string) * 100 AS int) AS JW_Similarity
FROM (VALUES
    -- Customer names (typos & variants)
    ('Johnathan Doe', 'Jonathon Doe'),
    ('Elizabeth Smith', 'Elizabth Smithe'),
    ('Christopher Nolan', 'Kristopher Nolen'),
    ('Catherine Johnson', 'Katherine Jhonson'),
    ('Michael Brown', 'Micheal Browne'),
    -- Product names (typos & variants)
    ('Surface Pro', 'SurfacePro'),
    ('XBox 10', 'XBox X'),
    ('Azure SQL Database', 'Azure SQL DB'),
    ('SQL Server', 'SQLServer'),
    -- Addresses (typos & variants)
    ('123 Main St, New York, NY', '123 Main Street, NYC'),
    ('456 Elm Ave, Los Angeles, CA', '456 Elm Av., LA, CA'),
    ('789 Oak Dr, Houston, TX', '789 Oak Drive, Houston TX'),
    ('321 Pine Blvd, Chicago, IL', '321 Pyn Bld, Chicago IL'),
    ('654 Maple Rd, Miami, FL', '654 Mpl Rd., Miami'),
    -- Common words (spelling mistakes & variations)
    ('Accommodate', 'Acommodate'),
    ('Catalogue', 'Catalog'),
    ('Definitely', 'Definately'),
    ('February', 'Febuary'),
    ('Final', 'Finale'),
    ('Organise', 'Organize'),
    ('Programme', 'Program'),
    ('Receive', 'Recieve'),
    ('Separate', 'Seperate')
) AS t(source_string, target_string);
```

To learn more about the feature, please visit the learn page.

DATEADD (bigint support)

The DATEADD function now supports bigint for the number parameter, enabling precise calculations with large interval values. This update removes the previous integer-only limitation, simplifying high-precision datetime computations.

Advantages of bigint support in DATEADD:

- It helps perform accurate date and time calculations, especially with precise units like MICROSECOND or NANOSECOND.
- The earlier limit of int could result in an overflow if the interval value is large; bigint ensures safe calculations for historical or long-term data.

Examples: Here are a couple of examples showcasing the bigint support in DATEADD.

```sql
DECLARE @datetime2 AS DATETIME2;
SET @datetime2 = '2024-02-26 01:01:01.1111111';

-- Add a very large number of nanoseconds to the datetime
SELECT DATEADD(NANOSECOND, 2223372036854775807, @datetime2) AS NewDateTime;

-- Calculate the difference in milliseconds from the current date until
-- December 31, 9999 using DATEDIFF_BIG, then add those milliseconds
-- back to the current date and time
SELECT DATEADD(MILLISECOND, DATEDIFF_BIG(MILLISECOND, GETDATE(), '9999-12-31'), GETDATE());
```

To learn more, please visit the learn page.

Conclusion

These new T-SQL features (Regex support, fuzzy string-matching, and bigint support in DATEADD) empower developers to write more efficient, flexible, and precise SQL queries. Whether you are cleaning and validating data, performing complex string manipulations, matching similar strings, or handling high-precision date calculations, these enhancements will significantly improve your data management capabilities. We encourage you to explore these features in Azure SQL Database and SQL DB in Fabric and provide your valuable feedback. Stay tuned for more updates and enhancements in the future.
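As a footnote to the DATEADD change: why a 32-bit interval falls short is easy to see with plain date arithmetic. A signed 32-bit int tops out around 2.1 billion, which is roughly 35 minutes' worth of microseconds. The Python sketch below illustrates the scale of the problem (illustrative only; the T-SQL examples above show the actual syntax):

```python
from datetime import datetime, timedelta

INT32_MAX = 2**31 - 1  # the old limit on DATEADD's number argument

# A 32-bit interval runs out quickly at fine granularities:
print(timedelta(microseconds=INT32_MAX))  # about 35 minutes 47 seconds

# The DATEDIFF_BIG scenario: milliseconds from a fixed date to 9999-12-31
# is far beyond int range, but adding it back works with 64-bit math.
start = datetime(2024, 2, 26, 1, 1, 1)
ms_to_9999 = (datetime(9999, 12, 31) - start) // timedelta(milliseconds=1)
print(ms_to_9999 > INT32_MAX)                      # True: needs bigint
print(start + timedelta(milliseconds=ms_to_9999))  # lands on 9999-12-31
```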
Thank you for being a part of our community and helping us make Azure SQL Database and SQL Server the best platform for your data needs!

Lesson Learned #497: Understanding the Ordering of uniqueidentifier in SQL Server
Today, I worked on a service request where our customer asked how SQL Server sorts the uniqueidentifier data type. Uniqueidentifier columns store globally unique identifiers (GUIDs). GUIDs are widely used for unique keys due to their extremely low probability of duplication. One common method to generate a GUID in SQL Server is the NEWID() function. However, the ordering of GUIDs, especially those generated by NEWID(), can appear non-intuitive. I would like to share my lessons learned about how SQL Server determines the sort order of uniqueidentifier values generated with NEWID().

Unable to enable RCSI
We created our Azure SQL database a few years ago, and at that time RCSI was not a default setting. I am trying to enable RCSI now but have been unsuccessful.

Database details: We have an Azure SQL database (single database) in East US. A read replica (geo-replication) was created against it.

Steps identified from documentation to enable RCSI:

```sql
ALTER DATABASE [DatabaseName] REMOVE SECONDARY ON SERVER [SecondaryServerName];
ALTER DATABASE [DatabaseName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [DatabaseName] SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE [DatabaseName] SET MULTI_USER;
ALTER DATABASE [DatabaseName] ADD SECONDARY ON SERVER [SecondaryServerName];
```

The second statement, to set single user, returns the error below, and I am unable to proceed after this. Any help to resolve the problem and enable RCSI is appreciated.

Msg 1468, Level 16, State 1, Line 1
The operation cannot be performed on database "[DatabaseName]" because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Solved