Securing VNet-Integrated Azure Functions with Blob Triggers: Private Endpoints and No Public Access
Azure Blob Trigger in Azure Functions enables automatic function invocation based on changes in Blob Storage, streamlining serverless integration with cloud storage. To ensure reliability, it handles failures by using poison blob queues and configurable retry mechanisms.
Implementing Disaster Recovery for Azure App Service Web Applications

Starting March 31, 2025, Microsoft will no longer automatically place Azure App Service web applications in disaster recovery mode in the event of a regional disaster. This change emphasizes the importance of implementing robust disaster recovery (DR) strategies to ensure the continuity and resilience of your web applications. Here's what you need to know and how you can prepare.

Understanding the Change

Azure App Service has been a reliable platform for hosting web applications, REST APIs, and mobile backends, offering features like load balancing, autoscaling, and automated management. However, beginning March 31, 2025, in the event of a regional disaster, Azure will not automatically place your web applications in disaster recovery mode. This means that you, as a developer or IT professional, need to proactively implement disaster recovery techniques to safeguard your applications and data.

Why This Matters

Disasters, whether natural or technical, can strike without warning, potentially causing significant downtime and data loss. By taking control of your disaster recovery strategy, you can minimize the impact of such events on your business operations. Implementing a robust DR plan ensures that your applications remain available and your data remains intact, even in the face of regional outages.

Common Disaster Recovery Techniques

To prepare for this change, consider the following commonly used disaster recovery techniques (a sketch of a multi-region setup follows this list):

Multi-Region Deployment: Deploy your web applications across multiple Azure regions. This approach ensures that if one region goes down, your application can continue to run in another region. You can use Azure Traffic Manager or Azure Front Door to route traffic to the healthy region. See: Multi-region load balancing with Traffic Manager and Application Gateway; Highly available multi-region web app.

Regular Backups: Implement regular backups of your application data and configurations. Azure App Service provides built-in backup and restore capabilities that you can schedule to run automatically. See: Back up an app in App Service; How to automatically backup App Service & Function App configurations.

Active-Active or Active-Passive Configuration: Set up your applications in an active-active or active-passive configuration. In an active-active setup, both regions handle traffic simultaneously, providing high availability. In an active-passive setup, the secondary region remains on standby and takes over only if the primary region fails. See: About active-active VPN gateways; Design highly available gateway connectivity.

Automated Failover: Use automated failover mechanisms to switch traffic to a secondary region seamlessly. This can be achieved using Azure Site Recovery or custom scripts that detect failures and initiate failover processes. See: Add Azure Automation runbooks to Site Recovery recovery plans; Create and customize recovery plans in Azure Site Recovery.

Monitoring and Alerts: Implement comprehensive monitoring and alerting to detect issues early and respond promptly. Azure Monitor and Application Insights can help you track the health and performance of your applications. See: Overview of Azure Monitor alerts; Application Insights OpenTelemetry overview.
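As one illustration of the multi-region, active-passive pattern above, the following Azure PowerShell sketch creates a Traffic Manager profile with priority routing and registers two App Service endpoints. This is a minimal sketch, not a complete DR setup; the resource group, profile, and web app names are placeholders you would replace with your own.

```powershell
# Minimal sketch: priority (active-passive) routing across two regions.
# Resource names below are placeholders - substitute your own.
$rg = "rg-dr-demo"

# Create the Traffic Manager profile with priority routing and an HTTPS health probe.
$tmProfile = New-AzTrafficManagerProfile -Name "tm-webapp-dr" -ResourceGroupName $rg `
    -TrafficRoutingMethod Priority -RelativeDnsName "webapp-dr-demo" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"

# Primary region endpoint (priority 1 receives all traffic while healthy).
$primary = Get-AzWebApp -ResourceGroupName $rg -Name "webapp-primary-eastus"
New-AzTrafficManagerEndpoint -Name "primary" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type AzureEndpoints -TargetResourceId $primary.Id -EndpointStatus Enabled -Priority 1

# Secondary region endpoint (priority 2 takes over only if the primary probe fails).
$secondary = Get-AzWebApp -ResourceGroupName $rg -Name "webapp-secondary-westus"
New-AzTrafficManagerEndpoint -Name "secondary" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type AzureEndpoints -TargetResourceId $secondary.Id -EndpointStatus Enabled -Priority 2
```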
Steps to Implement a Disaster Recovery Plan

Assess Your Current Setup: Identify all the resources your application depends on, including databases, storage accounts, and networking components.

Choose a DR Strategy: Based on your business requirements, choose a suitable disaster recovery strategy (e.g., multi-region deployment, active-active configuration).

Configure Backups: Set up regular backups for your application data and configurations.

Test Your DR Plan: Regularly test your disaster recovery plan to ensure it works as expected. Simulate failover scenarios to validate that your applications can recover quickly.

Document and Train: Document your disaster recovery procedures and train your team to execute them effectively.

Conclusion

While the upcoming change in Azure App Service's disaster recovery policy may seem daunting, it also presents an opportunity to enhance the resilience of your web applications. By implementing robust disaster recovery techniques, you can ensure that your applications remain available and your data remains secure, no matter what challenges come your way. Start planning today to stay ahead of the curve and keep your applications running smoothly.

Further reading: Recover from region-wide failure - Azure App Service; Reliability in Azure App Service; Multi-Region App Service App Approaches for Disaster Recovery.

Feel free to share your thoughts or ask questions in the comments below. Let's build a resilient future together! 🚀
Setting Up a Secure Webhook in an Azure Monitor Action Group

When configuring an Action Group in Azure Monitor, one of the most powerful notification options is a secure webhook. This allows you to send alerts to an external endpoint with an authentication token, ensuring that only authorized services can process the alerts. However, setting this up can be a bit confusing, especially when dealing with app registrations, service principals, and roles. This guide simplifies the process by breaking it down into clear steps. Please check the official documentation here. The PowerShell (PS) commands I use below have been derived from that document, but I am using the Azure Portal to create the AppRole.

Prerequisites

Before diving in, make sure you have the following:

PowerShell (PS) with the Microsoft.Graph module installed:
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force

Run this command to connect to Microsoft Graph with the proper scope:
Connect-MgGraph -Scopes "Application.ReadWrite.All" -TenantId $yourTenantId

The Azns AAD Webhook service principal created in your Microsoft Entra tenant. This service principal is critical, as it allows Azure Monitor to authenticate when calling your webhook, and you will need to assign the necessary role(s) to it. The application behind it is a first-party Microsoft application defined in Microsoft's own Entra tenant; the command below simply creates a service principal for it in your tenant. For more info on these first-party apps in your tenant, please check out this document.

To check whether this service principal already exists in your tenant, run the following PS command. Notice that we are checking for a specific app ID; it must be this ID to work correctly:
Get-MgServicePrincipal -Filter "appId eq '461e8683-5575-4561-ac7f-899cc907d62a'"

If the command returns nothing, you can create the service principal with:
New-MgServicePrincipal -AppId 461e8683-5575-4561-ac7f-899cc907d62a

Step 1: Create an App Registration

Now that you have all the prerequisites, the first step is to register an application in Microsoft Entra ID and configure it to expose application permissions.

How to create the App Registration:
Navigate to Microsoft Entra ID in the Azure Portal
Click on App registrations → New registration
Provide a Name (e.g., SecureWebhookApp)
Click Register

Important note: You must explicitly assign yourself as an owner of the app registration. This ensures that it appears in the Azure Portal UI when selecting app registrations during your webhook setup.

Step 2: Add Application ID URI

An Application ID URI is required to uniquely identify the web API that the webhook will use for authentication.

How to set the Application ID URI:
In your App Registration, go to Expose an API
Click the Add link next to Application ID URI; this opens the Edit application ID URI blade on the right
Use the default (api://<application-client-id>) or enter a unique URI
Click Save

This step is crucial: without an Application ID URI, the Action Group will not let you save your secure webhook and will return an error.
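If you prefer to script Step 2 rather than use the portal, a minimal Microsoft Graph PowerShell sketch such as the following can set and verify the Application ID URI. The object ID variable is a placeholder for your own app registration, and the api://<client-id> format mirrors the portal default.

```powershell
# Placeholder: object ID of the app registration created in Step 1.
$myAppObjectId = "<app-registration-object-id>"

# Read the app registration and build the default-style URI from its client ID.
$app = Get-MgApplication -ApplicationId $myAppObjectId
Update-MgApplication -ApplicationId $myAppObjectId -IdentifierUris @("api://$($app.AppId)")

# Verify the URI was set.
(Get-MgApplication -ApplicationId $myAppObjectId).IdentifierUris
```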
Step 3: Create an App Role for the App Registration

Next, we need to define an application role that will be used for authentication. This role is required so that the 'Azns AAD Webhook' service principal can obtain a token with the appropriate permissions. We will assign this role to the service principal in the next step.

How to create an App Role:
In your App Registration, go to the App roles section
Click Create app role
Set the Display name and Value (e.g., App.ActionGroupSecureWebhook)
Set the Allowed member type to Application
Add a Description

Step 4: Assigning the 'Azns AAD Webhook' Service Principal to the Role

The next step is to assign the role to the 'Azns AAD Webhook' service principal. This functionality is not currently available in the portal, which is why you have to do it via a Microsoft Graph PS command.

How to assign the App Role:
Run the following command in PS:
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $myActionGroupServicePrincipal.Id -PrincipalId $myActionGroupServicePrincipal.Id -AppRoleId $myAppRoleId -ResourceId $myAppRegistrationId

Please check out the full script that I use to connect to Microsoft Graph, look up all the IDs, and create the service principal and the role assignment; a sketch of such a script follows this step. You will need to fill in your own values for $myTenantId, $myMicrosoftEntraAppRegistrationObjectId, and $myActionGroupRoleName.

You can confirm the assignment by doing the following:
Open up your App Registration -> Overview
In the 'Essentials' section you should see 'Managed application in local directory', with a link that takes you to the Enterprise Application
Once you are in the Enterprise Application, go to Manage -> Users and groups, where you should see that the app role you created has been successfully assigned to the 'Azns AAD Webhook' service principal
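Here is a minimal sketch of what such an end-to-end script might look like, using the variable names mentioned above. It is illustrative rather than the author's exact script: the tenant ID, app registration object ID, and role name are placeholders, and the ResourceId is assumed to be the object ID of the enterprise application (service principal) that backs the app registration exposing the role.

```powershell
# Minimal sketch of the end-to-end role assignment (placeholder values throughout).
$myTenantId = "<your-tenant-id>"
$myMicrosoftEntraAppRegistrationObjectId = "<object-id-of-your-app-registration>"
$myActionGroupRoleName = "App.ActionGroupSecureWebhook"

Connect-MgGraph -Scopes "Application.ReadWrite.All" -TenantId $myTenantId

# Look up the app registration and the app role created in Step 3.
$myApp = Get-MgApplication -ApplicationId $myMicrosoftEntraAppRegistrationObjectId
$myAppRoleId = ($myApp.AppRoles | Where-Object { $_.Value -eq $myActionGroupRoleName }).Id

# The enterprise application (service principal) backing the app registration - the role's resource.
$myAppRegistrationServicePrincipal = Get-MgServicePrincipal -Filter "appId eq '$($myApp.AppId)'"

# Get (or create) the 'Azns AAD Webhook' service principal in this tenant.
$myActionGroupServicePrincipal = Get-MgServicePrincipal -Filter "appId eq '461e8683-5575-4561-ac7f-899cc907d62a'"
if (-not $myActionGroupServicePrincipal) {
    $myActionGroupServicePrincipal = New-MgServicePrincipal -AppId "461e8683-5575-4561-ac7f-899cc907d62a"
}

# Assign the app role to the 'Azns AAD Webhook' service principal.
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $myActionGroupServicePrincipal.Id `
    -PrincipalId $myActionGroupServicePrincipal.Id `
    -AppRoleId $myAppRoleId `
    -ResourceId $myAppRegistrationServicePrincipal.Id
```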
Step 5: Create the Secure Webhook

Finally, we are ready to create the secure webhook from the Action Group edit page.

How to create the Secure Webhook:
Navigate to the Action Group you want to add the secure webhook to: Azure Monitor -> Alerts -> Action groups
Select the Action group you want to edit and hit the Edit button
Find the Actions section and, under 'Action type', select Secure Webhook from the dropdown
In the blade that opens on the right, select the Object ID of the app registration you created above (it will only show up if you are an owner of that app registration). If you just created the app registration, you may need to wait several minutes for the system to synchronize and process the necessary updates.
Fill in the webhook URI
Choose whether you want the 'Common alert schema' or not
Fill out the name of the webhook back under the Actions section
Hit the 'Save changes' button

Now you can test your action group and confirm that the bearer token is passed with the request and that the app role was added to the token. To do this I usually just hit the 'Test action group' button at the top next to the 'Save changes' button. That brings up the Test blade; select a sample alert type and hit the test button. The webhook I'm using is a custom Azure Function that logs the request headers so that I can verify the bearer token was passed in. If you would like to see this code, it is available at this repo. Looking at the log output and decoding the token, you can see that it includes the role you created and the app ID of the 'Azns AAD Webhook' service principal.

How It Works

Once the setup is complete, the Action Group will:
Request a token from Microsoft Entra ID using the app role you've created
Include the token in the Authorization header of the webhook request

This ensures that the receiving system can validate the request and only process alerts from authorized sources.
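If you want to inspect a captured token yourself, the following PowerShell sketch decodes the JWT payload and prints the claims described above. It does not validate the signature, so it is for inspection only, and the $bearerToken value is a placeholder for a token copied from your function's logs.

```powershell
# Placeholder: paste a token captured from the webhook's Authorization header.
$bearerToken = "<paste-token-here>"

# Decode the JWT payload (middle segment) from base64url; this does NOT validate the signature.
$payload = $bearerToken.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
$claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json

$claims.roles   # should include the app role value, e.g. App.ActionGroupSecureWebhook
$claims.appid   # should be 461e8683-5575-4561-ac7f-899cc907d62a (the 'Azns AAD Webhook' application)
$claims.aud     # should correspond to your app registration (Application ID URI or client ID)
```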
Wrapping Up

Setting up a secure webhook for Azure Monitor Action Groups might seem complex, but by following these steps you can ensure that your alerts are sent securely and authenticated. By leveraging Microsoft Entra ID, app roles, and service principals, you're adding a layer of security to your webhook integrations and protecting sensitive alert data from unauthorized access. Would love to hear your thoughts. Have you implemented secure webhooks before? Let me know in the comments!

Configuring Azure Blob Trigger Identity Based Connection

If you are tired of having to manage connection strings and secrets for your blob-triggered Azure Functions, you will be glad to know that as of Azure Blobs extension version 5.0.0 you can configure these connections using managed identities.
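As a rough sketch of what that looks like in practice, an identity-based connection replaces the connection string with <CONNECTION_NAME>__ app settings that point at the storage endpoints, plus data-plane role assignments for the function's managed identity. The function app, storage account, and connection name below are placeholders, and the sketch assumes the system-assigned identity is already enabled.

```powershell
# Placeholders: function app, resource group, storage account, and connection name are examples only.
$rg = "rg-func-demo"
$functionAppName = "func-blobtrigger-demo"
$storageAccountName = "stblobtriggerdemo"

# Point the trigger's connection (here named "MyBlobConnection") at the storage endpoints
# instead of a connection string; the Blobs extension 5.x resolves these via the managed identity.
Update-AzFunctionAppSetting -Name $functionAppName -ResourceGroupName $rg -AppSetting @{
    "MyBlobConnection__blobServiceUri"  = "https://$storageAccountName.blob.core.windows.net"
    "MyBlobConnection__queueServiceUri" = "https://$storageAccountName.queue.core.windows.net"
}

# Grant the function's system-assigned identity the data-plane roles the blob trigger commonly needs.
$principalId = (Get-AzWebApp -ResourceGroupName $rg -Name $functionAppName).Identity.PrincipalId
$scope = (Get-AzStorageAccount -ResourceGroupName $rg -Name $storageAccountName).Id
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Storage Blob Data Owner" -Scope $scope
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Storage Queue Data Contributor" -Scope $scope
```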
Managed SQL Deployments Like Terraform

Introduction

This is the next post in our series on CI/CD for SQL projects. In this post we will challenge some long-held beliefs on how we should manage SQL deployments. Traditionally we've had the notion that we should never drop data in any environment, and that deployments should be done almost exclusively via SQL scripts, manually run to ensure completion and to prevent any type of data loss. We will challenge this and propose a solution that falls more in line with other modern DevOps tooling and practices. If this sounds appealing to you, then let's dive into it.

Why

We've always approached the data behind our applications as the differentiating factor when it comes to intellectual property (IP). No one wants to hear the words that we've lost data or that the data is unrecoverable. Let me be clear and throw a disclaimer on what I am going to propose: this is not a substitute for proper data management techniques to prevent data loss. Rather, we are going to look at a way to thread the needle on keeping the data that we need while removing the data that we don't.

Shadow Data

We've all heard about "shadow IT", but what about "shadow data"? I think every developer has been there. For example, taking a backup of a table/database to ensure we don't inadvertently drop it during a deployment. Heck, sometimes we may even go a step further and copy that backup into a lower environment. The caveat is that we very rarely go back and clean up that backup. We've effectively created a snapshot of data which we kept for our own comfort. This copy is now ungoverned, unmanaged, and potentially insecure. The issue then gets compounded if we have automated backups or restore-to-QA operations. Now we keep amplifying and spreading our shadow data. Shouldn't we focus on improving the Software Delivery Lifecycle (SDLC), ensuring confidence in our data deployments? Let's take it a step further: shouldn't we invest in our data protection practice? Why should we be doing this when we have technology that backs up our SQL schema and databases?

Another consideration: what about those quick "hot fixes" that we applied in production? The ones where we changed a varchar() column length to accommodate the size of a field in production. I am not advocating for making these changes in production...but when your CIO or VP is escalating since this is holding up your business's data warehouse and you so happen to have the SQL admin login credentials...stuff happens. Wouldn't it be nice if SQL had a way to report back that this change needs to be accommodated for in the source schema? Again, the answer is in our SDLC process.

So, where is the book of record for our SQL schemas? Well, if this is your first read in this series or if you are unfamiliar with source control, I'd encourage you to read Leveraging DotNet for SQL Builds via YAML | Microsoft Community Hub, where I talk about the importance of placing your database projects under source control. The TL;DR: your database schema definitions should be defined under source control, ideally as a .sqlproj.

Where Terraform Comes In

At this point I've already pointed out a few instances of how our production database instance can differ from what we have defined in our source project. This certainly isn't anything new in software development. So how do other software development tooling and technologies account for this?
Generally, application code simply gets overwritten, and we have backup versions either via release branches, git tags, or other artifacts. Cloud infrastructure can be defined as Infrastructure as Code (IaC) and as such still follows something similar to our application code workflow. There are two main flavors of IaC for Azure: Bicep/ARM and Terraform. Bicep/ARM adheres to an incremental deployment model, which has its pros and cons. The quick version is that Azure Resource Manager (ARM) deployments will not delete resources that are not defined in the template. Part of this has led to Azure Deployment Stacks, which can help enforce resource deletion when something has been removed from a template. If you are interested in understanding a Terraform workflow, I will point you to one of my other posts on the topic.

At a high level, Terraform evaluates your IaC definition and determines what properties need to be updated and, more importantly, what resources need to be removed. Now how does Terraform do this, and how can we tell what properties will be updated and/or removed? Terraform has a concept known as a plan. This plan runs your deployment against what is known as the state file (in Bicep/ARM this is the Deployment Stack) and produces a summary of the changes that will occur: new resources to be created, modification of existing resources, and deletion of resources previously deployed to the same state file. Typically, I recommend running a Terraform plan across all environments at CI. This ensures one can evaluate changes being proposed across all potential environments and summarize these changes at the time of the Pull Request (PR). I then advise re-executing this plan prior to deployment as a way to confirm/re-evaluate whether anything has been updated since the original plan ran. Some will argue the previous plan can be "approved" to deploy to the next environment; however, there is little overhead in running a second plan and I prefer this option. Here's the thing... SQL actually has this same functionality.

Deploy Reports

Via SqlPackage there is additional functionality we can leverage with our .dacpacs. We are going to dive a little deeper into Deploy Reports. If you have followed this series, you may know we use the SqlPackage Publish command wrapped behind the SqlAzureDacpacDeployment@1 task. More information on this can be found at Deploying .dacpacs to Azure SQL via Azure DevOps Pipelines | Microsoft Community Hub.

So, what is a Deploy Report? A Deploy Report is the XML representation of the changes your .dacpac will make to a database, and it will flag, for example, operations that carry a risk of potential data loss. This report is the key to our whole argument for modeling a SQL Continuous Integration/Continuous Delivery process after the one Terraform uses. We will already have a separate .dacpac file, built from the same .sqlproj, for each environment when leveraging pre/post scripts, as we saw in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub. So now we need to take each one of those and run a Deploy Report against the appropriate target. This is effectively the same as running a `tf plan` with a different variable file against each environment to determine what actions a Terraform `apply` will execute. These Deploy Reports are then what we include in our PR approval to validate and approve any changes we will make to our SQL database.
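To make the parallel concrete, here is a minimal sketch of generating a per-environment Terraform plan alongside the equivalent per-environment Deploy Report. The variable files, dacpac paths, server, and database names are placeholders, SqlPackage is assumed to be installed and on the PATH, and the access-token authentication shown is just one option.

```powershell
# Terraform side: one plan per environment, each with its own variable file (placeholder file names).
terraform plan -var-file="dev.tfvars" -out="dev.tfplan"
terraform plan -var-file="prod.tfvars" -out="prod.tfplan"

# SQL side: one Deploy Report per environment-specific .dacpac against its target database.
SqlPackage /Action:DeployReport `
    /SourceFile:"./artifacts/AdventureWorks.dev.dacpac" `
    /TargetServerName:"myserver-dev.database.windows.net" `
    /TargetDatabaseName:"AdventureWorks" `
    /AccessToken:$env:SQL_ACCESS_TOKEN `
    /OutputPath:"./reports/deployreport.dev.xml"
```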
Dropping What's Not in Source Control

This is the controversial part and the biggest sell in our adoption of a Terraform-like approach to SQL deployments. It has long been considered a best practice to have whatever is deployed match what is under source control. This provides a consistent experience when developing and then deploying across multiple environments. Within IaC, we have our cloud infrastructure defined in source control and deployed across environments, and it is typically seen as a good practice to delete resources which have been removed from source control. This helps simplify the environment, reduces cost, and reduces the potential security surface area.

So why not the same for databases? Typically, it is due to the fear of losing data. To prevent this, we should have proper data protection and recovery processes in place. Again, I am not addressing that aspect here. If we have those accounted for, then by all means our source control version of our databases should match our deployed environments.

What about security and indexing? Again, this can be accounted for as in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub, where we have two different post-deployment security scripts, and these scripts are under source control! How can we see if data loss will occur? Refer back to the Deploy Reports for this!

There is potentially some natural hesitation, as the default method for deploying a .dacpac has safeguards to prevent deployments in the event of potential data loss. This is not a bad thing, as it prevents a destructive activity from occurring automatically; however, we by no means need to accept the default behavior. We will need to refer to SqlPackage Publish - SQL Server | Microsoft Learn. From that list we will be able to identify and explicitly set the value for various parameters that will enable our package to deploy even in the event of potential data loss (a sketch follows below).
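As an illustration only, a publish invocation that allows destructive changes might explicitly set properties such as BlockOnPossibleDataLoss and DropObjectsNotInSource. These values deliberately override safety defaults, so treat the sketch as an example to adapt (paths, server, and database names are placeholders) and pair it with the Deploy Report gate and proper data protection.

```powershell
# Illustrative only: these properties deliberately allow destructive changes.
SqlPackage /Action:Publish `
    /SourceFile:"./artifacts/AdventureWorks.prod.dacpac" `
    /TargetServerName:"myserver-prod.database.windows.net" `
    /TargetDatabaseName:"AdventureWorks" `
    /AccessToken:$env:SQL_ACCESS_TOKEN `
    /p:BlockOnPossibleDataLoss=false `
    /p:DropObjectsNotInSource=true
```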
Conclusion

This post hopefully challenges the mindset we have when it comes to database deployments. By taking an approach that more closely relates to modern DevOps practices, we gain confidence that our source control and database match, increased reliability and speed in our deployments, and we close potential security gaps in our database deployment lifecycle. This post was not designed to be deeply technical; in the next post we will demo, provide examples, and talk through how to leverage YAML Pipelines to accomplish what we have outlined here. Be sure to follow me on LinkedIn for the latest publications. For those who are technically sound and want to skip ahead, feel free to check out my code on GitHub: https://github.com/JFolberth/cicd-adventureWorks and https://github.com/JFolberth/TheYAMLPipelineOne

Fabric AI Skill - example AI Agent for your Data using Healthcare data

Interested in trying out the new Microsoft Fabric AI Skill? Fabric AI Skill is a new generative AI capability in preview for Fabric that functions as the basis for a SaaS AI agent in Fabric, using natural language to query your data. An AI Skill module has been added to the GitHub repo created along with my colleague Inderjit Rana.

Why would you want to use AI Skill? AI Skill uses a pattern similar to retrieval-augmented generation (RAG), so context about your data and examples of proper SQL queries give the large language model (LLM) a basis for generating accurate queries that then run against your data, such as "What was total costs in 2022 for Dallas, Texas?" or "Show me the top cities in Florida based on total beneficiaries?"

If you've already deployed the 250M-row Lakehouse / Warehouse / Direct Lake solution from the repo, the new AI Skill content can be deployed in about 5-10 minutes. If you haven't deployed the rest of the repo, steps in the repo documentation will guide you through the deployment process, which should take less than an hour and result in an optimized Lakehouse and/or Warehouse that is ready for Fabric AI Skills. All that you need to deploy the entire solution is a Fabric workspace that uses a Fabric or Premium capacity with the Fabric tools enabled.

I'm currently working on an in-depth article reviewing use cases for AI Skill, but for now you can get started with the GitHub repo at this link: https://github.com/isinghrana/fabric-samples-healthcare/tree/main/analytics-bi-directlake-starschema

A video walking through a demo of the AI Skill module, along with a walkthrough of the deployment process, can be found here: https://youtu.be/ftout8UX4lg
Microsoft expands Azure scalability for Healthcare Organizations

Microsoft has expanded Azure's scalability for running Epic's Chronicles Operational Database (ODB) to 65 million global references per second (GRefs/s). This enhancement, utilizing the new Mbv3 VM series with Azure Boost and Ultra Disk, covers 94% of the Epic market and significantly boosts performance for large health care providers using Epic on Azure.