# Four Methods to Access Azure Key Vault from Azure Kubernetes Service (AKS)
In this article, we will explore various methods that an application hosted on Azure Kubernetes Service (AKS) can use to retrieve secrets from an Azure Key Vault resource. You can find all the scripts on GitHub.

## Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)

In order for workloads deployed on an Azure Kubernetes Service (AKS) cluster to access protected resources like Azure Key Vault and Microsoft Graph, they need Microsoft Entra application credentials or managed identities. Microsoft Entra Workload ID integrates with Kubernetes to federate with external identity providers.

To give pods a Kubernetes identity, Microsoft Entra Workload ID uses Service Account Token Volume Projection: a Kubernetes token is issued, and OIDC federation enables Kubernetes applications to securely access Azure resources with Microsoft Entra ID, based on service account annotations. As shown in the following diagram, the Kubernetes cluster becomes a security token issuer, issuing tokens to Kubernetes service accounts. These tokens can be configured to be trusted on Microsoft Entra applications and user-defined managed identities, and can then be exchanged for a Microsoft Entra access token using the Azure Identity SDKs or the Microsoft Authentication Library (MSAL).

In the Microsoft Entra ID platform, there are two kinds of workload identities:

- **Registered applications** have several powerful features, such as multi-tenancy and user sign-in. These capabilities cause application identities to be closely guarded by administrators. For more information on how to implement workload identity federation with registered applications, see Use Microsoft Entra Workload Identity for Kubernetes with a User-Assigned Managed Identity.
- **Managed identities** provide an automatically managed identity in Microsoft Entra ID for applications to use when connecting to resources that support Microsoft Entra ID authentication. Applications can use managed identities to obtain Microsoft Entra tokens without having to manage any credentials. Managed identities were built with developer scenarios in mind. They support only the client credentials flow, which is meant for software workloads to identify themselves when accessing other resources. For more information on how to implement workload identity federation with managed identities, see Use Azure AD Workload Identity for Kubernetes with a User-Assigned Managed Identity.

**Advantages**

- Transparently assigns a user-defined managed identity to a pod or deployment.
- Allows using Microsoft Entra integrated security and Azure RBAC for authorization.
- Provides secure access to Azure Key Vault and other managed services.

**Disadvantages**

- Requires using Azure libraries for acquiring Azure credentials and using them to access managed services.
- Requires code changes.

**Resources**

- Use Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)
- Deploy and Configure an AKS Cluster with Workload Identity
- Configure Cross-Tenant Workload Identity on AKS
- Use Microsoft Entra Workload ID with a User-Assigned Managed Identity in an AKS-hosted .NET Application

## Azure Key Vault Provider for Secrets Store CSI Driver in AKS

The Azure Key Vault provider for Secrets Store CSI Driver enables retrieving secrets, keys, and certificates stored in Azure Key Vault and accessing them as files from mounted volumes in an AKS cluster. This method eliminates the need for Azure-specific libraries to access the secrets.
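For example, once the driver mounts the volume, the application can read a secret with plain file I/O. The following C# fragment is a minimal sketch: the mount path `/mnt/secrets` and the secret name `username` are assumptions that must match your pod spec and `SecretProviderClass` (both are shown in the hands-on lab later in this article).

```csharp
using System;
using System.IO;

class MountedSecretReader
{
    static void Main()
    {
        // Each object listed in the SecretProviderClass becomes a file named
        // after the secret (or its objectAlias) under the volume mount path.
        string secretValue = File.ReadAllText("/mnt/secrets/username");
        Console.WriteLine($"The username secret is {secretValue.Length} characters long");
    }
}
```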
The Secrets Store CSI Driver for Key Vault offers the following features:

- Mounts secrets, keys, and certificates to a pod using a CSI volume.
- Supports CSI inline volumes.
- Allows the mounting of multiple secrets store objects as a single volume.
- Offers pod portability with the SecretProviderClass CRD.
- Is compatible with Windows containers.
- Keeps in sync with Kubernetes secrets.
- Supports auto-rotation of mounted contents and synced Kubernetes secrets.

When auto-rotation is enabled for the Azure Key Vault Secrets Provider, it automatically updates both the pod mount and the corresponding Kubernetes secret defined in the secretObjects field of SecretProviderClass. It continuously polls for changes based on the rotation poll interval (the default is two minutes). If a secret in an external secrets store is updated after the initial deployment of the pod, both the Kubernetes secret and the pod mount will periodically update, depending on how the application consumes the secret data. Here are the recommended approaches for different scenarios:

- **Mount the Kubernetes secret as a volume:** Use the auto-rotation and sync features of the Secrets Store CSI Driver. The application should monitor changes from the mounted Kubernetes secret volume. When the CSI driver updates the Kubernetes secret, the volume contents are automatically updated.
- **Application reads data from the container filesystem:** Take advantage of the rotation feature of the Secrets Store CSI Driver. The application should monitor file changes from the volume mounted by the CSI driver (see the sketch after this list).
- **Use the Kubernetes secret for an environment variable:** Restart the pod to acquire the latest secret as an environment variable. You can use tools like Reloader to watch for changes on the synced Kubernetes secret and perform rolling upgrades on pods.
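The following C# sketch illustrates the second approach. It is illustrative only: the mount path is an assumption, and because the driver typically refreshes mounted content through atomic symlink swaps, it watches for creations and renames as well as writes.

```csharp
using System;
using System.IO;

class SecretRotationWatcher
{
    static void Main()
    {
        const string mountPath = "/mnt/secrets"; // assumption: the CSI volume mount path

        using var watcher = new FileSystemWatcher(mountPath)
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName | NotifyFilters.DirectoryName,
            EnableRaisingEvents = true
        };

        // Re-read the secret on any change instead of caching it for the process lifetime.
        FileSystemEventHandler onChange = (_, e) =>
            Console.WriteLine($"Secret content changed: {e.FullPath}");
        watcher.Changed += onChange;
        watcher.Created += onChange;
        watcher.Renamed += (_, e) => Console.WriteLine($"Secret content changed: {e.FullPath}");

        Console.WriteLine($"Watching {mountPath} for rotated secrets...");
        Console.ReadLine(); // keep the process alive; replace with your app's lifetime management
    }
}
```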
**Advantages**

- Secrets, keys, and certificates can be accessed as files from mounted volumes.
- Optionally, Kubernetes secrets can be created to store keys, secrets, and certificates from Key Vault.
- No need for Azure-specific libraries to access secrets.
- Simplifies secret management with transparent integration.

**Disadvantages**

- Still requires accessing managed services such as Azure Service Bus or Azure Storage using their own connection strings retrieved from Azure Key Vault.
- Cannot utilize Microsoft Entra ID integrated security and managed identities for accessing managed services.

**Resources**

- Using the Azure Key Vault Provider for Secrets Store CSI Driver in AKS
- Access Azure Key Vault with the CSI Driver Identity Provider
- Configuration and Troubleshooting Options for Azure Key Vault Provider in AKS
- Azure Key Vault Provider for Secrets Store CSI Driver

## Dapr Secret Store for Key Vault

Dapr (Distributed Application Runtime) is a versatile, event-driven runtime that simplifies the development of resilient, stateless, and stateful applications for both cloud and edge environments. It embraces the diversity of programming languages and developer frameworks, providing a seamless experience regardless of your preferences. Dapr encapsulates the best practices for building microservices into a set of open and independent APIs known as building blocks. These building blocks:

- Enable developers to build portable applications using their preferred language and framework.
- Are completely independent from each other, allowing flexibility and freedom of choice.
- Have no limits on how many building blocks can be used within an application.

Dapr offers a built-in secrets building block that makes it easier for developers to consume application secrets from a secret store such as Azure Key Vault, AWS Secrets Manager, Google Key Management, or HashiCorp Vault. You can follow these steps to use Dapr's secret store building block:

1. Deploy the Dapr extension to your AKS cluster.
2. Set up a component for a specific secret store solution.
3. Retrieve secrets using the Dapr secrets API in your application code (see the sketch below).
4. Optionally, reference secrets in Dapr component files.

You can watch this overview video and demo to see how Dapr secrets management works.
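As an illustration of step 3, the Dapr sidecar exposes secrets over HTTP on localhost. The following C# sketch makes a few assumptions: `3500` is the default Dapr HTTP port, and `azurekeyvault` is a placeholder that must match the name of your secret store component.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class DaprSecretsClient
{
    static async Task Main()
    {
        // Talk to the local Dapr sidecar, not to Key Vault directly.
        using var http = new HttpClient { BaseAddress = new Uri("http://localhost:3500") };

        // GET /v1.0/secrets/{storeName}/{secretName} returns a JSON map, e.g. {"username":"admin"}.
        string json = await http.GetStringAsync("/v1.0/secrets/azurekeyvault/username");
        Console.WriteLine(json);
    }
}
```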
The secrets management API building block offers several features for your application:

- **Configure secrets without changing application code:** You can call the secrets API in your application code to retrieve and use secrets from Dapr-supported secret stores. Watch this video for an example of how the secrets management API can be used in your application.
- **Reference secret stores in Dapr components:** When configuring Dapr components like state stores, you often need to include credentials in component files. Alternatively, you can place the credentials within a Dapr-supported secret store and reference the secret within the Dapr component. This approach is recommended, especially in production environments. Read more about referencing secret stores in components.
- **Limit access to secrets:** Dapr provides the ability to define scopes and restrict access permissions for more granular control over access to secrets. Learn more about using secret scoping.

**Advantages**

- Allows applications to retrieve secrets from various secret stores, including Azure Key Vault.
- Simplifies secret management with Dapr's consistent API.
- Supports Azure Key Vault integration with managed identities.
- Supports multiple secret stores, such as Azure Key Vault, AWS Secrets Manager, Google Key Management, and HashiCorp Vault.

**Disadvantages**

- Requires injecting a sidecar container for Dapr into the pod, which may not be suitable for all scenarios.

**Resources**

- Dapr Secrets Overview
- Azure Key Vault Secret Store in Dapr
- Secrets management quickstart: retrieve secrets in the application code from a configured secret store using the secrets management API.
- Secret Store tutorial: learn how to use the Dapr Secrets API to access secret stores.
- Authenticating to Azure for Dapr
- How-to Guide for Managed Identities with Dapr

## External Secrets Operator with Azure Key Vault

The External Secrets Operator is a Kubernetes operator that manages secrets stored in external secret stores, such as Azure Key Vault, AWS Secrets Manager, Google Key Management, and HashiCorp Vault. It leverages the Azure Key Vault provider to synchronize secrets into Kubernetes secrets for easy consumption by applications. The External Secrets Operator integrates with Azure Key Vault for managing secrets, certificates, and keys, and you can configure it to use Microsoft Entra Workload ID to access an Azure Key Vault resource.

**Advantages**

- Manages secrets stored in external secret stores like Azure Key Vault, AWS Secrets Manager, Google Key Management, HashiCorp Vault, and more.
- Synchronizes Key Vault secrets into Kubernetes secrets.
- Simplifies secret management with Kubernetes-native integration.

**Disadvantages**

- Requires setting up and managing the External Secrets Operator.

**Resources**

- External Secrets Operator
- Azure Key Vault Provider for External Secrets Operator

## Hands-On Labs

You are now ready to see each technique in action.

### Configure Variables

The first step is setting up the names of a new or existing AKS cluster and Azure Key Vault resource in the scripts/00-variables.sh file, which is included and used by all the scripts in this sample.

```bash
# Azure Kubernetes Service (AKS)
AKS_NAME="<AKS-Cluster-Name>"
AKS_RESOURCE_GROUP_NAME="<AKS-Resource-Group-Name>"

# Azure Key Vault
KEY_VAULT_NAME="<Key-Vault-name>"
KEY_VAULT_RESOURCE_GROUP_NAME="<Key-Vault-Resource-Group-Name>"
KEY_VAULT_SKU="Standard"
LOCATION="EastUS" # Choose a location

# Secrets and Values
SECRETS=("username" "password")
VALUES=("admin" "trustno1!")

# Azure Subscription and Tenant
TENANT_ID=$(az account show --query tenantId --output tsv)
SUBSCRIPTION_NAME=$(az account show --query name --output tsv)
SUBSCRIPTION_ID=$(az account show --query id --output tsv)
```

The SECRETS array variable contains the list of secrets to create in the Azure Key Vault resource, while the VALUES array contains their values.
### Create or Update AKS Cluster

You can use the following Bash script to create a new AKS cluster with the az aks create command. This script includes the --enable-oidc-issuer parameter to enable the OpenID Connect (OIDC) issuer and the --enable-workload-identity parameter to enable Microsoft Entra Workload ID. If the AKS cluster already exists, the script updates it to use the OIDC issuer and enable workload identity by calling the az aks update command with the same parameters.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh

# Check if the resource group already exists
echo "Checking if [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription..."
az group show --name $AKS_RESOURCE_GROUP_NAME &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Creating [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription..."

  # create the resource group
  az group create --name $AKS_RESOURCE_GROUP_NAME --location $LOCATION 1>/dev/null

  if [[ $? == 0 ]]; then
    echo "[$AKS_RESOURCE_GROUP_NAME] resource group successfully created in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to create [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[$AKS_RESOURCE_GROUP_NAME] resource group already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Check if the AKS cluster already exists
echo "Checking if [$AKS_NAME] AKS cluster actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group..."
az aks show \
  --name $AKS_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --only-show-errors &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [$AKS_NAME] AKS cluster actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  echo "Creating [$AKS_NAME] AKS cluster in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

  # create the AKS cluster
  az aks create \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --generate-ssh-keys \
    --only-show-errors &>/dev/null

  if [[ $? == 0 ]]; then
    echo "[$AKS_NAME] AKS cluster successfully created in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  else
    echo "Failed to create [$AKS_NAME] AKS cluster in the [$AKS_RESOURCE_GROUP_NAME] resource group"
    exit
  fi
else
  echo "[$AKS_NAME] AKS cluster already exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"

  # Check if the OIDC issuer is enabled in the AKS cluster
  echo "Checking if the OIDC issuer is enabled in the [$AKS_NAME] AKS cluster..."
  oidcEnabled=$(az aks show \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --only-show-errors \
    --query oidcIssuerProfile.enabled \
    --output tsv)

  if [[ $oidcEnabled == "true" ]]; then
    echo "The OIDC issuer is already enabled in the [$AKS_NAME] AKS cluster"
  else
    echo "The OIDC issuer is not enabled in the [$AKS_NAME] AKS cluster"
  fi

  # Check if Workload Identity is enabled in the AKS cluster
  echo "Checking if Workload Identity is enabled in the [$AKS_NAME] AKS cluster..."
  workloadIdentityEnabled=$(az aks show \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --only-show-errors \
    --query securityProfile.workloadIdentity.enabled \
    --output tsv)

  if [[ $workloadIdentityEnabled == "true" ]]; then
    echo "Workload Identity is already enabled in the [$AKS_NAME] AKS cluster"
  else
    echo "Workload Identity is not enabled in the [$AKS_NAME] AKS cluster"
  fi

  # Enable OIDC issuer and Workload Identity
  if [[ $oidcEnabled == "true" && $workloadIdentityEnabled == "true" ]]; then
    echo "OIDC issuer and Workload Identity are already enabled in the [$AKS_NAME] AKS cluster"
    exit
  fi

  echo "Enabling OIDC issuer and Workload Identity in the [$AKS_NAME] AKS cluster..."
  az aks update \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --only-show-errors

  if [[ $? == 0 ]]; then
    echo "OIDC issuer and Workload Identity successfully enabled in the [$AKS_NAME] AKS cluster"
  else
    echo "Failed to enable OIDC issuer and Workload Identity in the [$AKS_NAME] AKS cluster"
    exit
  fi
fi
```
### Create or Update Key Vault

You can use the following Bash script to create a new Azure Key Vault if it doesn't already exist, along with a couple of secrets for demonstration purposes.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh

# Check if the resource group already exists
echo "Checking if [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription..."
az group show --name $KEY_VAULT_RESOURCE_GROUP_NAME &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Creating [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription..."

  # create the resource group
  az group create --name $KEY_VAULT_RESOURCE_GROUP_NAME --location $LOCATION 1>/dev/null

  if [[ $? == 0 ]]; then
    echo "[$KEY_VAULT_RESOURCE_GROUP_NAME] resource group successfully created in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to create [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[$KEY_VAULT_RESOURCE_GROUP_NAME] resource group already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Check if the key vault already exists
echo "Checking if [$KEY_VAULT_NAME] key vault actually exists in the [$SUBSCRIPTION_NAME] subscription..."
az keyvault show --name $KEY_VAULT_NAME --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [$KEY_VAULT_NAME] key vault actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Creating [$KEY_VAULT_NAME] key vault in the [$SUBSCRIPTION_NAME] subscription..."

  # create the key vault
  az keyvault create \
    --name $KEY_VAULT_NAME \
    --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --enabled-for-deployment \
    --enabled-for-disk-encryption \
    --enabled-for-template-deployment \
    --sku $KEY_VAULT_SKU 1>/dev/null

  if [[ $? == 0 ]]; then
    echo "[$KEY_VAULT_NAME] key vault successfully created in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to create [$KEY_VAULT_NAME] key vault in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[$KEY_VAULT_NAME] key vault already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Create secrets
for INDEX in ${!SECRETS[@]}; do
  # Check if the secret already exists
  echo "Checking if [${SECRETS[$INDEX]}] secret actually exists in the [$KEY_VAULT_NAME] key vault..."
  az keyvault secret show --name ${SECRETS[$INDEX]} --vault-name $KEY_VAULT_NAME &>/dev/null

  if [[ $? != 0 ]]; then
    echo "No [${SECRETS[$INDEX]}] secret actually exists in the [$KEY_VAULT_NAME] key vault"
    echo "Creating [${SECRETS[$INDEX]}] secret in the [$KEY_VAULT_NAME] key vault..."

    # create the secret
    az keyvault secret set \
      --name ${SECRETS[$INDEX]} \
      --vault-name $KEY_VAULT_NAME \
      --value ${VALUES[$INDEX]} 1>/dev/null

    if [[ $? == 0 ]]; then
      echo "[${SECRETS[$INDEX]}] secret successfully created in the [$KEY_VAULT_NAME] key vault"
    else
      echo "Failed to create [${SECRETS[$INDEX]}] secret in the [$KEY_VAULT_NAME] key vault"
      exit
    fi
  else
    echo "[${SECRETS[$INDEX]}] secret already exists in the [$KEY_VAULT_NAME] key vault"
  fi
done
```
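If you prefer seeding the demo secrets from application code rather than the Azure CLI, the Azure SDK offers the same operations. The sketch below is an assumption-laden alternative: the vault URI placeholder must be replaced with your KEY_VAULT_NAME, and the caller needs a data-plane role such as Key Vault Secrets Officer to set secrets.

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultSeeder
{
    static void Main()
    {
        // <Key-Vault-name> is a placeholder for the KEY_VAULT_NAME value
        // configured in scripts/00-variables.sh.
        var client = new SecretClient(
            new Uri("https://<Key-Vault-name>.vault.azure.net/"),
            new DefaultAzureCredential());

        // Equivalent of `az keyvault secret set` followed by `az keyvault secret show`.
        client.SetSecret("username", "admin");
        KeyVaultSecret secret = client.GetSecret("username");
        Console.WriteLine($"{secret.Name} = {secret.Value}");
    }
}
```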
### Create Managed Identity and Federated Identity Credential

All the techniques use Microsoft Entra Workload ID. The repository contains a folder for each technique, and each folder includes the following create-managed-identity.sh Bash script:

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the resource group already exists
echo "Checking if [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_ID] subscription..."
az group show --name $AKS_RESOURCE_GROUP_NAME &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_ID] subscription"
  echo "Creating [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_ID] subscription..."

  # create the resource group
  az group create \
    --name $AKS_RESOURCE_GROUP_NAME \
    --location $LOCATION 1>/dev/null

  if [[ $? == 0 ]]; then
    echo "[$AKS_RESOURCE_GROUP_NAME] resource group successfully created in the [$SUBSCRIPTION_ID] subscription"
  else
    echo "Failed to create [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_ID] subscription"
    exit
  fi
else
  echo "[$AKS_RESOURCE_GROUP_NAME] resource group already exists in the [$SUBSCRIPTION_ID] subscription"
fi

# Check if the managed identity already exists
echo "Checking if [$MANAGED_IDENTITY_NAME] managed identity actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group..."
az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [$MANAGED_IDENTITY_NAME] managed identity actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  echo "Creating [$MANAGED_IDENTITY_NAME] managed identity in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

  # create the managed identity
  az identity create \
    --name $MANAGED_IDENTITY_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME &>/dev/null

  if [[ $? == 0 ]]; then
    echo "[$MANAGED_IDENTITY_NAME] managed identity successfully created in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  else
    echo "Failed to create [$MANAGED_IDENTITY_NAME] managed identity in the [$AKS_RESOURCE_GROUP_NAME] resource group"
    exit
  fi
else
  echo "[$MANAGED_IDENTITY_NAME] managed identity already exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
fi

# Get the managed identity principal id
echo "Retrieving principalId for [$MANAGED_IDENTITY_NAME] managed identity..."
PRINCIPAL_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query principalId \
  --output tsv)

if [[ -n $PRINCIPAL_ID ]]; then
  echo "[$PRINCIPAL_ID] principalId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve principalId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Get the managed identity client id
echo "Retrieving clientId for [$MANAGED_IDENTITY_NAME] managed identity..."
CLIENT_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query clientId \
  --output tsv)

if [[ -n $CLIENT_ID ]]; then
  echo "[$CLIENT_ID] clientId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve clientId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Retrieve the resource id of the Key Vault resource
echo "Retrieving the resource id for the [$KEY_VAULT_NAME] key vault..."
KEY_VAULT_ID=$(az keyvault show \
  --name $KEY_VAULT_NAME \
  --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME \
  --query id \
  --output tsv)

if [[ -n $KEY_VAULT_ID ]]; then
  echo "[$KEY_VAULT_ID] resource id for the [$KEY_VAULT_NAME] key vault successfully retrieved"
else
  echo "Failed to retrieve the resource id for the [$KEY_VAULT_NAME] key vault"
  exit
fi

# Assign the Key Vault Secrets User role to the managed identity with Key Vault as a scope
ROLE="Key Vault Secrets User"
echo "Checking if [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope is already assigned to the [$MANAGED_IDENTITY_NAME] managed identity..."
CURRENT_ROLE=$(az role assignment list \
  --assignee $PRINCIPAL_ID \
  --scope $KEY_VAULT_ID \
  --query "[?roleDefinitionName=='$ROLE'].roleDefinitionName" \
  --output tsv 2>/dev/null)

if [[ $CURRENT_ROLE == $ROLE ]]; then
  echo "[$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope is already assigned to the [$MANAGED_IDENTITY_NAME] managed identity"
else
  echo "[$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope is not assigned to the [$MANAGED_IDENTITY_NAME] managed identity"
  echo "Assigning the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity..."

  for i in {1..3}; do
    az role assignment create \
      --assignee $PRINCIPAL_ID \
      --role "$ROLE" \
      --scope $KEY_VAULT_ID 1>/dev/null

    if [[ $? == 0 ]]; then
      echo "Successfully assigned the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity"
      break
    else
      echo "Failed to assign the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity, retrying in 5 seconds..."
      sleep 5
    fi

    if [[ $i == 3 ]]; then
      echo "Failed to assign the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity after 3 attempts"
      exit
    fi
  done
fi

# Check if the namespace exists in the cluster
RESULT=$(kubectl get namespace -o 'jsonpath={.items[?(@.metadata.name=="'$NAMESPACE'")].metadata.name}')

if [[ -n $RESULT ]]; then
  echo "[$NAMESPACE] namespace already exists in the cluster"
else
  echo "[$NAMESPACE] namespace does not exist in the cluster"
  echo "Creating [$NAMESPACE] namespace in the cluster..."
  kubectl create namespace $NAMESPACE
fi

# Check if the service account already exists
RESULT=$(kubectl get sa -n $NAMESPACE -o 'jsonpath={.items[?(@.metadata.name=="'$SERVICE_ACCOUNT_NAME'")].metadata.name}')

if [[ -n $RESULT ]]; then
  echo "[$SERVICE_ACCOUNT_NAME] service account already exists"
else
  # Create the service account
  echo "[$SERVICE_ACCOUNT_NAME] service account does not exist"
  echo "Creating [$SERVICE_ACCOUNT_NAME] service account..."
  cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: $CLIENT_ID
    azure.workload.identity/tenant-id: $TENANT_ID
  labels:
    azure.workload.identity/use: "true"
  name: $SERVICE_ACCOUNT_NAME
  namespace: $NAMESPACE
EOF
fi

# Show service account YAML manifest
echo "Service Account YAML manifest"
echo "-----------------------------"
kubectl get sa $SERVICE_ACCOUNT_NAME -n $NAMESPACE -o yaml

# Check if the federated identity credential already exists
echo "Checking if [$FEDERATED_IDENTITY_NAME] federated identity credential actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group..."
az identity federated-credential show \
  --name $FEDERATED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --identity-name $MANAGED_IDENTITY_NAME &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [$FEDERATED_IDENTITY_NAME] federated identity credential actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"

  # Get the OIDC Issuer URL
  AKS_OIDC_ISSUER_URL="$(az aks show \
    --only-show-errors \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --query oidcIssuerProfile.issuerUrl \
    --output tsv)"

  # Show OIDC Issuer URL
  if [[ -n $AKS_OIDC_ISSUER_URL ]]; then
    echo "The OIDC Issuer URL of the [$AKS_NAME] cluster is [$AKS_OIDC_ISSUER_URL]"
  fi

  echo "Creating [$FEDERATED_IDENTITY_NAME] federated identity credential in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

  # Establish the federated identity credential between the managed identity,
  # the service account issuer, and the subject.
  az identity federated-credential create \
    --name $FEDERATED_IDENTITY_NAME \
    --identity-name $MANAGED_IDENTITY_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --issuer $AKS_OIDC_ISSUER_URL \
    --subject system:serviceaccount:$NAMESPACE:$SERVICE_ACCOUNT_NAME

  if [[ $? == 0 ]]; then
    echo "[$FEDERATED_IDENTITY_NAME] federated identity credential successfully created in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  else
    echo "Failed to create [$FEDERATED_IDENTITY_NAME] federated identity credential in the [$AKS_RESOURCE_GROUP_NAME] resource group"
    exit
  fi
else
  echo "[$FEDERATED_IDENTITY_NAME] federated identity credential already exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
fi
```
The Bash script performs the following steps:

1. Sources variables from two files: ../00-variables.sh and ./00-variables.sh.
2. Checks if the specified resource group exists. If not, it creates the resource group.
3. Checks if the specified managed identity exists within the resource group. If not, it creates a user-assigned managed identity.
4. Retrieves the principalId and clientId of the managed identity.
5. Retrieves the id of the Azure Key Vault resource.
6. Assigns the Key Vault Secrets User role to the managed identity with the Azure Key Vault as the scope.
7. Checks if the specified Kubernetes namespace exists. If not, it creates the namespace.
8. Checks if the specified Kubernetes service account exists within the namespace. If not, it creates the service account with the annotations and labels required by Microsoft Entra Workload ID.
9. Checks if the specified federated identity credential exists within the resource group. If not, it retrieves the OIDC issuer URL of the specified AKS cluster and creates the federated identity credential.

You are now ready to explore each technique in detail.

### Hands-On Lab: Use Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)

Workloads deployed on an Azure Kubernetes Service (AKS) cluster require Microsoft Entra application credentials or managed identities to access Microsoft Entra protected resources, such as Azure Key Vault and Microsoft Graph. Microsoft Entra Workload ID integrates with Kubernetes capabilities to federate with external identity providers.

To enable pods to use a Kubernetes identity, Microsoft Entra Workload ID utilizes Service Account Token Volume Projection: a Kubernetes token is issued, and OIDC federation enables secure access to Azure resources with Microsoft Entra ID, based on annotated service accounts. Using the Azure Identity client libraries or the Microsoft Authentication Library (MSAL) together with an application registration, Microsoft Entra Workload ID seamlessly authenticates your workload and provides access to Azure cloud resources.

You can create a user-assigned managed identity for the workload, create federated credentials, and assign it the proper permissions to read secrets from the source Key Vault using the create-managed-identity.sh Bash script. Then, you can run the following Bash script, which retrieves the URL of the Azure Key Vault endpoint and starts a demo pod in the workload-id-test namespace. The pod receives two parameters via environment variables:

- KEYVAULT_URL: the Azure Key Vault endpoint URL.
- SECRET_NAME: the name of a secret stored in Azure Key Vault.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Retrieve the Azure Key Vault URL
echo "Retrieving the [$KEY_VAULT_NAME] key vault URL..."
KEYVAULT_URL=$(az keyvault show \
  --name $KEY_VAULT_NAME \
  --query properties.vaultUri \
  --output tsv)

if [[ -n $KEYVAULT_URL ]]; then
  echo "[$KEYVAULT_URL] key vault URL successfully retrieved"
else
  echo "Failed to retrieve the [$KEY_VAULT_NAME] key vault URL"
  exit
fi

# Create the pod
echo "Creating the [$POD_NAME] pod in the [$NAMESPACE] namespace..."
cat <<EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: v1
kind: Pod
metadata:
  name: $POD_NAME
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: $SERVICE_ACCOUNT_NAME
  containers:
    - image: ghcr.io/azure/azure-workload-identity/msal-net:latest
      name: oidc
      env:
        - name: KEYVAULT_URL
          value: $KEYVAULT_URL
        - name: SECRET_NAME
          value: ${SECRETS[0]}
  nodeSelector:
    kubernetes.io/os: linux
EOF

exit
```
Below you can read the C# code of the sample application, which uses the Microsoft Authentication Library (MSAL) to acquire a security token to access Key Vault and read the value of a secret.

```csharp
// <directives>
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Security.KeyVault.Secrets;
using Microsoft.Identity.Client;
// </directives>

namespace akvdotnet
{
    public class Program
    {
        static void Main(string[] args)
        {
            Program P = new Program();

            string keyvaultURL = Environment.GetEnvironmentVariable("KEYVAULT_URL");
            if (string.IsNullOrEmpty(keyvaultURL))
            {
                Console.WriteLine("KEYVAULT_URL environment variable not set");
                return;
            }

            string secretName = Environment.GetEnvironmentVariable("SECRET_NAME");
            if (string.IsNullOrEmpty(secretName))
            {
                Console.WriteLine("SECRET_NAME environment variable not set");
                return;
            }

            SecretClient client = new SecretClient(
                new Uri(keyvaultURL),
                new MyClientAssertionCredential());

            while (true)
            {
                Console.WriteLine($"{Environment.NewLine}START {DateTime.UtcNow} ({Environment.MachineName})");

                // <getsecret>
                var keyvaultSecret = client.GetSecret(secretName).Value;
                Console.WriteLine("Your secret is " + keyvaultSecret.Value);
                // </getsecret>

                // sleep and retry periodically
                Thread.Sleep(600000);
            }
        }
    }

    public class MyClientAssertionCredential : TokenCredential
    {
        private readonly IConfidentialClientApplication _confidentialClientApp;
        private DateTimeOffset _lastRead;
        private string _lastJWT = null;

        public MyClientAssertionCredential()
        {
            // <authentication>
            // The Microsoft Entra Workload ID webhook will inject the following env vars:
            // AZURE_CLIENT_ID with the clientID set in the service account annotation
            // AZURE_TENANT_ID with the tenantID set in the service account annotation. If not defined,
            //   the tenantID provided via azure-wi-webhook-config for the webhook will be used.
            // AZURE_AUTHORITY_HOST is the Microsoft Entra authority host. It is
            //   "https://login.microsoftonline.com" for the public cloud.
            // AZURE_FEDERATED_TOKEN_FILE is the service account token path
            var clientID = Environment.GetEnvironmentVariable("AZURE_CLIENT_ID");
            var tokenPath = Environment.GetEnvironmentVariable("AZURE_FEDERATED_TOKEN_FILE");
            var tenantID = Environment.GetEnvironmentVariable("AZURE_TENANT_ID");
            var host = Environment.GetEnvironmentVariable("AZURE_AUTHORITY_HOST");

            _confidentialClientApp = ConfidentialClientApplicationBuilder
                .Create(clientID)
                .WithAuthority(host, tenantID)
                .WithClientAssertion(() => ReadJWTFromFSOrCache(tokenPath)) // ReadJWTFromFSOrCache should always return a non-expired JWT
                .WithCacheOptions(CacheOptions.EnableSharedCacheOptions)    // cache the Microsoft Entra tokens in memory
                .Build();
        }

        public override AccessToken GetToken(TokenRequestContext requestContext, CancellationToken cancellationToken)
        {
            return GetTokenAsync(requestContext, cancellationToken).GetAwaiter().GetResult();
        }

        public override async ValueTask<AccessToken> GetTokenAsync(TokenRequestContext requestContext, CancellationToken cancellationToken)
        {
            AuthenticationResult result = null;
            try
            {
                result = await _confidentialClientApp
                    .AcquireTokenForClient(requestContext.Scopes)
                    .ExecuteAsync();
            }
            catch (MsalUiRequiredException ex)
            {
                // The application doesn't have sufficient permissions.
                // - Did you declare enough app permissions during app creation?
                // - Did the tenant admin grant permissions to the application?
            }
            catch (MsalServiceException ex) when (ex.Message.Contains("AADSTS70011"))
            {
                // Invalid scope. The scope has to be in the form "https://resourceurl/.default".
                // Mitigation: change the scope to be as expected.
            }
            return new AccessToken(result.AccessToken, result.ExpiresOn);
        }

        /// <summary>
        /// Read the JWT from the file system, but only do this every few minutes to avoid heavy I/O.
        /// The JWT lifetime is anywhere from 1 to 24 hours, so we can safely cache the value for a few minutes.
        /// </summary>
        private string ReadJWTFromFSOrCache(string tokenPath)
        {
            // read only once every 5 minutes
            if (_lastJWT == null ||
                DateTimeOffset.UtcNow.Subtract(_lastRead) > TimeSpan.FromMinutes(5))
            {
                _lastRead = DateTimeOffset.UtcNow;
                _lastJWT = System.IO.File.ReadAllText(tokenPath);
            }
            return _lastJWT;
        }
    }
}
```
The Program class contains the Main method, which initializes a SecretClient object using the custom credential class MyClientAssertionCredential. The Main method retrieves the Key Vault URL and secret name from environment variables, checks that they are set, and then enters an infinite loop where it fetches the secret from Key Vault and prints it to the console every 10 minutes.

The MyClientAssertionCredential class extends TokenCredential and is responsible for authenticating with Microsoft Entra ID using a client assertion. It reads the client ID, tenant ID, authority host, and federated token file path from the environment variables injected by Microsoft Entra Workload ID into the pod:

| Environment variable | Description |
| --- | --- |
| AZURE_AUTHORITY_HOST | The Microsoft Entra ID endpoint (https://login.microsoftonline.com/). |
| AZURE_CLIENT_ID | The client ID of the Microsoft Entra ID registered application or user-assigned managed identity. |
| AZURE_TENANT_ID | The tenant ID of the Microsoft Entra ID registered application or user-assigned managed identity. |
| AZURE_FEDERATED_TOKEN_FILE | The path of the projected service account token file. |

The class uses the ConfidentialClientApplicationBuilder to create a confidential client application that acquires tokens for the specified scopes. The ReadJWTFromFSOrCache method reads the JWT from the file system and caches it to minimize I/O operations.
You can find the code, Dockerfile, and container image links for other programming languages in the table below.

| Language | Library | Code | Image | Example | Has Windows Images |
| --- | --- | --- | --- | --- | --- |
| C# | microsoft-authentication-library-for-dotnet | Link | ghcr.io/azure/azure-workload-identity/msal-net | Link | ✅ |
| Go | microsoft-authentication-library-for-go | Link | ghcr.io/azure/azure-workload-identity/msal-go | Link | ✅ |
| Java | microsoft-authentication-library-for-java | Link | ghcr.io/azure/azure-workload-identity/msal-java | Link | ❌ |
| Node.JS | microsoft-authentication-library-for-js | Link | ghcr.io/azure/azure-workload-identity/msal-node | Link | ❌ |
| Python | microsoft-authentication-library-for-python | Link | ghcr.io/azure/azure-workload-identity/msal-python | Link | ❌ |

The application code retrieves the secret value specified by the SECRET_NAME parameter and logs it to the standard output, so you can use the following Bash script to display the logs generated by the pod.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the pod exists
POD=$(kubectl get pod $POD_NAME -n $NAMESPACE -o 'jsonpath={.metadata.name}')

if [[ -z $POD ]]; then
  echo "No [$POD_NAME] pod found in [$NAMESPACE] namespace."
  exit
fi

# Read logs from the pod
echo "Reading logs from [$POD_NAME] pod..."
kubectl logs $POD -n $NAMESPACE
```

The script should generate an output similar to the following:

```
Reading logs from [demo-pod] pod...

START 02/10/2025 11:01:36 (demo-pod)
Your secret is admin
```

Alternatively, you can use the Azure Identity client libraries in your workload code to acquire a security token from Microsoft Entra ID using the credentials of the registered application or user-assigned managed identity federated with the Kubernetes service account. You can choose one of the following approaches:

- Use DefaultAzureCredential, which attempts to use the WorkloadIdentityCredential.
- Create a ChainedTokenCredential instance that includes WorkloadIdentityCredential.
- Use WorkloadIdentityCredential directly.

The following table provides the minimum package version required for each language ecosystem's client library.

| Ecosystem | Library | Minimum version |
| --- | --- | --- |
| .NET | Azure.Identity | 1.9.0 |
| C++ | azure-identity-cpp | 1.6.0 |
| Go | azidentity | 1.3.0 |
| Java | azure-identity | 1.9.0 |
| Node.js | @azure/identity | 3.2.0 |
| Python | azure-identity | 1.13.0 |

In the following C# code sample, DefaultAzureCredential is used. This credential type relies on the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault.

```csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

string keyVaultUrl = Environment.GetEnvironmentVariable("KEYVAULT_URL");
string secretName = Environment.GetEnvironmentVariable("SECRET_NAME");

var client = new SecretClient(
    new Uri(keyVaultUrl),
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync(secretName);
```
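If you prefer to be explicit about the credential chain, the following sketch wires WorkloadIdentityCredential directly, as described in the second option above. It is a minimal illustration, not the article's sample code; the ManagedIdentityCredential fallback is an assumption that only matters if the same binary also runs on hosts that expose a managed identity endpoint.

```csharp
using System;
using Azure.Core;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class ExplicitWorkloadIdentity
{
    static void Main()
    {
        string keyVaultUrl = Environment.GetEnvironmentVariable("KEYVAULT_URL");
        string secretName = Environment.GetEnvironmentVariable("SECRET_NAME");

        // WorkloadIdentityCredential reads AZURE_CLIENT_ID, AZURE_TENANT_ID, and
        // AZURE_FEDERATED_TOKEN_FILE injected by the workload identity webhook.
        TokenCredential credential = new ChainedTokenCredential(
            new WorkloadIdentityCredential(),
            new ManagedIdentityCredential());

        var client = new SecretClient(new Uri(keyVaultUrl), credential);
        KeyVaultSecret secret = client.GetSecret(secretName);
        Console.WriteLine($"Your secret is {secret.Value}");
    }
}
```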
### Hands-On Lab: Azure Key Vault Provider for Secrets Store CSI Driver in AKS

The Secrets Store Container Storage Interface (CSI) Driver on Azure Kubernetes Service (AKS) provides various methods of identity-based access to your Azure Key Vault. You can use one of the following access methods:

- Service Connector with managed identity
- Workload ID
- User-assigned managed identity

This section focuses on the Workload ID option; please see the documentation for the other methods.

Run the following Bash script to upgrade your AKS cluster with the Azure Key Vault provider for Secrets Store CSI Driver capability. It uses the az aks addon enable command to enable the azure-keyvault-secrets-provider add-on. The add-on creates a user-assigned managed identity you can use to authenticate to your key vault. Alternatively, you can use a bring-your-own user-assigned managed identity.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Enable Addon
echo "Checking if the [azure-keyvault-secrets-provider] addon is enabled in the [$AKS_NAME] AKS cluster..."
az aks addon show \
  --addon azure-keyvault-secrets-provider \
  --name $AKS_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME &>/dev/null

if [[ $? != 0 ]]; then
  echo "The [azure-keyvault-secrets-provider] addon is not enabled in the [$AKS_NAME] AKS cluster"
  echo "Enabling the [azure-keyvault-secrets-provider] addon in the [$AKS_NAME] AKS cluster..."
  az aks addon enable \
    --addon azure-keyvault-secrets-provider \
    --enable-secret-rotation \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME
else
  echo "The [azure-keyvault-secrets-provider] addon is already enabled in the [$AKS_NAME] AKS cluster"
fi
```

You can create a user-assigned managed identity for the workload, create federated credentials, and assign it the proper permissions to read secrets from the source Key Vault using the create-managed-identity.sh Bash script.

The next step is creating an instance of the SecretProviderClass custom resource in your workload namespace. The SecretProviderClass is a namespaced resource in the Secrets Store CSI Driver that provides driver configurations and provider-specific parameters to the CSI driver. The SecretProviderClass lets you indicate the client ID of the user-assigned managed identity used to read secret material from Key Vault, and the list of secrets, keys, and certificates to read from Key Vault. For each object, you can optionally indicate an alternative name, or alias, using the objectAlias property; in this case, the driver creates a file with the alias as its name. You can even indicate a specific version of a secret, key, or certificate, or retrieve the latest version by assigning objectVersion a null value or empty string.
```bash
#!/bin/bash

# For more information, see:
# https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver
# https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get the managed identity client id
echo "Retrieving clientId for [$MANAGED_IDENTITY_NAME] managed identity..."
CLIENT_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query clientId \
  --output tsv)

if [[ -n $CLIENT_ID ]]; then
  echo "[$CLIENT_ID] clientId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve clientId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Create the SecretProviderClass for the secret store CSI driver with Azure Key Vault provider
echo "Creating the SecretProviderClass for the secret store CSI driver with Azure Key Vault provider..."
cat <<EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: $SECRET_PROVIDER_CLASS_NAME
spec:
  provider: azure
  parameters:
    clientID: "$CLIENT_ID"
    keyvaultName: "$KEY_VAULT_NAME"
    tenantId: "$TENANT_ID"
    objects: |
      array:
        - |
          objectName: username
          objectAlias: username
          objectType: secret
          objectVersion: ""
        - |
          objectName: password
          objectAlias: password
          objectType: secret
          objectVersion: ""
EOF
```

The Bash script creates a SecretProviderClass custom resource configured to read the latest value of the username and password secrets from the source Key Vault. You can now use the following Bash script to deploy the sample application.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Create the pod
echo "Creating the [$POD_NAME] pod in the [$NAMESPACE] namespace..."
cat <<EOF | kubectl apply -n $NAMESPACE -f -
kind: Pod
apiVersion: v1
metadata:
  name: $POD_NAME
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: $SERVICE_ACCOUNT_NAME
  containers:
    - name: nginx
      image: nginx
      resources:
        requests:
          memory: "32Mi"
          cpu: "50m"
        limits:
          memory: "64Mi"
          cpu: "100m"
      volumeMounts:
        - name: secrets-store
          mountPath: "/mnt/secrets"
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "$SECRET_PROVIDER_CLASS_NAME"
EOF
```

The YAML manifest contains a volume definition called secrets-store that uses the secrets-store.csi.k8s.io Secrets Store CSI Driver and references the SecretProviderClass resource created in the previous step by name. It defines a pod with a container named nginx that mounts the secrets-store volume in read-only mode. On pod start and restart, the driver communicates with the provider using gRPC to retrieve the secret content from the Key Vault resource specified in the SecretProviderClass custom resource.

You can run the following Bash script to print the value of each file in the /mnt/secrets mounted volume, one for each secret specified in the SecretProviderClass custom resource.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the pod exists
POD=$(kubectl get pod $POD_NAME -n $NAMESPACE -o 'jsonpath={.metadata.name}')

if [[ -z $POD ]]; then
  echo "No [$POD_NAME] pod found in [$NAMESPACE] namespace."
  exit
fi

# List secrets from /mnt/secrets volume
echo "Reading files from [/mnt/secrets] volume in [$POD_NAME] pod..."
FILES=$(kubectl exec $POD -n $NAMESPACE -- ls /mnt/secrets)

# Retrieve secrets from /mnt/secrets volume
for FILE in ${FILES[@]}; do
  echo "Retrieving [$FILE] secret from [$KEY_VAULT_NAME] key vault..."
  kubectl exec $POD --stdin --tty -n $NAMESPACE -- cat /mnt/secrets/$FILE
  echo
  sleep 1
done
```
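From application code, the same files can be loaded without any Azure SDK. The following C# sketch mirrors the `ls` and `cat` loop above; the mount path is an assumption that must match the pod's volumeMounts.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class MountedSecretsLoader
{
    // Read every file under the CSI mount into a name/value map.
    static Dictionary<string, string> LoadSecrets(string mountPath = "/mnt/secrets")
    {
        var secrets = new Dictionary<string, string>();
        foreach (string path in Directory.GetFiles(mountPath))
        {
            secrets[Path.GetFileName(path)] = File.ReadAllText(path);
        }
        return secrets;
    }

    static void Main()
    {
        foreach (var (name, value) in LoadSecrets())
        {
            // Avoid printing actual secret values in real applications.
            Console.WriteLine($"{name}: {value.Length} characters");
        }
    }
}
```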
### Hands-On Lab: Dapr Secret Store for Key Vault

Distributed Application Runtime (Dapr) is a versatile, event-driven runtime that can help you write and implement simple, portable, resilient, and secure microservices. Dapr works together with Kubernetes clusters such as Azure Kubernetes Service (AKS) and Azure Container Apps as an abstraction layer to provide a low-maintenance and scalable platform. The first step is running the following script, which checks whether Dapr is already installed on your AKS cluster and, if not, installs the Dapr extension. For more information, see Install the Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Install the k8s-extension Azure CLI extension
echo "Checking if the [k8s-extension] extension is already installed in the [$SUBSCRIPTION_NAME] subscription..."
az extension show --name k8s-extension &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [k8s-extension] extension actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Installing [k8s-extension] extension in the [$SUBSCRIPTION_NAME] subscription..."

  # install the extension
  az extension add --name k8s-extension

  if [[ $? == 0 ]]; then
    echo "[k8s-extension] extension successfully installed in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to install [k8s-extension] extension in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[k8s-extension] extension already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Check if the KubernetesConfiguration resource provider is registered in your Azure subscription
echo "Checking if the [Microsoft.KubernetesConfiguration] resource provider is already registered in the [$SUBSCRIPTION_NAME] subscription..."
az provider show --namespace Microsoft.KubernetesConfiguration &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [Microsoft.KubernetesConfiguration] resource provider actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Registering [Microsoft.KubernetesConfiguration] resource provider in the [$SUBSCRIPTION_NAME] subscription..."

  # register the resource provider
  az provider register --namespace Microsoft.KubernetesConfiguration

  if [[ $? == 0 ]]; then
    echo "[Microsoft.KubernetesConfiguration] resource provider successfully registered in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to register [Microsoft.KubernetesConfiguration] resource provider in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[Microsoft.KubernetesConfiguration] resource provider already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Check if the ExtensionTypes feature is registered in your Azure subscription
echo "Checking if the [ExtensionTypes] feature is already registered in the [Microsoft.KubernetesConfiguration] namespace..."
az feature show --namespace Microsoft.KubernetesConfiguration --name ExtensionTypes &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [ExtensionTypes] feature actually exists in the [Microsoft.KubernetesConfiguration] namespace"
  echo "Registering [ExtensionTypes] feature in the [Microsoft.KubernetesConfiguration] namespace..."

  # register the feature
  az feature register --namespace Microsoft.KubernetesConfiguration --name ExtensionTypes

  if [[ $? == 0 ]]; then
    echo "[ExtensionTypes] feature successfully registered in the [Microsoft.KubernetesConfiguration] namespace"
  else
    echo "Failed to register [ExtensionTypes] feature in the [Microsoft.KubernetesConfiguration] namespace"
    exit
  fi
else
  echo "[ExtensionTypes] feature already exists in the [Microsoft.KubernetesConfiguration] namespace"
fi

# Check if the Dapr extension is installed on your AKS cluster
echo "Checking if the [Dapr] extension is already installed on the [$AKS_NAME] AKS cluster..."
az k8s-extension show \
  --name dapr \
  --cluster-name $AKS_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --cluster-type managedClusters &>/dev/null

if [[ $? != 0 ]]; then
  echo "No [Dapr] extension actually exists on the [$AKS_NAME] AKS cluster"
  echo "Installing [Dapr] extension on the [$AKS_NAME] AKS cluster..."

  # install the extension
  az k8s-extension create \
    --name dapr \
    --cluster-name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --cluster-type managedClusters \
    --extension-type "Microsoft.Dapr" \
    --scope cluster \
    --release-namespace "dapr-system"

  if [[ $? == 0 ]]; then
    echo "[Dapr] extension successfully installed on the [$AKS_NAME] AKS cluster"
  else
    echo "Failed to install [Dapr] extension on the [$AKS_NAME] AKS cluster"
    exit
  fi
else
  echo "[Dapr] extension already exists on the [$AKS_NAME] AKS cluster"
fi
```
You can create a user-assigned managed identity for the workload, create federated credentials, and assign it the proper permissions to read secrets from the source Key Vault using the create-managed-identity.sh Bash script. Then, you can run the following Bash script to retrieve the clientId of the user-assigned managed identity used to access Key Vault and create a Dapr secret store component for Azure Key Vault. The YAML manifest of the Dapr component assigns the following values to the component metadata:

- The Key Vault name to the vaultName attribute.
- The client ID of the user-assigned managed identity to the azureClientId attribute.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get the managed identity client id
echo "Retrieving clientId for [$MANAGED_IDENTITY_NAME] managed identity..."
CLIENT_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query clientId \
  --output tsv)

if [[ -n $CLIENT_ID ]]; then
  echo "[$CLIENT_ID] clientId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve clientId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Create the Dapr secret store for Azure Key Vault
echo "Creating the secret store for [$KEY_VAULT_NAME] Azure Key Vault..."
cat <<EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: $SECRET_STORE_NAME
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
    - name: vaultName
      value: ${KEY_VAULT_NAME,,}
    - name: azureClientId
      value: $CLIENT_ID
EOF
```

The next step is deploying the demo application using the Bash script shown after this list. The service account used by the Kubernetes deployment is federated with the user-assigned managed identity. Also note that the deployment is configured to use Dapr via the following Kubernetes annotations:

- dapr.io/app-id: the unique ID of the application, used for service discovery, state encapsulation, and the pub/sub consumer ID.
- dapr.io/enabled: setting this parameter to true injects the Dapr sidecar into the pod.
- dapr.io/app-port: tells Dapr which port your application is listening on.

For more information on Dapr annotations, see Dapr arguments and annotations for daprd, CLI, and Kubernetes.
```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the namespace exists in the cluster
RESULT=$(kubectl get namespace -o 'jsonpath={.items[?(@.metadata.name=="'$NAMESPACE'")].metadata.name}')

if [[ -n $RESULT ]]; then
  echo "[$NAMESPACE] namespace already exists in the cluster"
else
  echo "[$NAMESPACE] namespace does not exist in the cluster"
  echo "Creating [$NAMESPACE] namespace in the cluster..."
  kubectl create namespace $NAMESPACE
fi

# Create deployment
echo "Creating [$APP_NAME] deployment in the [$NAMESPACE] namespace..."
cat <<EOF | kubectl apply -n $NAMESPACE -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: $APP_NAME
  labels:
    app: $APP_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $APP_NAME
      azure.workload.identity/use: "true"
  template:
    metadata:
      labels:
        app: $APP_NAME
        azure.workload.identity/use: "true"
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "$APP_NAME"
        dapr.io/app-port: "80"
    spec:
      serviceAccountName: $SERVICE_ACCOUNT_NAME
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
EOF
```

You can run the following Bash script to connect to the demo pod and print the values of the two sample secrets stored in Key Vault.

```bash
#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get pod name
POD=$(kubectl get pod -n $NAMESPACE -o 'jsonpath={.items[].metadata.name}')

if [[ -z $POD ]]; then
  echo 'no pod found, please check the name of the deployment and namespace'
  exit
fi

# Retrieve the secrets via the Dapr secrets API
for SECRET in ${SECRETS[@]}; do
  echo "Retrieving [$SECRET] secret from [$KEY_VAULT_NAME] key vault..."
  json=$(kubectl exec --stdin --tty -n $NAMESPACE -c $CONTAINER $POD \
    -- curl http://localhost:3500/v1.0/secrets/key-vault-secret-store/$SECRET; echo)
  echo $json | jq .
done
```
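If you prefer a typed client over raw HTTP, the Dapr .NET SDK wraps the same secrets API. The sketch below assumes the Dapr.Client NuGet package; the store name key-vault-secret-store matches the component name used in the curl call above.

```csharp
using System;
using System.Threading.Tasks;
using Dapr.Client;

class DaprSdkSecrets
{
    static async Task Main()
    {
        var client = new DaprClientBuilder().Build();

        // GetSecretAsync calls GET /v1.0/secrets/{store}/{name} on the sidecar
        // and returns the JSON map as a dictionary.
        var secret = await client.GetSecretAsync("key-vault-secret-store", "username");
        foreach (var kv in secret)
        {
            Console.WriteLine($"{kv.Key} = {kv.Value}");
        }
    }
}
```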
Hands-On Lab: External Secrets Operator with Azure Key Vault

In this section you will see the steps to configure the External Secrets Operator to use Microsoft Entra Workload ID to access an Azure Key Vault resource. You can install the operator on your AKS cluster using Helm, as shown in the following Bash script:

#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Add the external secrets repository
helm repo add external-secrets https://charts.external-secrets.io

# Update local Helm chart repository cache
helm repo update

# Deploy external secrets via Helm
helm upgrade external-secrets external-secrets/external-secrets \
  --install \
  --namespace external-secrets \
  --create-namespace \
  --set installCRDs=true

Then, you can create a user-assigned managed identity for the workload, create federated credentials for it, and assign it the permissions required to read secrets from the source Key Vault using the create-managed-identity.sh Bash script. Next, you can run the following Bash script to retrieve the vaultUri of your Key Vault resource and create a secret store custom resource. The YAML manifest of the secret store assigns the following values to the properties of the azurekv provider for Key Vault:

- authType: WorkloadIdentity configures the provider to use a user-assigned managed identity with the proper permissions to access Key Vault.
- vaultUrl: Specifies the vaultUri endpoint URL of the Key Vault resource.
- serviceAccountRef.name: Specifies the Kubernetes service account in the workload namespace that is federated with the user-assigned managed identity.

#!/bin/bash

# For more information, see:
# https://medium.com/@rcdinesh1/access-secrets-via-argocd-through-external-secrets-9173001be885
# https://external-secrets.io/latest/provider/azure-key-vault/

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get key vault URL
VAULT_URL=$(az keyvault show \
  --name $KEY_VAULT_NAME \
  --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME \
  --query properties.vaultUri \
  --output tsv \
  --only-show-errors)

if [[ -z $VAULT_URL ]]; then
  echo "[$KEY_VAULT_NAME] key vault URL not found"
  exit
fi

# Create secret store
echo "Creating the [$SECRET_STORE_NAME] secret store..."
cat <<EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: $SECRET_STORE_NAME
spec:
  provider:
    azurekv:
      authType: WorkloadIdentity
      vaultUrl: "$VAULT_URL"
      serviceAccountRef:
        name: $SERVICE_ACCOUNT_NAME
EOF

# Get the secret store
kubectl get secretstore $SECRET_STORE_NAME -n $NAMESPACE -o yaml

For more information on secret stores for Key Vault, see Azure Key Vault in the official documentation of the External Secrets Operator.

Azure Key Vault manages different object types. The External Secrets Operator supports keys, secrets, and certificates. Simply prefix the object name with key, secret, or cert to retrieve the desired type (the default type is secret).

Object Type    Return Value
secret         The raw secret value.
key            A JWK which contains the public key. Azure Key Vault does not export the private key.
certificate    The raw CER contents of the x509 certificate.

You can create one or more ExternalSecret objects in your workload namespace to read keys, secrets, and certificates from Key Vault. To create a Kubernetes secret from an Azure Key Vault secret, you need to use Kind=ExternalSecret. The following Bash script creates an ExternalSecret object configured to reference the secret store created in the previous step. The ExternalSecret object has two sections:

- dataFrom: This section contains a find element that uses regular expressions to retrieve any secret whose name starts with user. For each secret, the Key Vault provider will create a key-value mapping in the data section of the Kubernetes secret using the name and value of the corresponding Key Vault secret.
- data: This section specifies the explicit type and name of the secrets, keys, and certificates to retrieve from Key Vault. In this sample, it tells the Key Vault provider to create a key-value mapping in the data section of the Kubernetes secret for the password Key Vault secret, using password as the key.

For more information on external secrets, see Azure Key Vault in the official documentation of the External Secrets Operator.

#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Create secrets
cat <<EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: $EXTERNAL_SECRET_NAME
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: $SECRET_STORE_NAME
  target:
    name: $EXTERNAL_SECRET_NAME
    creationPolicy: Owner
  dataFrom:
  # find all secrets starting with user
  - find:
      name:
        regexp: "^user"
  data:
  # explicit type and name of secret in the Azure KV
  - secretKey: password
    remoteRef:
      key: secret/password
EOF
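To make the type prefixes concrete, here is a hedged sketch of additional data entries that could be appended to the ExternalSecret manifest above; the my-key and my-certificate object names are purely illustrative:

  data:
  # hypothetical examples of the key/ and cert/ prefixes described above
  - secretKey: signing-key
    remoteRef:
      key: key/my-key             # returns a JWK containing the public key
  - secretKey: tls-certificate
    remoteRef:
      key: cert/my-certificate    # returns the raw CER contents of the certificate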
Finally, you can run the following Bash script to print the key-value mappings contained in the Kubernetes secret created by the External Secrets Operator.

#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Print secret values from the Kubernetes secret
json=$(kubectl get secret $EXTERNAL_SECRET_NAME -n $NAMESPACE -o jsonpath='{.data}')

# Decode the base64 of each value in the returned json
echo $json | jq -r 'to_entries[] | .key + ": " + (.value | @base64d)'

Conclusions

In this article, we explored different methods for reading secrets from Azure Key Vault in Azure Kubernetes Service (AKS). Each technology offers its own advantages and considerations. Here's a summary:

Microsoft Entra Workload ID:
- Transparently assigns a user-defined managed identity to a pod or deployment.
- Allows using Microsoft Entra integrated security and Azure RBAC for authorization.
- Provides secure access to Azure Key Vault and other managed services.

Azure Key Vault provider for Secrets Store CSI Driver:
- Secrets, keys, and certificates can be accessed as files from mounted volumes.
- Optionally, Kubernetes secrets can be created to store keys, secrets, and certificates from Key Vault.
- No need for Azure-specific libraries to access secrets.
- Simplifies secret management with transparent integration.

Dapr Secret Store for Key Vault:
- Allows applications to retrieve secrets from various secret stores, including Azure Key Vault.
- Simplifies secret management with Dapr's consistent API.
- Supports Azure Key Vault integration with managed identities.
- Supports other secret stores, such as AWS Secrets Manager, Google Cloud Key Management, and HashiCorp Vault.

External Secrets Operator:
- Manages secrets stored in external secret stores such as Azure Key Vault, AWS Secrets Manager, Google Cloud Key Management, HashiCorp Vault, and more.
- Provides synchronization of Key Vault secrets into Kubernetes secrets.
- Simplifies secret management with Kubernetes-native integration.

Depending on your requirements and preferences, you can choose the method that best fits your use case. Each technology offers unique features and benefits to securely access and manage secrets in your AKS workloads. For more information and detailed documentation on each mechanism, refer to the resources provided in this article.
Introducing Serverless GPUs on Azure Container Apps

We're excited to announce the public preview of Azure Container Apps Serverless GPUs accelerated by NVIDIA. This feature provides customers with NVIDIA A100 GPUs and NVIDIA T4 GPUs in a serverless environment, enabling effortless scaling and flexibility for real-time custom model inferencing and other machine learning tasks.

Serverless GPUs accelerate the speed of your AI development team by allowing you to focus on your core AI code and less on managing infrastructure when using NVIDIA accelerated computing. They provide an excellent middle layer option between Azure AI Model Catalog's serverless APIs and hosting models on managed compute. They also provide full data governance, as your data never leaves the boundaries of your container, while still providing a managed, serverless platform from which to build your applications. Serverless GPUs are designed to meet the growing demands of modern applications by providing powerful NVIDIA accelerated computing resources without the need for dedicated infrastructure management.

"Azure Container Apps' serverless GPU offering is a leap forward for AI workloads. Serverless NVIDIA GPUs are well suited for a wide array of AI workloads from real-time inferencing scenarios with custom models to fine-tuning. NVIDIA is also working with Microsoft to bring NVIDIA NIM microservices to Azure Container Apps to optimize AI inference performance." - Dave Salvator, Director, Accelerated Computing Products, NVIDIA

Key benefits of serverless GPUs

- Scale-to-zero GPUs: Support for serverless scaling of NVIDIA A100 and T4 GPUs.
- Per-second billing: Pay only for the GPU compute you use.
- Built-in data governance: Your data never leaves the container boundary.
- Flexible compute options: Choose between NVIDIA A100 and T4 GPUs.
- Middle layer for AI development: Bring your own model on a managed, serverless compute platform.

Scenarios

Whether you choose NVIDIA A100 or T4 GPUs depends on the types of apps you're creating. The following are a couple of example scenarios. For each scenario with serverless GPUs, you pay only for the compute you use with per-second billing, and your apps automatically scale in and out from zero to meet demand.

NVIDIA T4

- Real-time and batch inferencing: Using custom open-source models with fast startup times, automatic scaling, and a per-second billing model, serverless GPUs are ideal for dynamic applications that don't already have a serverless API in the model catalog.

NVIDIA A100

- Compute-intensive machine learning scenarios: Significantly speed up applications that implement fine-tuned custom generative AI models, deep learning, or neural networks.
- High performance computing (HPC) and data analytics: Applications that require complex calculations or simulations, such as scientific computing and financial modeling, as well as accelerated data processing and analysis across massive datasets.

Get started with serverless GPUs

Serverless GPUs are now available for workload profile environments in the West US 3, Australia East, and Sweden Central regions, with more regions to come. You will need to have quota enabled on your subscription in order to use serverless GPUs. By default, all Microsoft Enterprise Agreement customers will have one quota. If additional quota is needed, please request it here.

Note: In order to achieve the best performance with serverless GPUs, use an Azure Container Registry (ACR) with artifact streaming enabled for your image tag. Follow steps here to enable artifact streaming on your ACR.
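Because availability differs by region, it can help to check which GPU workload profile types your target region currently offers before requesting quota. The command below is a sketch; the region is illustrative and the profile type names returned will vary:

# List the workload profile types (including GPU profiles, where available) supported in a region
az containerapp env workload-profile list-supported \
  --location westus3 \
  --output table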
From the portal, you can enable GPUs for your Consumption app on the Container tab when creating your Container App or your Container App Job. You can also add a new consumption GPU workload profile to your existing Container App environment through the workload profiles UX in the portal or through the CLI commands for managing workload profiles (see the sketch at the end of this section).

Deploy a sample Stable Diffusion app

To try out serverless GPUs, you can use the Stable Diffusion image which is provided as a quickstart during the container app create experience:

1. On the Container tab, select the Use quickstart image box.
2. In the quickstart image dropdown, select GPU hello world container.

If you wish to pull the GPU container image into your own ACR to enable artifact streaming for improved performance, or if you wish to manually enter the image, you can find the image at mcr.microsoft.com/k8se/gpu-quickstart:latest. For full steps on using your own image with serverless GPUs, see the tutorial on using serverless GPUs in Azure Container Apps.

Learn more about serverless GPUs

With serverless GPUs, Azure Container Apps now simplifies the development of your AI applications by providing scale-to-zero compute, pay-as-you-go pricing, reduced infrastructure management, and more. To learn more, visit:

- Using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn
- Tutorial: Generate images using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn
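If you prefer the CLI route mentioned above for adding a GPU workload profile to an existing environment, the sketch below shows the general shape of the command. The environment and profile names are hypothetical, and the profile type is an assumption; use a value returned by az containerapp env workload-profile list-supported for your region:

# Add a serverless (consumption) GPU workload profile to an existing environment
az containerapp env workload-profile add \
  --name my-aca-environment \
  --resource-group my-resource-group \
  --workload-profile-name serverless-gpu \
  --workload-profile-type Consumption-GPU-NC24-A100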
Azure DevOps - Agent pool report and replace.

As usage of Azure DevOps organisations grows, so does the number of projects, repositories, pipelines, and agent pools in use. With new services available such as Managed DevOps Pools, it can appear a mammoth task for a central IT function to manually trawl through every pipeline, noting down each agent pool being used. Replacing these values is potentially even more complicated after creating the new agent pools and mapping them, with potential for human error.
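As a starting point for such a report, the Azure DevOps CLI extension can enumerate the agent pools in an organisation. This is a hedged sketch, assuming the azure-devops extension is installed and you are logged in; the organisation URL and pool ID are hypothetical:

# Requires: az extension add --name azure-devops
ORG_URL="https://dev.azure.com/my-organization"

# Report the agent pools defined in the organisation
az pipelines pool list --organization $ORG_URL --output table

# List the agents registered in a specific pool (the pool id is illustrative)
az pipelines agent list --pool-id 12 --organization $ORG_URL --output table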
Self Hosted AI Application on AKS in a day with KAITO and Copilot.

In this blog post I document my experience of spending a full day using KAITO and Copilot to accelerate the deployment and development of a self-managed, AI-enabled chatbot deployed on a managed cluster. The goal is to showcase how quickly, using a mix of AI tooling, we can go from zero to a self-hosted, tuned LLM and chatbot application.

At the top of this article I want to share my perspective on the future of projects such as KAITO. At the moment I believe KAITO to be somewhat ahead of its time: as most enterprises begin adopting abstracted artificial intelligence, it is brilliant to see projects like KAITO being developed, ready for the eventual abstraction pendulum to swing back, motivated by the usual factors such as increased skills in the market, cost, and governance. Enterprises will undoubtedly in the future look to take centralised control of the AI models being used by their enterprises as GPUs become cheaper, more readily available, and more powerful. When this shift happens, open source projects like KAITO will become commonplace in enterprises.

It is also my opinion that Kubernetes lends itself perfectly to be the AI platform of the future, a position shared by the CNCF (albeit both sources here may be somewhat biased). The resiliency, scaling, and existence of Kubernetes primitives such as "Jobs" mean that Kubernetes is already the de facto platform for machine learning training and inference. These same reasons also make Kubernetes the best underlying platform for AI development. Companies including DHL, Wayve, and even OpenAI all run ML or AI workloads on Kubernetes already. That does not mean that data scientists and engineers will suddenly be creating Dockerfiles or exploring admission controllers; Kubernetes instead, as a platform, will be multiple layers of abstraction away (full scale self-service platform engineering). However, the engineers responsible for running and operating the platform will hail projects like KAITO.
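For context on what KAITO automates, its core user-facing object is a Workspace custom resource that pairs a GPU node requirement with a model preset. The manifest below is a minimal sketch based on KAITO's published examples, not taken from the blog post itself; the preset name and instance type are illustrative:

cat <<EOF | kubectl apply -f -
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  instanceType: "Standard_NC12s_v3"   # GPU VM size KAITO should provision for this model
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: "falcon-7b"                 # model preset; KAITO pulls and serves the model
EOF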
Seamlessly Integrating Azure KeyVault with Jarsigner for Enhanced Security

Dive into the world of enhanced security with our step-by-step guide on integrating Azure KeyVault with Jarsigner. Whether you're a beginner or an experienced developer, this guide will walk you through the process of securely signing your Java applications using Azure's robust security features. Learn how to set up, execute, and verify digital signatures with ease, ensuring your applications are protected in an increasingly digital world. Join us to boost your security setup now!
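To give a flavour of what the guide covers, signing with a Key Vault-held certificate typically goes through the Azure Key Vault JCA provider rather than a local keystore, so the private key never leaves the vault. The invocation below is a hedged sketch rather than the guide's exact steps; the vault URI, credential variables, jar names, certificate alias, and provider jar path are all illustrative, and flags may vary by provider version:

# Credentials for the JCA provider are commonly supplied via environment variables
export AZURE_KEYVAULT_URI="https://my-key-vault.vault.azure.net"
export AZURE_CLIENT_ID="<service-principal-client-id>"
export AZURE_CLIENT_SECRET="<service-principal-secret>"
export AZURE_TENANT_ID="<tenant-id>"

# Sign app.jar with a certificate stored in Key Vault
jarsigner -keystore NONE -storetype AzureKeyVault \
  -signedjar app-signed.jar app.jar "my-signing-cert" \
  -providerName AzureKeyVault \
  -providerClass com.azure.security.keyvault.jca.KeyVaultJcaProvider \
  -J-classpath -J/path/to/azure-security-keyvault-jca.jar \
  -storepass ""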
Learn New Skills in the New Year

New year's resolution: Start writing better code faster in 2025. Kick off the new year by learning new developer skills and elevating your career to the next level. In this post, we explore learning resources and live events that will help you build critical skills and get started with cutting-edge technologies. Learn how to build custom agents, code intelligent apps with familiar tools, discover new possibilities in .NET 9, use Copilot for testing and debugging, and more. Plus, get details about using GitHub Copilot in Visual Studio Code—for free!

New AI for Developers page
Check out the new AI for Developers page. It's packed with free GitHub courses on building apps, machine learning, and mastering GitHub Copilot for paired programming. Learn your way and skill up for what's next in AI.

Use GitHub Copilot in Visual Studio Code for free
Did you hear the news? You can now use GitHub Copilot in Visual Studio Code for free. Get details about the new Copilot Free plan and add Copilot to your developer toolbox.

What is Copilot Studio?
Have questions about Copilot Studio? This article from Microsoft Learn covers all the basics you need to know about Copilot Studio—the low-code tool for easily building agents and extending Microsoft 365 Copilot.

From C# to ChatGPT: Build Generative AI Solutions with Azure
Combine your C# skills with the cutting-edge power of ChatGPT and Azure OpenAI Service. This free learning path introduces you to building GenAI solutions, using REST APIs, SDKs, and Azure tools to create more intelligent applications.

Register for the Powerful Devs Conference + Hackathon
Register for the Powerful Devs Conference + Hackathon (February 12-28, 2025) and get more out of Power Platform. This one-day online conference is followed by a 2-week hackathon focused on building intelligent applications with less effort.

Code the future with Java and AI: RSVP for Microsoft JDConf 2025 today
Get ready for JDConf 2025—Microsoft's annual event for Java developers. Taking place April 9-10, this year's event will have three separate live streams to cover different regions. Join to explore tools and skills for building modern apps in the cloud and integrating AI.

Build custom agents for Microsoft Teams
Learn how to build custom agents for Microsoft Teams. This free learning path will teach you about the different copilot stacks, working with Azure OpenAI, and building a custom engine agent. Start building intelligent Microsoft Teams apps using LLMs and AI components.

Microsoft Learn: Debug your app with GitHub Copilot in Visual Studio
Debug more efficiently using GitHub Copilot. This Microsoft Learn article shows you how. Discover how Copilot will answer detailed questions about your code and provide bug fixes.

Make Azure AI Real: Watch Season 2
Elevate your AI game with Make Azure AI Real on demand. Season 2 digs into the latest Azure AI advancements, with practical demos, code samples, and real-world use cases.

GitHub Copilot Bootcamp
Streamline your workflow with GitHub Copilot—craft more effective prompts and automate repetitive tasks like testing. This GitHub Copilot Bootcamp is a 4-part live streaming series that will help you master GitHub Copilot.

10 Days of GenAI – Gift Guide Edition
Start building your own Gen AI application. These short videos outline 10 steps for creating your app—choose a model, add functions, fine-tune responses, and more.
Extend Microsoft 365 Copilot with declarative agents using Visual Studio Code
Check out this new learning path from Microsoft Learn to discover how you can extend Microsoft 365 Copilot with declarative agents using VS Code. Learn about declarative agents and how they work.

Developer's guide to building your own agents
Want to build your own agents? Watch this Ignite session on demand for a look at the new agent development tools. Find out how to create agents built on Microsoft 365 Copilot or your custom AI engine.

Master distributed application development with .NET Aspire
Get started with .NET Aspire—an opinionated, cloud-ready stack for building distributed applications with .NET. This series covers everything from setup to deployment. Start your journey toward mastering distributed app development.

Learn: What's new in .NET 9
Discover what's new in .NET 9. Learn about new features for AI, improvements for building cloud-native apps, performance enhancements, updates to C#, and more. Read the overview and get started with .NET 9.

Become a .NET AI engineer using the OpenAI library for .NET
Use your .NET skills to become an AI engineer. With the OpenAI library, .NET developers can quickly master critical AI skills and apply them to real-world apps. Read the blog to learn more about the OpenAI library for .NET.

Test like a pro with Playwright and GitHub Copilot
Supercharge your testing using Playwright and GitHub Copilot. Watch this in-depth demo and discover how you can easily create end-to-end tests using Playwright's powerful built-in code generator.

Other news and resources from around Microsoft

· Microsoft Learn: Why and how to adopt AI in your organization
· Microsoft Learn: Learn to use Copilot in Microsoft Fabric
· AI Toolkit for Visual Studio Code: Update highlights
· Teams Toolkit for Visual Studio Code update
· RAG Deep Dive: Live streams
· Learn Together: SQL database in Fabric
· Become an AI security expert using OpenAI with Azure Managed Identity
· Deploy, monitor, and manage development resources with Microsoft Dev Box
· Microsoft Playwright testing
· Introduction to artificial intelligence and Azure AI services
· Azure AI-900 Fundamentals Training event series
· Leveraging cloud-native infra for your intelligent apps
· Platform engineering with GitHub
· Extend declarative agents for Microsoft 365 Copilot with API plugins using Visual Studio Code
· Introducing the Microsoft 365 Agents SDK
· Azure Live Q&A events
· Get started with multimodal parsing for RAG using GPT-4o, Azure AI Search, and LlamaParse
Unlock New AI and Cloud Potential with .NET 9 & Azure: Faster, Smarter, and Built for the Future

.NET 9, now available to developers, marks a significant milestone in the evolution of the .NET platform, pushing the boundaries of performance, cloud-native development, and AI integration. This release, shaped by contributions from over 9,000 community members worldwide, introduces thousands of improvements that set the stage for the future of application development. With seamless integration with Azure and a focus on cloud-native development and AI capabilities, .NET 9 empowers developers to build scalable, intelligent applications with unprecedented ease.

Expanding Azure PaaS Support for .NET 9

With the release of .NET 9, a comprehensive range of Azure Platform as a Service (PaaS) offerings now fully support the platform's new capabilities, including the latest .NET SDK for any Azure developer. This extensive support allows developers to build, deploy, and scale .NET 9 applications with optimal performance and adaptability on Azure. Additionally, developers can access a wealth of architecture references and sample solutions to guide them in creating high-performance .NET 9 applications on Azure's powerful cloud services:

- Azure App Service: Run, manage, and scale .NET 9 web applications efficiently. Check out this blog to learn more about what's new in Azure App Service.
- Azure Functions: Leverage serverless computing to build event-driven .NET 9 applications with improved runtime capabilities.
- Azure Container Apps: Deploy microservices and containerized .NET 9 workloads with integrated observability.
- Azure Kubernetes Service (AKS): Run .NET 9 applications in a managed Kubernetes environment with expanded ARM64 support.
- Azure AI Services and Azure OpenAI Services: Integrate advanced AI and OpenAI capabilities directly into your .NET 9 applications.
- Azure API Management, Azure Logic Apps, Azure Cognitive Services, and Azure SignalR Service: Ensure seamless integration and scaling for .NET 9 solutions.

These services provide developers with a robust platform to build high-performance, scalable, and cloud-native applications while leveraging Azure's optimized environment for .NET.

Streamlined Cloud-Native Development with .NET Aspire

.NET Aspire is a game-changer for cloud-native applications, enabling developers to build distributed, production-ready solutions efficiently. Available in preview with .NET 9, Aspire streamlines app development, with cloud efficiency and observability at its core. The latest updates in Aspire include secure defaults, Azure Functions support, and enhanced container management. Key capabilities include:

- Optimized Azure Integrations: Aspire works seamlessly with Azure, enabling fast deployments, automated scaling, and consistent management of cloud-native applications.
- Easier Deployments to Azure Container Apps: Designed for containerized environments, .NET Aspire integrates with Azure Container Apps (ACA) to simplify the deployment process. Using the Azure Developer CLI (azd), developers can quickly provision and deploy .NET Aspire projects to ACA, with built-in support for Redis caching, application logging, and scalability (see the sketch after this list).
- Built-In Observability: A real-time dashboard provides insights into logs, distributed traces, and metrics, enabling local and production monitoring with Azure Monitor.

With these capabilities, .NET Aspire allows developers to deploy microservices and containerized applications effortlessly on ACA, streamlining the path from development to production in a fully managed, serverless environment.
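As a rough illustration of that azd workflow, the two commands below cover the typical provision-and-deploy loop for an Aspire app host project; this is a sketch of the general flow, not steps taken from the announcement:

# From the directory containing the .NET Aspire app host project
azd init   # inspect the project and generate deployment configuration
azd up     # provision the Azure resources (including ACA) and deploy the app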
Integrating AI into .NET: A Seamless Experience

In our ongoing effort to empower developers, we've made integrating AI into .NET applications simpler than ever. Our strategic partnerships, including collaborations with OpenAI, LlamaIndex, and Qdrant, have enriched the AI ecosystem and strengthened .NET's capabilities. This year alone, usage of Azure OpenAI services has surged to nearly a billion API calls per month, illustrating the growing impact of AI-powered .NET applications.

Real-World AI Solutions with .NET: .NET has been pivotal in driving AI innovations. From internal teams like Microsoft Copilot creating AI experiences with .NET Aspire to tools like GitHub Copilot, developed with .NET to enhance productivity in Visual Studio and VS Code, the platform showcases AI at its best. KPMG Clara is a prime example, developed to enhance audit quality and efficiency for 95,000 auditors worldwide. By leveraging .NET and scaling securely on Azure, KPMG implemented robust AI features aligned with strict industry standards, underscoring .NET and Azure as the backbone for high-performing, scalable AI solutions.

Performance Enhancements in .NET 9: Raising the Bar for Azure Workloads

.NET 9 introduces substantial performance upgrades with over 7,500 merged pull requests focused on speed and efficiency, ensuring .NET 9 applications run optimally on Azure. These improvements contribute to reduced cloud costs and provide a high-performance experience across Windows, Linux, and macOS. To see how significant these performance gains can be for cloud services, take a look at what past .NET upgrades achieved for Microsoft's high-scale internal services:

- Bing achieved a major reduction in startup times, enhanced efficiency, and decreased latency across its high-performance search workflows.
- Microsoft Teams improved efficiency by 50%, reduced latency by 30–45%, and achieved up to 100% gains in CPU utilization for key services, resulting in faster user interactions.
- Microsoft Copilot and other AI-powered applications benefited from optimized runtime performance, enabling scalable, high-quality experiences for users.

Upgrading to the latest .NET version offers similar benefits for cloud apps, optimizing both performance and cost-efficiency. For more information on updating your applications, check out the .NET Upgrade Assistant. For additional details on ASP.NET Core, .NET MAUI, NuGet, and more enhancements across the .NET platform, check out the full Announcing .NET 9 blog post.

Conclusion: Your Path to the Future with .NET 9 and Azure

.NET 9 isn't just an upgrade—it's a leap forward, combining cutting-edge AI integration, cloud-native development, and unparalleled performance. Paired with Azure's scalability, these advancements provide a trusted, high-performance foundation for modern applications. Get started by downloading .NET 9 and exploring its features. Leverage .NET Aspire for streamlined cloud-native development, deploy scalable apps with Azure, and embrace new productivity enhancements to build for the future. For additional insights on ASP.NET, .NET MAUI, NuGet, and more, check out the full Announcing .NET 9 blog post. Explore the future of cloud-native and AI development with .NET 9 and Azure—your toolkit for creating the next generation of intelligent applications.