Using OpenAI on Azure Web App
TOC Introduction to OpenAI System Architecture Architecture Focus of This Tutorial Setup Azure Resources File and Directory Structure ARM Template ARM Template From Azure Portal Running Locally Training Models and Training Data Predicting with the Model Publishing the Project to Azure Running on Azure Web App Training the Model Using the Model for Prediction Troubleshooting Startup Command Issue App Becomes Unresponsive After a Period az cli command for Linux webjobs fail Others Conclusion References 1. Introduction to OpenAI OpenAI is a leading artificial intelligence research and deployment company founded in December 2015. Its mission is to ensure that artificial general intelligence (AGI)—highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. OpenAI focuses on developing safe and scalable AI technologies and ensuring equitable access to these innovations. Known for its groundbreaking advancements in natural language processing, OpenAI has developed models like GPT (Generative Pre-trained Transformer), which powers applications for text generation, summarization, translation, and more. GPT models have revolutionized fields like conversational AI, creative writing, and programming assistance. OpenAI has also released models like Codex, designed to understand and generate computer code, and DALL·E, which creates images from textual descriptions. OpenAI operates with a unique hybrid structure: a for-profit company governed by a nonprofit entity to balance the development of AI technology with ethical considerations. The organization emphasizes safety, research transparency, and alignment to human values. By providing access to its models through APIs and fostering partnerships, OpenAI empowers developers, businesses, and researchers to leverage AI for innovative solutions across diverse industries. Its long-term goal is to ensure AI advances benefit humanity as a whole. 2. System Architecture Architecture Development Environment OS: Ubuntu Version: Ubuntu 18.04 Bionic Beaver Python Version: 3.7.3 Azure Resources App Service Plan: SKU - Premium Plan 0 V3 App Service: Platform - Linux (Python 3.9, Version 3.9.19) Storage Account: SKU - General Purpose V2 File Share: No backup plan Focus of This Tutorial This tutorial walks you through the following stages: Setting up Azure resources Running the project locally Publishing the project to Azure Running the application on Azure Troubleshooting common issues Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this session is listed in the table below. Local OS Windows Linux Mac V How to setup Azure resources Portal (i.e., REST api) ARM Bicep Terraform V V How to deploy project to Azure VSCode CLI Azure DevOps GitHub Action V 3. Setup Azure Resources File and Directory Structure Please open a bash terminal and enter the following commands: git clone https://github.com/theringe/azure-appservice-ai.git cd azure-appservice-ai bash ./openai/tools/add-venv.sh If you are using a Windows platform, use the following alternative PowerShell commands instead: git clone https://github.com/theringe/azure-appservice-ai.git cd azure-appservice-ai .\openai\tools\add-venv.cmd After completing the execution, you should see the following directory structure: File and Path Purpose openai/tools/add-venv.* The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial. 
.venv/openai-webjob/ A virtual environment specifically used for training models (i.e., calculating embedding vectors indeed). openai/webjob/requirements.txt The list of packages (with exact versions) required for the openai-webjob virtual environment. .venv/openai/ A virtual environment specifically used for the Flask application, enabling API endpoint access for querying predictions (i.e., suggestion). openai/requirements.txt The list of packages (with exact versions) required for the openai virtual environment. openai/ The main folder for this tutorial. openai/tools/arm-template.json The ARM template to setup all the Azure resources related to this tutorial, including an App Service Plan, a Web App, and a Storage Account. openai/tools/create-folder.* A script to create all directories required for this tutorial in the File Share, including train, model, and test. openai/tools/download-sample-training-set.* A script to download a sample training set from News-Headlines-Dataset-For-Sarcasm-Detection, containing headlines data from TheOnion and HuffPost, into the train directory of the File Share. openai/webjob/cal_embeddings.py A script for calculating embedding vectors from headlines. It loads the training set, applies the transformation on OpenAI API, and saves the embedding vectors in the model directory of the File Share. openai/App_Data/jobs/triggered/cal-embeddings/cal_embeddings.sh A shell script for Azure App Service web jobs. It activates the openai-webjob virtual environment and starts the cal_embeddings.py script. openai/api/app.py Code for the Flask application, including routes, port configuration, input parsing, vectors loading, predictions, and output generation. openai/start.sh A script executed after deployment (as specified in the ARM template startup command I will introduce it later). It sets up the virtual environment and starts the Flask application to handle web requests. ARM Template We need to create the following resources or services: Manual Creation Required Resource/Service App Service Plan No Resource (plan) App Service Yes Resource (app) Storage Account Yes Resource (storageAccount) File Share Yes Service Let’s take a look at the openai/tools/arm-template.json file. Refer to the configuration section for all the resources. Since most of the configuration values don’t require changes, I’ve placed them in the variables section of the ARM template rather than the parameters section. This helps keep the configuration simpler. However, I’d still like to briefly explain some of the more critical settings. As you can see, I’ve adopted a camelCase naming convention, which combines the [Resource Type] with [Setting Name and Hierarchy]. This makes it easier to understand where each setting will be used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity. 
Configuration Name Value Purpose storageAccountFileShareName data-and-model [Purpose 1: Link File Share to Web App] Use this fixed name for File Share storageAccountFileShareShareQuota 5120 [Purpose 1: Link File Share to Web App] The value is in GB storageAccountFileShareEnabledProtocols SMB [Purpose 1: Link File Share to Web App] appSiteConfigAzureStorageAccountsType AzureFiles [Purpose 1: Link File Share to Web App] appSiteConfigAzureStorageAccountsProtocol Smb [Purpose 1: Link File Share to Web App] planKind linux [Purpose 2: Specify platform and stack runtime] Select Linux (default if Python stack is chosen) planSkuTier Premium0V3 [Purpose 2: Specify platform and stack runtime] Choose at least Premium Plan to ensure enough memory for your AI workloads planSkuName P0v3 [Purpose 2: Specify platform and stack runtime] Same as above appKind app,linux [Purpose 2: Specify platform and stack runtime] Same as above appSiteConfigLinuxFxVersion PYTHON|3.9 [Purpose 2: Specify platform and stack runtime] Select Python 3.9 to avoid dependency issues appSiteConfigAppSettingsWEBSITES_CONTAINER_START_TIME_LIMIT 600 [Purpose 3: Deploying] The value is in seconds, ensuring the Startup Command can continue execution beyond the default timeout of 230 seconds. This tutorial’s Startup Command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages) appSiteConfigAppCommandLine [ -f /home/site/wwwroot/start.sh ] && bash /home/site/wwwroot/start.sh || GUNICORN_CMD_ARGS=\"--timeout 600 --access-logfile '-' --error-logfile '-' -c /opt/startup/gunicorn.conf.py --chdir=/opt/defaultsite\" gunicorn application:app [Purpose 3: Deploying] This is the Startup Command, which can be break down into 3 parts: First (-f /home/site/wwwroot/start.sh): Checks whether start.sh exists. This is used to determine whether the app is in its initial state (just created) or has already been deployed. Second (bash /home/site/wwwroot/start.sh): If the file exists, it means the app has already been deployed. The start.sh script will be executed, which installs the necessary packages and starts the Flask application. Third (GUNICORN_CMD_ARGS=\"--timeout 600 --access-logfile '-' --error-logfile '-' -c /opt/startup/gunicorn.conf.py --chdir=/opt/defaultsite\" gunicorn application:app): If the file does not exist, the command falls back to the default HTTP server (gunicorn) to start the web app. Since the command is enclosed in double quotes within the ARM template, during actual execution, replace \" with " appSiteConfigAppSettingsSCM_DO_BUILD_DURING_DEPLOYMENT false [Purpose 3: Deploying] Since we have already defined the handling for different virtual environments in start.sh, we do not need to initiate the default build process of the Web App appSiteConfigAppSettingsWEBSITES_ENABLE_APP_SERVICE_STORAGE true [Purpose 4: Webjobs] This setting is required to enable the App Service storage feature, which is necessary for using web jobs (e.g., for model training) storageAccountPropertiesAllowSharedKeyAccess true [Purpose 5: Troubleshooting] This setting is enabled by default. The reason for highlighting it is that certain enterprise IT policies may enforce changes to this configuration after a period, potentially causing a series of issues. For more details, please refer to the Troubleshooting section below. Return to bash terminal and execute the following commands (their purpose has been described earlier). 
# Please change <ResourceGroupName> to your prefer name, for example: azure-appservice-ai # Please change <RegionName> to your prefer region, for example: eastus2 # Please change <ResourcesPrefixName> to your prefer naming pattern, for example: openai-arm (it will create openai-arm-asp as App Service Plan, openai-arm-app for web app, and openaiarmsa for Storage Account) az group create --name <ResourceGroupName> --location <RegionName> az deployment group create --resource-group <ResourceGroupName> --template-file ./openai/tools/arm-template.json --parameters resourcePrefix=<ResourcesPrefixName> If you are using a Windows platform, use the following alternative PowerShell commands instead: # Please change <ResourceGroupName> to your prefer name, for example: azure-appservice-ai # Please change <RegionName> to your prefer region, for example: eastus2 # Please change <ResourcesPrefixName> to your prefer naming pattern, for example: openai-arm (it will create openai-arm-asp as App Service Plan, openai-arm-app for web app, and openaiarmsa for Storage Account) az group create --name <ResourceGroupName> --location <RegionName> az deployment group create --resource-group <ResourceGroupName> --template-file .\openai\tools\arm-template.json --parameters resourcePrefix=<ResourcesPrefixName> After execution, please copy the output section containing 3 key-value pairs from the result like this. Return to bash terminal and execute the following commands: # Please setup 3 variables you've got from the previous step OUTPUT_STORAGE_NAME="<outputStorageName>" OUTPUT_STORAGE_KEY="<outputStorageKey>" OUTPUT_SHARE_NAME="<outputShareName>" sudo mkdir -p /mnt/$OUTPUT_SHARE_NAME if [ ! -d "/etc/smbcredentials" ]; then sudo mkdir /etc/smbcredentials fi CREDENTIALS_FILE="/etc/smbcredentials/$OUTPUT_STORAGE_NAME.cred" if [ ! -f "$CREDENTIALS_FILE" ]; then sudo bash -c "echo \"username=$OUTPUT_STORAGE_NAME\" >> $CREDENTIALS_FILE" sudo bash -c "echo \"password=$OUTPUT_STORAGE_KEY\" >> $CREDENTIALS_FILE" fi sudo chmod 600 $CREDENTIALS_FILE sudo bash -c "echo \"//$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME cifs nofail,credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30\" >> /etc/fstab" sudo mount -t cifs //$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME -o credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30 Or you could simply go to Azure Portal, navigate to the File Share you just created, and refer to the diagram below to copy the required command. You can choose Windows or Mac if you are using such OS in your dev environment. After executing the command, the network drive will be successfully mounted. You can use df to verify, as illustrated in the diagram. ARM Template From Azure Portal In addition to using az cli to invoke ARM Templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure Portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example. Click Me After filling in all the required information, click Create. Once the creation process is complete, click Outputs on the left menu to retrieve the connection information for the File Share. 4. Running Locally Training Models and Training Data In the next steps, you will need to use OpenAI services. 
Please ensure that you have registered as a member and added credits to your account (Billing overview - OpenAI API). For this example, adding $10 USD will be sufficient. Additionally, you will need to generate a new API key (API keys - OpenAI API), you may choose to create a project as well for future project organization, depending on your needs (Projects - OpenAI API). After getting the API key, create a text file named apikey.txt in the openai/tools/ folder. Paste the key you just copied into the file and save it. Return to bash terminal and execute the following commands (their purpose has been described earlier). source .venv/openai-webjob/bin/activate bash ./openai/tools/create-folder.sh bash ./openai/tools/download-sample-training-set.sh python ./openai/webjob/cal_embeddings.py --sampling_ratio 0.002 If you are using a Windows platform, use the following alternative PowerShell commands instead: .\.venv\openai-webjob\Scripts\Activate.ps1 .\openai\tools\create-folder.cmd .\openai\tools\download-sample-training-set.cmd python .\openai\webjob\cal_embeddings.py --sampling_ratio 0.002 After execution, the File Share will now include the following directories and files. Let’s take a brief detour to examine the structure of the training data downloaded from the GitHub. The right side of the image explains each field of the data. This dataset was originally used to detect whether news headlines contain sarcasm. However, I am repurposing it for another application. In this example, I will use the "headline" field to create embeddings. The left side displays the raw data, where each line is a standalone JSON string containing the necessary fields. In the code, I first extract the "headline" field from each record and send it to OpenAI to compute the embedding vector for the text. This embedding represents the position of the text in a semantic space (akin to coordinates in a multi-dimensional space). After the computation, I obtain an embedding vector for each headline. Moving forward, I will refer to these simply as embeddings. By the way, the sampling_ratio parameter in the command is something I configured to speed up the training process. The original dataset contains nearly 30,000 records, which would result in a training time of around 8 hours. To simplify the tutorial, you can specify a relatively low sampling_ratio value (ranging from 0 to 1, representing 0% to 100% sampling from the original records). For example, a value of 0.01 corresponds to a 1% sample, allowing you to accelerate the experiment. In this semantic space, vectors that are closer to each other often have similar values, which corresponds to similar meanings. In this context, the distance between vectors will serve as our metric to evaluate the semantic similarity between pieces of text. For this, we will use a method called cosine similarity. In the subsequent tutorial, we will construct some test texts. These test texts will also be converted into embeddings using the same method. Each test embedding will then be compared against the previously computed headline embeddings. The comparison will identify the nearest headline embeddings in the multi-dimensional vector space, and their original text will be returned. Additionally, we will leverage OpenAI's well-known generative AI capabilities to provide a textual explanation. This explanation will describe why the constructed test text is related to the recommended headline. Predicting with the Model Return to terminal and execute the following commands. 
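Before running those commands, it may help to see the idea from the preceding paragraphs in code form. The sketch below is illustrative only — the dataset path, the embedding model name, and the output format are assumptions rather than the project's actual code, which lives in openai/webjob/cal_embeddings.py and openai/api/app.py:

```python
# Minimal sketch (not the repository code): compute embeddings for headlines,
# then rank them against a query by cosine similarity. Paths, the model name,
# and the return format are illustrative assumptions.
import json
import numpy as np
from openai import OpenAI  # assumes the openai>=1.0 SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts, model="text-embedding-3-small"):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model=model, input=texts)
    return [np.array(item.embedding) for item in response.data]

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Training": extract the headline field from each JSON line and embed it.
# For brevity this embeds everything in one call; the real web job samples
# and batches the records.
headlines = []
with open("train/Sarcasm_Headlines_Dataset.json") as f:  # path is an assumption
    for line in f:
        headlines.append(json.loads(line)["headline"])
headline_vectors = embed(headlines)

# "Prediction": embed the query text and return the most similar headlines.
def detect(text, top_n=3):
    query_vector = embed([text])[0]
    scores = [cosine_similarity(query_vector, v) for v in headline_vectors]
    best = np.argsort(scores)[::-1][:top_n]
    return [{"headline": headlines[i], "similarity": scores[i]} for i in best]

print(detect("education"))
```

In the real project, the embedding step runs as the WebJob and saves its vectors to the model directory of the File Share, while app.py loads those vectors and exposes the same ranking logic (plus the GPT-generated explanation) behind the /api/detect route.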
First, deactivate the virtual environment used for calculating the embeddings, then activate the virtual environment for the Flask application, and finally, start the Flask app. Commands for Linux or Mac: deactivate source .venv/openai/bin/activate python ./openai/api/app.py Commands for Windows: deactivate .\.venv\openai\Scripts\Activate.ps1 python .\openai\api\app.py When you see a screen similar to the following, it means the server has started successfully. Press Ctrl+C to stop the server if needed. Before conducting the actual test, let’s construct some sample query data: education Next, open a terminal and use the following curl commands to send requests to the app: curl -X GET http://127.0.0.1:8000/api/detect?text=education You should see the calculation results, confirming that the embeddings and Gen AI is working as expected. PS: Your results may differ from mine due to variations in the sampling of your training dataset compared to mine. Additionally, OpenAI's generative content can produce different outputs depending on the timing and context. Please keep this in mind. 5. Publishing the Project to Azure Return to terminal and execute the following commands. Commands for Linux or Mac: # Please change <resourcegroup_name> and <webapp_name> to your own # Create the Zip file from project zip -r openai/app.zip openai/* # Deploy the App az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai/app.zip --type zip # Delete the Zip file rm openai/app.zip Commands for Windows: # Please change <resourcegroup_name> and <webapp_name> to your own # Create the Zip file from project Compress-Archive -Path openai\* -DestinationPath openai\app.zip # Deploy the App az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai\app.zip --type zip # Delete the Zip file del openai\app.zip PS: WebJobs follow the directory structure of App_Data/jobs/triggered/<webjob_name>/. As a result, once the Web App is deployed, the WebJob is automatically deployed along with it, requiring no additional configuration. 6. Running on Azure Web App Training the Model Return to terminal and execute the following commands to invoke the WebJobs. Commands for Linux or Mac: # Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" Commands for Windows: # Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own $token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}' You could see the training status by execute the following commands. 
Commands for Linux or Mac: # Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq Commands for Windows: # Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own $token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10 Processing Complete And you can get the latest detail log by execute the following commands. Commands for Linux or Mac: # Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; history_id=$(az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | sed 's|.*/history/||') ; response=$(curl -X GET -H "Authorization: Bearer $token" -H "Content-Type: application/json" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01") ; log_url=$(echo "$response" | jq -r '.properties.output_url') ; curl -X GET -H "Authorization: Bearer $token" "$log_url" Commands for Windows: # Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own $token = az account get-access-token --resource https://management.azure.com --query accessToken -o tsv ; $history_id = az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | ForEach-Object { ($_ -split "/history/")[-1] } ; $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01" -Headers @{ Authorization = "Bearer $token" } -Method GET ; $log_url = $response.properties.output_url ; Invoke-RestMethod -Uri $log_url -Headers @{ Authorization = "Bearer $token" } -Method GET Once you see the report in the Logs, it indicates that the embeddings calculation is complete, and the Flask app is ready for predictions. You can also find the newly calculated embeddings in the File Share mounted in your local environment. Using the Model for Prediction Just like in local testing, open a bash terminal and use the following curl commands to send requests to the app: # Please change <webapp_name> to your own curl -X GET https://<webapp_name>.azurewebsites.net/api/detect?text=education As with the local environment, you should see the expected results. 7. Troubleshooting Startup Command Issue Symptom: Without any code changes and when the app was previously functioning, updating the Startup Command causes the app to stop working. 
The related default_docker.log shows multiple attempts to run the container without errors in a short time, but the container does not respond on port 8000 as seen in docker.log. Cause: Since Linux Web Apps actually run in containers, the final command in the Startup Command must function similarly to the CMD instruction in a Dockerfile. CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"] This command must ensure it runs in the foreground (i.e., not in daemon mode) and cannot exit the process unless manually interrupted. Resolution: Check the final command in the Startup Command to ensure it does not include a daemon execution mode. Alternatively, use the Web SSH interface to execute and verify these commands directly. App Becomes Unresponsive After a Period Symptom: An app that runs normally becomes unresponsive after some time. Both the front-end webpage and the Kudu page display an "Application Error," and the deployment log shows "Too many requests." Additionally, the local environment cannot connect to the associated File Share. Cause: Clicking on "diagnostic resources" in the initial error screen provides more detailed error information. In this example, the issue is caused by internal enterprise Policies or Automations (e.g., enterprise applications) that periodically or randomly scan storage account settings created by employees. If the settings are deemed non-compliant with security standards, they are automatically adjusted. For instance, the allowSharedKeyAccess parameter may be forcibly set to false, preventing both the Web App and the local development environment from connecting to the File Share under the Storage Account. Modification history for such settings can be checked via the Activity Log of the Storage Account (note that only the last 90 days of data are retained). Resolution: The proper approach is to work offline with the enterprise IT team to coordinate and request the necessary permissions. As a temporary workaround, modify the affected settings to Enable during testing periods and revert them to Disabled afterward. You can find the setting for allowSharedKeyAccess here. Note: Azure Storage Mount currently does not support access via Managed Identity. az cli command for Linux webjobs fail Symptom: Got "Operation returned an invalid status 'Unauthorized'" message from different platforms even in Azure CloudShell with latest az version Cause: After using "--debug --verbose" from the command I can see the actual error occurred on which REST API, for example, I'm using this command (az webapp webjob triggered): az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app --debug --verbose Which represent that the operation has invoked under this API: /Microsoft.Web/sites/{app_name}/triggeredwebjobs (Web Apps - List Triggered Web Jobs) After I directly test that API from the official doc, I still get such the error, which means this preview feature is still under construction, and we cannot use it currently. Resolution: I found a related API endpoint via Azure Portal: /Microsoft.Web/sites/{app_name}/webjobs (Web Apps - List Web Jobs) After I directly test that API from the official doc, I can get the trigger list now. So I have modified the original command: az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app To the following command (please note the differences between Linux/Mac and Windows commands). 
Make sure to replace <subscription_id>, <resourcegroup_name>, and <webapp_name> with your specific values.
Commands for Linux or Mac: token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq
Commands for Windows: $token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10
For the "run" command, which invokes the same problematic API, I modified the operation in the same way.
Commands for Linux or Mac: token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"
Commands for Windows: $token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'
Others Using Scikit-learn on Azure Web App
8. Conclusion Beyond simple embedding vector calculations, OpenAI's most notable strength is generative AI. You can provide instructions to the GPT model through natural language (as a prompt), clearly specifying the format you need in the instruction. You can then parse the returned content easily. While PaaS products are not ideal for heavy vector calculations, they are well-suited for acting as intermediaries to forward commands to generative AI. These outputs can even be used for various applications, such as patent infringement detection, plagiarism detection in research papers, or trending news analysis. I believe that in the future, we will see more similar applications on Azure Web Apps.
9. References Overview - OpenAI API News-Headlines-Dataset-For-Sarcasm-Detection Quickstart: Deploy a Python (Django, Flask, or FastAPI) web app to Azure - Azure App Service Configure a custom startup file for Python apps on Azure App Service on Linux - Python on Azure Mount Azure Storage as a local share - Azure App Service Deploy to Azure button - Azure Resource Manager Using Scikit-learn on Azure Web App
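As a footnote to the conclusion above — prompting the GPT model for a fixed output format and then parsing the reply — here is a small, hedged sketch; the model name and prompt wording are assumptions and not the project's actual code:

```python
# Hedged sketch of "pin down the output format in the prompt, then parse it".
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_match(query: str, headline: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works; this choice is an assumption
        messages=[
            {"role": "system",
             "content": "Reply ONLY with JSON in the form "
                        '{"reason": "<one sentence>"}.'},
            {"role": "user",
             "content": f"Why is the text '{query}' related to the headline '{headline}'?"},
        ],
    )
    # Because the format was specified in the instruction, parsing is trivial.
    return json.loads(response.choices[0].message.content)

print(explain_match("education", "school budgets cut again this year"))
```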
PowerShell script to delete all Containers from a Storage Account
After moving the boot diagnostics (BootDiag) settings out of the custom Storage Account, the original Storage Account is still consuming space for nothing. Cleaning it up is part of the standard housekeeping that should be considered in your FinOps plan. This script will help you clean these Storage Accounts quickly and avoid paying for unused storage.
Connect-AzAccount
# Your subscription
$MySubscriptionToClean = "MyGuid-MyGuid-MyGuid-MyGuid-MyGuid"
$MyStorageAccountName = "MyStorageAccountForbootdiags"
$MyStorageAccountKey = "MySAKeyWithAllCodeProvidedByYourStorageAccountSetting+MZ3cUvdQ=="
$ContainerStartName = "bootdiag*"
# Set the subscription context
Set-AzContext -Subscription $MySubscriptionToClean
Get-AzContext
$ctx = New-AzStorageContext -StorageAccountName $MyStorageAccountName -StorageAccountKey $MyStorageAccountKey
$myContainers = Get-AzStorageContainer -Name $ContainerStartName -Context $ctx -MaxCount 1000
foreach($mycontainer in $myContainers) {
    Remove-AzStorageContainer -Name $mycontainer.Name -Force -Context $ctx
}
I used this script to remove millions of BootDiag containers from several Storage Accounts. You can also use it for any other cleanup use case if you need it. Fab
Announcing General Availability of Terraform Azure Verified Modules for Platform Landing Zone (ALZ)
Azure Verified Modules ALZ ❤️ AVM. We are moving to a more modular approach to deploying your platform landing zones. In line with consistent feedback from you, we have now released a set of modules that together will deploy your platform landing zone architecture (ALZ). Azure Verified Modules for Platform Landing Zones (ALZ) is collection of Azure Verified Modules that are composed together to create your Platform Landing Zone. This replaces the existing CAF Enterprise Scale module that you may already be familiar with. The core Azure Verified Modules that are composed together are: Management Groups and Policy Pattern Module: avm-ptn-alz Management Resources Pattern Module: avm-ptn-management-alz Hub Virtual Networking Pattern Module: avm-ptn-hubnetworking Virtual Network Gateway Pattern Module: avm-ptn-vnetgateway Virtual WAN Networking Pattern Module: avm-ptn-virtualwan Private DNS Zone for Private Link Pattern Module: avm-ptn-network-private-link-private-dns-zones This means that you can now choose your own adventure by selecting only the modules that you need. It also means we can add new features faster and allows us the opportunity to do more rigorous testing of each module. To improve deployment reliability, we now use our own Terraform provider. The provider generates data for use by the module and does not directly deploy any resources. The move to a provider allows us to add many more features and checks to improve your deployment reliability. ALZ IaC Accelerator updates for Terraform The Azure Landing Zones IaC Accelerator is our recommended approach for deploying the Terraform Azure Verified Modules for Platform Landing Zone (ALZ). The Azure Verified Modules for Platform Landing Zone is now our default selection for the Terraform ALZ IaC Accelerator. This module will be the focus of our development and improvement efforts moving forward. The module implements best practices by default, including multi-region and availability zones for resiliency. The ALZ IaC Accelerator bootstrap continues to implement best practices, such as version control and Workload identity federation security. Along with supporting the Azure Verified Modules for Platform Landing Zone (ALZ) approach, we have also made many enhancements to the ALZ IaC Accelerator process. A summary of the improvements include: We now support HCL (HashiCorp Configuration language) tfvars file as the platform landing zone configuration file format We have introduced a Phase 0 to help you plan for your ALZ IaC Accelerator deployment We have introduced the concepts of Scenarios and Options to simplify the decisions you need to make Platform landing zone configuration file Before the introduction of the Azure Verified Modules for Platform Landing Zone (ALZ) starter module the platform landing zone configuration file was supplied in YAML format. Due to the lack of support for YAML in Terraform, we had to then convert this to JSON. Once converted to JSON the configuration file lost all it's ordering, formatting and comments. This made day 2 updates to the configuration very cumbersome. With the support for the tfvars file (in HashiCorp Configuration Language format), we are now able to pass the configuration file in its original format to the version control system repository. This makes for a much easier day 2 update process as the file retains it's ordering, comments and formatting as defined by you. Phase 0 Phase 0 is a new planning phase we have added to the documentation. 
This phase takes you through 3 sets of decisions you need to make about the ALZ IaC Accelerator deployment: Bootstrap decisions Platform Landing Zone Scenarios Platform Landing Zone Options In order to assist with this, we also provide a downloadable Excel checklist , which lists all the decisions so you can consider them up front prior to completing any configuration file updates. Phase 0 guides you through this process and provides explanations of the decisions. The Bootstrap decisions relate to the resources deployed to Azure and the configuration of your Version Control System required for the Continuous Delivery pipeline. These decisions are not new to the ALZ IaC Accelerator, but we now provide more structured guidance. Platform Landing Zone Scenarios The Scenarios are a new concept introduced for the Azure Verified Modules for Platform Landing Zone (ALZ) starter module. We aim to cover the most common Platform landing zone use cases we hear requested from partners and customers with the ALZ IaC Accelerator. These include: Multi-Region Hub and Spoke Virtual Network with Azure Firewall Multi-Region Virtual WAN with Azure Firewall Multi-Region Hub and Spoke Virtual Network with Network Virtual Appliance (NVA) Multi-Region Virtual WAN with Network Virtual Appliance (NVA) Management Groups, Policy and Management Resources Only Single-Region Hub and Spoke Virtual Network with Azure Firewall Single-Region Virtual WAN with Azure Firewall For each scenario we provide an example Platform landing zone configuration file that is ready to deploy immediately. We know that customers will want to modify some of the settings and that is where Options come in. NOTE: At the time this blog post was published, we support the 7 Scenarios listed above. We may update or add to these Scenarios based on feedback we hear from you, so keep an eye on our documentation. Platform Landing Zone Options The Options build on the Scenarios. For each Scenario, you can choose to customise it with one or more Options. Each Options includes detailed instructions of how to update the Platform landing zone configuration file or introduce library files to implement to the option. The Options are: Customise Resource Names Customize Management Group Names and IDs Turn off DDOS protection plan Turn off Bastion host Turn off Private DNS zones and Private DNS resolver Turn off Virtual Network Gateways Additional Regions IP Address Ranges Change a policy assignment enforcement mode Remove a policy assignment Turn off Azure Monitoring Agent Deploy Azure Monitoring Baseline Alerts (AMBA) Turn off Defender Plans Implement Zero Trust Networking NOTE: At the time this blog post was published, we support the 14 Options listed above. We may update or add to these Options based on feedback we hear from you, so keep an eye on our documentation. Azure Landing Zones Library Another new offering is the Azure Landing Zones Library. This is an evolution of the library concept in the caf-enterprise-scale module. Principally, the Library allows us to decouple the update cycle of the ALZ architecture, from the module and provider. We are separating the data from the deployment logic. This allows you to update the module to take advantage of a bug fix, without having to change the policies that are deployed. Something that wasn't easily possible before. Conversely, you are able to update to the latest policy refresh of Azure Landing Zones without updating the module itself. The Library has its own documentation site, which introduces the concepts. 
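To make the modular, "choose your own adventure" composition described earlier more concrete, here is a heavily trimmed Terraform sketch of referencing one of the pattern modules from your own configuration. It is illustrative only — the ALZ IaC Accelerator generates the full, supported configuration for you, and the input names shown here are assumptions that may differ between module versions:

```hcl
# Illustrative sketch only - treat the inputs as assumptions and check the
# module documentation for the version you deploy.
terraform {
  required_providers {
    alz = {
      source = "Azure/alz" # the data-generating provider mentioned above
    }
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_client_config" "current" {}

# Management groups and policy, composed from the avm-ptn-alz pattern module.
module "management_groups_and_policy" {
  source  = "Azure/avm-ptn-alz/azurerm"
  version = "~> 0.10" # pin to the version you have tested

  architecture_name  = "alz"     # illustrative input name
  location           = "eastus2" # illustrative input name
  parent_resource_id = data.azurerm_client_config.current.tenant_id
}
```

The other pattern modules (hub networking, Virtual WAN, management resources, and so on) are added as further module blocks in the same way.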
We plan to make the library the single source of truth for all Azure Landing Zones implementation options (e.g. Portal, Terraform and Bicep) in the future. Azure Landing Zones Documentation Site Furthermore, we have a new place to go for all technical documentation for Azure Verified Modules for Platform Landing Zones (ALZ). With the move to multiple modules, and the new accelerator all having multiple GitHub repositories, we felt the need to centralize the documentation to make it the one place to go to get technical details. We currently have documentation for the Accelerator and Terraform, with Bicep coming soon. The new vanity URL is: https://aka.ms/alz/tech-docs. Please let us know what you think! What about ALZ-Bicep? Finally, some of you may be wondering what the future for our Bicep implementation option (ALZ Bicep) for Azure Verified Modules for Platform Landing Zones (ALZ) may be with this evolution on the Terraform side. And we have good news to share! Work is underway to also build the next version of ALZ in Bicep, which will be known as “Bicep Azure Verified Modules for Platform Landing Zone (ALZ)”. This will also use the new Azure Landing Zones Library and be built from Azure Verified Modules (where appropriate). We are currently looking to complete this work before August 2025, if not a lot sooner than this; as we are making good progress as we speak! But for now, for Bicep you do not do anything and continue to use ALZ Bicep via the ALZ IaC Accelerator and we will provide more updates on the next version of Bicep ALZ in the coming months! Staying up-to-date We highly recommend joining, or watching back, our quarterly Azure Landing Zones Community Calls, to get all the latest and greatest from the ALZ team. Our next one is on the 29th January 2025 and you can find the link to sign up to attend or watch back previous ones at: aka.ms/ALZ/Community We look forward to seeing you all there soon!6KViews11likes0CommentsConnection Between Web App and O365 Resources: Using SharePoint as an Example
TOC Introduction [not recommended] Application permission [not recommended] Delegate permission with Device Code Flow Managed Identity Multi-Tenant App Registration Restrict Resources for Application permission References Introduction In late 2024, Microsoft Entra enforced MFA (Multi-Factor Authentication) for all user login processes. This change has caused some Web Apps using delegated permissions to fail in acquiring access tokens, thereby interrupting communication with O365 resources. This tutorial will present various alternative solutions tailored to different business requirements. We will use a Linux Python Web App as an example in the following sections. [not recommended] Application permission Traditionally, using delegated permissions has the advantages of being convenient, quick, and straightforward, without being limited by whether the Web App and Target resources (e.g., SharePoint) are in the same tenant. This is because it leverages the user identity in the SharePoint tenant as the login user. However, its drawbacks are quite evident—it is not secure. Delegated permissions are not designed for automated processes (i.e., Web Apps), and if the associated connection string (i.e., app secret) is obtained by a malicious user, it can be directly exploited. Against this backdrop, Microsoft Entra enforced MFA for all user login processes in late 2024. Since delegated permissions rely on user-based authentication, they are also impacted. Specifically, if your automated processes originally used delegated permissions to interact with other resources, they are likely to be interrupted by errors similar to the following in recent times. AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '00000003-0000-0000-c000-000000000000' The root cause lies in the choice of permission type. While delegated permissions can technically be used for automated processes, there is a more appropriate option—application permissions, which are specifically designed for use in automated workflows. Therefore, when facing such issues, the quickest solution is to create a set of application permissions, align their settings with your previous delegated permissions, and then update your code to use the new app ID and secret to interact with the target resource. This method resolves the issue caused by the mandatory MFA process interruption. However, it is still not entirely secure, as the app secret, if obtained by a malicious user, can be exploited directly. Nonetheless, it serves as a temporary solution while planning for a large-scale modification or refactor of your existing system. [not recommended] Delegate permission with Device Code Flow Similarly, here's another temporary solution. The advantage of this approach is that you don't even need to create a new set of application permissions. Instead, you can retain the existing delegated permissions and resolve the issue by integrating Device Code Flow. Let's see how this can be achieved. First, navigate to Microsoft Entra > App Registration > Your Application > Authentication, and enable "Allow public client flows". Next, modify your code to implement the following method to acquire the token. Replace [YOUR_TENANT_ID] and [YOUR_APPLICATION_ID] with your own values. 
import os, atexit, msal, sys def get_access_token_device(): cache_filename = os.path.join( os.getenv("XDG_RUNTIME_DIR", ""), "my_cache.bin" ) cache = msal.SerializableTokenCache() if os.path.exists(cache_filename): cache.deserialize(open(cache_filename, "r").read()) atexit.register(lambda: open(cache_filename, "w").write(cache.serialize()) if cache.has_state_changed else None ) config = { "authority": "https://login.microsoftonline.com/[YOUR_TENANT_ID]", "client_id": "[YOUR_APPLICATIOM_ID]", "scope": ["https://graph.microsoft.com/.default"] } app = msal.PublicClientApplication( config["client_id"], authority=config["authority"], token_cache=cache, ) result = None accounts = app.get_accounts() if accounts: print("Pick the account you want to use to proceed:") for a in accounts: print(a["username"]) chosen = accounts[0] result = app.acquire_token_silent(["User.Read"], account=chosen) if not result: flow = app.initiate_device_flow(scopes=config["scope"]) print(flow["message"]) sys.stdout.flush() result = app.acquire_token_by_device_flow(flow) if "access_token" in result: access_token = result["access_token"] return access_token else: error = result.get("error") if error == "invalid_client": print("Invalid client ID.Please check your Azure AD application configuration") else: print(error) Demonstrating the Process Before acquiring the token for the first time, there is no cache file named my_cache.bin in your project directory. Start the test code, which includes obtaining the token and interacting with the corresponding service (e.g., SharePoint) using the token. Since this is the first use, the system will prompt you to manually visit https://microsoft.com/devicelogin and input the provided code. Once the manual process is complete, the system will obtain the token and execute the workflow. After acquiring the token, the cache file my_cache.bin will appear in your project directory. This file contains the access_token and refresh_token. For subsequent processes, whether triggered manually or automatically, the system will no longer prompt for manual login. The cached token has a validity period of approximately one hour, which may seem insufficient. However, the acquire_token_silent function in the program will automatically use the refresh token to renew the access token and update the cache. Therefore, as long as an internal script or web job is triggered at least once every hour, the token can theoretically be used continuously. Managed Identity Using Managed Identity to enable interaction between an Azure Web App and other resources is currently the best solution. It ensures that no sensitive information (e.g., app secrets) is included in the code and guarantees that only the current Web App can use this authentication method. Therefore, it meets both convenience and security requirements for production environments. Let’s take a detailed look at how to set it up. Step 1: Setup Managed Identity You will get an Object ID for further use. Step 2: Enterprise Application for Managed Identity Your Managed Identity will generate a corresponding Enterprise Application in Microsoft Entra. However, unlike App Registration, where permissions can be assigned directly via the Azure Portal, Enterprise Application permissions must be configured through commands. Step 3: Log in to Azure via CloudShell Use your account to access Azure Portal, open a CloudShell, and input the following command. 
This step will require you to log in with your credentials using the displayed code: Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All" Continue by inputting the following command to target the Enterprise Application corresponding to your Managed Identity that requires permission assignment: $PrincipalId = "<Your web app managed identity object id>" $ResourceId = (Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" | Select-Object -ExpandProperty Id) Step 4: Assign Permissions to the Enterprise Application Execute the following commands to assign permissions. Key Points: This example assigns all permissions with the prefix Sites.*. However, you can modify this to request only the necessary permissions, such as: Sites.Selected Sites.Read.All Sites.ReadWrite.All Sites.Manage.All Sites.FullControl.All If you do not wish to assign all permissions, you can change { $_.Value -like "*Sites.*" } to the specific permission you need, for example: { $_.Value -like "*Sites.Selected*" } Each time you modify the permission, you will need to rerun all the commands below. $AppRoles = Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'" -Property AppRoles | Select -ExpandProperty AppRoles | Where-Object { $_.Value -like "*Sites.*" } $AppRoles | ForEach-Object { $params = @{ "PrincipalId" = $PrincipalId "ResourceId" = $ResourceId "AppRoleId" = $_.Id } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $PrincipalId -BodyParameter $params } Step 5: Confirm Assigned Permissions If, in Azure Portal, you see a screen similar to this: (Include screenshot or example text for granted permissions) This means that the necessary permissions have been successfully assigned. Step 6: Retrieve a Token in Python In your Python code, you can use the following approach to retrieve the token: from azure.identity import ManagedIdentityCredential def get_access_token(): credential = ManagedIdentityCredential() token = credential.get_token("https://graph.microsoft.com/.default") return token.token Important Notes: When permissions are assigned or removed in the Enterprise Application, the ManagedIdentityCredential in your Python code caches the token for a while. These changes will not take effect immediately. You need to restart your application and wait approximately 10 minutes for the changes to take effect. Step 7: Perform Operations with the Token Finally, you can use this token to perform the desired operations. Below is an example of creating a file in SharePoint: You will notice that the uploader’s identity is no longer a person but instead the app itself, indicating that Managed Identity is indeed in effect and functioning properly. While this method is effective, it is limited by the inability of Managed Identity to handle cross-tenant resource requests. I will introduce one final method to resolve this limitation. Multi-Tenant App Registration In many business scenarios, resources are distributed across different tenants. For example, SharePoint is managed by Company (Tenant) B, while the Web App is developed by Company (Tenant) A. Since these resources belong to different tenants, Managed Identity cannot be used in such cases. Instead, we need to use a Multi-Tenant Application to resolve the issue. The principle of this approach is to utilize an Entra ID Application created by the administrator of Tenant A (i.e., the tenant that owns the Web App) that allows cross-tenant use. 
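(Referring back to Step 7 above, where creating a file in SharePoint is mentioned but shown only as a screenshot: below is a hedged Python sketch of what that Microsoft Graph call can look like. The site host name, site path, and file name are placeholders, and the same pattern applies whether the token comes from the Managed Identity or from the multi-tenant application described next.)

```python
# Hedged sketch of the "create a file in SharePoint" call referenced in Step 7.
# The site identifier, folder, and file name below are placeholders/assumptions;
# the token comes from get_access_token() shown earlier.
import requests

def upload_file_to_sharepoint(access_token: str):
    site_hostname = "<your-tenant>.sharepoint.com"  # placeholder
    site_path = "/sites/<your-site>"                # placeholder
    file_name = "hello-from-webapp.txt"

    headers = {"Authorization": f"Bearer {access_token}"}

    # Resolve the site ID from its hostname and server-relative path.
    site = requests.get(
        f"https://graph.microsoft.com/v1.0/sites/{site_hostname}:{site_path}",
        headers=headers,
    ).json()

    # Upload (or overwrite) a small file in the default document library.
    response = requests.put(
        f"https://graph.microsoft.com/v1.0/sites/{site['id']}/drive/root:/{file_name}:/content",
        headers={**headers, "Content-Type": "text/plain"},
        data=b"Uploaded by the Web App's managed identity.",
    )
    response.raise_for_status()
    return response.json()
```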
This application will be pre-authorized by future user from Tenant B (i.e., the administrator of the tenant that owns SharePoint) to perform operations related to SharePoint. It should be noted that the entire configuration process requires the participation of administrators from both tenants to a certain extent. Please refer to the following demonstration. This is a sequential tutorial; please note that the execution order cannot be changed. Step 1: Actions Required by the Administrator of the Tenant that Owns the Web App 1.1. In Microsoft Entra, create an Enterprise Application and select "Multi-Tenant." After creation, note down the Application ID. 1.2. In App Registration under AAD, locate the previously created application, generate an App Secret, and record it. 1.3. Still in App Registration, configure the necessary permissions. Choose "Application Permissions", then determine which permissions (all starting with "Sites.") are needed based on the actual operations your Web App will perform on SharePoint. For demonstration purposes, all permissions are selected here. Step 2: Actions Required by the Administrator of the Tenant that Owns SharePoint 2.1. Use the PowerShell interface to log in to Azure and select the tenant where SharePoint resides. az login --allow-no-subscriptions --use-device-code 2.2. Add the Multi-Tenant Application to this tenant. az ad sp create --id <App id get from step 1.1> 2.3. Visit the Azure Portal, go to Enterprise Applications, locate the Multi-Tenant Application added earlier, and navigate to Permissions to grant the permissions specified in step 1.3. Step 3: Actions Required by the Administrator of the Tenant that Owns the Web App 3.1. In your Python code, you can use the following method to obtain an access token: from msal import ConfidentialClientApplication def get_access_token_cross_tenant(): tenant_id = "your-sharepoint-tenant-id" # Tenant ID where the SharePoint resides (i.e., shown in step 2.1) client_id = "your-multi-tenant-app-client-id" # App ID created in step 1.1 client_secret = "your-app-secret" # Secret created in step 1.2 authority = f"https://login.microsoftonline.com/{tenant_id}" app = ConfidentialClientApplication( client_id, authority=authority, client_credential=client_secret ) scopes = ["https://graph.microsoft.com/.default"] token_response = app.acquire_token_for_client(scopes=scopes) return token_response.get("access_token") 3.2. Use this token to perform the required operations. Restrict Resources for Application permission Application permission, allows authorization under the App’s identity, enabling access to all SharePoint sites within the tenant, which could be overly broad for certain use cases. To restrict this permission to access a limited number of SharePoint sites, we need to configure the following settings: Actions Required by the Administrator of the Tenant that Owns SharePoint During the authorization process, only select Sites.Selected. (refer to Step 4 from Managed Identity, and Step 1.3 from Multu-tenant App Registration) Subsequently, configure access separately for different SharePoint sites. During the process, we will create a temporary App Registration to issue an access token, allowing us to assign specific SharePoint sites' read/write permissions to the target Application. Once the permission settings are completed, this App Registration can be deleted. Refer to the above images, we need to note down the App Registration's object ID and tenant ID. 
Additionally, we need to create an app secret and grant it the Application permission Sites.FullControl.All. Once the setup is complete, the token can be obtained using the following command. $tenantId = "<Your_Tenant_ID>" $clientId = "<Your_Temp_AppID>" $clientSecret = "<Your_Temp_App_Secret>" $scope = "https://graph.microsoft.com/.default" $url = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" $body = @{ grant_type = "client_credentials" client_id = $clientId client_secret = $clientSecret scope = $scope } $response = Invoke-RestMethod -Uri $url -Method Post -Body $body $accessToken = $response.access_token Before granting the write permission to the target application, even if the Application Permission already has the Sites.Selected scope, an error will still occur. Checking the current SharePoint site's allowed access list shows that it is empty. $headers = @{ "Authorization" = "Bearer $accessToken" "Content-Type" = "application/json" } Invoke-RestMethod -Method Get -Uri "https://graph.microsoft.com/v1.0/sites/<Your_SharePoint_Site>.sharepoint.com" -Headers $headers Next, we manually add the corresponding Application to the SharePoint site's allowed access list and assign it the write permission. $headers = @{ "Authorization" = "Bearer $accessToken" "Content-Type" = "application/json" } $body = @{ roles = @("write") grantedToIdentities = @( @{ application = @{ id = "<Your_Target_AppID>" displayName = "<Your_Target_AppName>" } } ) grantedToIdentitiesV2 = @( @{ application = @{ id = "<Your_Target_AppID>" displayName = "<Your_Target_AppName>" } } ) } | ConvertTo-Json -Depth 10 Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/sites/<Your_SharePoint_Site>.sharepoint.com/permissions" -Headers $headers -Body $body Rechecking the current SharePoint site's allowed access list confirms the addition. After that, writing files to the site will succeed, and you could delete the temp App Registration. References Microsoft will require MFA for all Azure users Acquire a token to call a web API using device code flow (desktop app) - Microsoft identity platform | Microsoft Learn Implement cross-tenant communication by using multitenant applications - Azure Architecture Center514Views1like0CommentsAzure PowerShell find LastOwnershipUpdateTime on disk
Hello: I'm wondering if it's possible to find LastOwnershipUpdateTime on a disk via PowerShell. I can see this information in the portal, but I cannot figure out how to retrieve it via script (PowerShell). It looks like Microsoft released it recently, but even after updating my Az.Compute module to the latest version (9.0.0) I still do not see it. Any help would be really appreciated. Thank you!
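While waiting for the property to surface in Az.Compute, one workaround is to read the disk resource directly through the ARM REST API. The sketch below is hedged: it assumes the value is exposed as properties.LastOwnershipUpdateTime by a recent Microsoft.Compute disks API version (2024-03-02 is used here as an assumption), and <subId>, <rgName>, and <diskName> are placeholders; adjust the names and API version to what your environment supports.
# Hedged sketch: read LastOwnershipUpdateTime from the ARM REST API with Invoke-AzRestMethod
# <subId>, <rgName>, and <diskName> are placeholders; the api-version and property name are assumptions
$path = "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Compute/disks/<diskName>?api-version=2024-03-02"
$response = Invoke-AzRestMethod -Method GET -Path $path
($response.Content | ConvertFrom-Json).properties.LastOwnershipUpdateTime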
Powershell Script to remove all Blobs from Storage account
With a large number of blobs in a Storage Account, manual cleanup from the Portal is complicated and time-consuming, as it works in sets of 10,000. This script is simple and can be executed in the background to clean all items from a given blob container. You have to specify the Storage Account connection string and the blob container name.
[string]$myConnectionString = "DefaultEndpointsProtocol=https;AccountName=YourStorageAccountName;AccountKey=YourKeyFromStorageAccountConnectionString;EndpointSuffix=core.windows.net"
[string]$ContainerName = "YourBlobContainerName"
[int]$blobCountAfter = 0
[int]$blobCountBefore = 0

$context = New-AzStorageContext -ConnectionString $myConnectionString

$blobCountBefore = (Get-AzStorageBlob -Container $ContainerName -Context $context).Count
Write-Host "Total number of blobs in the container before deletion: $blobCountBefore" -ForegroundColor Yellow

Get-AzStorageBlob -Container $ContainerName -Context $context | ForEach-Object {
    $_ | Remove-AzStorageBlob
    # or: Remove-AzStorageBlob -ICloudBlob $_.ICloudBlob -Context $context
}

$blobCountAfter = (Get-AzStorageBlob -Container $ContainerName -Context $context).Count
Write-Host "Total number of blobs in the container after deletion: $blobCountAfter" -ForegroundColor Green
It was used for a large blob storage container with more than 5 million blob items.
Sources:
https://learn.microsoft.com/en-us/powershell/module/az.storage/new-azstoragecontext?view=azps-13.0.0#examples
https://learn.microsoft.com/en-us/answers/questions/1637785/what-is-the-easiest-way-to-find-the-total-number-o
https://stackoverflow.com/questions/57119087/powershell-remove-all-blobs-in-a-container
Fab
Using Scikit-learn on Azure Web App
TOC Introduction to Scikit-learn System Architecture Architecture Focus of This Tutorial Setup Azure Resources Web App Storage Running Locally File and Directory Structure Training Models and Training Data Predicting with the Model Publishing the Project to Azure Deployment Configuration Running on Azure Web App Training the Model Using the Model for Prediction Troubleshooting Missing Environment Variables After Deployment Virtual Environment Resource Lock Issues Package Version Dependency Issues Default Binding Missing System Commands in Restricted Environments Conclusion References 1. Introduction to Scikit-learn Scikit-learn is a popular open-source Python library for machine learning, built on NumPy, SciPy, and matplotlib. It offers an efficient and easy-to-use toolkit for data analysis, data mining, and predictive modeling. Scikit-learn supports a variety of machine learning algorithms, including classification, regression, clustering, and dimensionality reduction (e.g., SVM, Random Forest, K-means). Its preprocessing utilities handle tasks like scaling, encoding, and missing data imputation. It also provides tools for model evaluation (e.g., accuracy, precision, recall) and pipeline creation, enabling users to chain preprocessing and model training into seamless workflows. 2. System Architecture Architecture Development Environment OS: Windows 11 Version: 24H2 Python Version: 3.7.3 Azure Resources App Service Plan: SKU - Premium Plan 0 V3 App Service: Platform - Linux (Python 3.9, Version 3.9.19) Storage Account: SKU - General Purpose V2 File Share: No backup plan Focus of This Tutorial This tutorial walks you through the following stages: Setting up Azure resources Running the project locally Publishing the project to Azure Running the application on Azure Troubleshooting common issues Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this session is listed in the table below. Local OS Windows Linux Mac V How to setup Azure resources Portal (i.e., REST api) ARM Bicep Terraform V How to deploy project to Azure VSCode CLI Azure DevOps GitHub Action V 3. Setup Azure Resources Web App We need to create the following resources or services: Manual Creation Required Resource/Service App Service Plan No Resource App Service Yes Resource Storage Account Yes Resource File Share Yes Service Go to the Azure Portal and create an App Service. Important configuration: OS: Select Linux (default if Python stack is chosen). Stack: Select Python 3.9 to avoid dependency issues. SKU: Choose at least Premium Plan to ensure enough memory for your AI workloads. Storage Create a Storage Account in the Azure Portal. Create a file share named data-and-model in the Storage Account. Mount the File Share to the App Service: Use the name data-and-model for consistency with tutorial paths. At this point, all Azure resources and services have been successfully created. Let’s take a slight detour and mount the recently created File Share to your Windows development environment. Navigate to the File Share you just created, and refer to the diagram below to copy the required command. Before copying, please ensure that the drive letter remains set to the default "Z" as the sample code in this tutorial will rely on it. Return to your development environment. Open a PowerShell terminal (do not run it as Administrator) and input the command copied in the previous step, as shown in the diagram. 
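For reference, the "Connect" script copied from the Portal typically looks similar to the sketch below; the exact script generated for your account may differ slightly, and <storage-account> and <storage-account-key> are placeholders for your own values.
# Illustrative only: Portal-generated pattern for mounting the file share as drive Z
# <storage-account> and <storage-account-key> are placeholders
cmdkey /add:"<storage-account>.file.core.windows.net" /user:"localhost\<storage-account>" /pass:"<storage-account-key>"
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account>.file.core.windows.net\data-and-model" -Persist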
After executing the command, the network drive will be successfully mounted. You can open File Explorer to verify, as illustrated in the diagram. 4. Running Locally File and Directory Structure Please use VSCode to open a PowerShell terminal and enter the following commands: git clone https://github.com/theringe/azure-appservice-ai.git cd azure-appservice-ai .\scikit-learn\tools\add-venv.cmd If you are using a Linux or Mac platform, use the following alternative commands instead: git clone https://github.com/theringe/azure-appservice-ai.git cd azure-appservice-ai bash ./scikit-learn/tools/add-venv.sh After completing the execution, you should see the following directory structure: File and Path Purpose scikit-learn/tools/add-venv.* The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial. .venv/scikit-learn-webjob/ A virtual environment specifically used for training models. scikit-learn/webjob/requirements.txt The list of packages (with exact versions) required for the scikit-learn-webjob virtual environment. .venv/scikit-learn/ A virtual environment specifically used for the Flask application, enabling API endpoint access for querying predictions. scikit-learn/requirements.txt The list of packages (with exact versions) required for the scikit-learn virtual environment. scikit-learn/ The main folder for this tutorial. scikit-learn/tools/create-folder.* A script to create all directories required for this tutorial in the File Share, including train, model, and test. scikit-learn/tools/download-sample-training-set.* A script to download a sample training set from the UCI Machine Learning Repository, containing heart disease data, into the train directory of the File Share. scikit-learn/webjob/train_heart_disease_model.py A script for training the model. It loads the training set, applies a machine learning algorithm (Logistic Regression), and saves the trained model in the model directory of the File Share. scikit-learn/webjob/train_heart_disease_model.sh A shell script for Azure App Service web jobs. It activates the scikit-learn-webjob virtual environment and starts the train_heart_disease_model.py script. scikit-learn/webjob/train_heart_disease_model.zip A ZIP file containing the shell script for Azure web jobs. It must be recreated manually whenever train_heart_disease_model.sh is modified. Ensure it does not include any directory structure. scikit-learn/api/app.py Code for the Flask application, including routes, port configuration, input parsing, model loading, predictions, and output generation. scikit-learn/.deployment A configuration file for deploying the project to Azure using VSCode. It disables the default Oryx build process in favor of custom scripts. scikit-learn/start.sh A script executed after deployment (as specified in the Portal's startup command). It sets up the virtual environment and starts the Flask application to handle web requests. Training Models and Training Data Return to VSCode and execute the following commands (their purpose has been described earlier). 
.\.venv\scikit-learn-webjob\Scripts\Activate.ps1 .\scikit-learn\tools\create-folder.cmd .\scikit-learn\tools\download-sample-training-set.cmd python .\scikit-learn\webjob\train_heart_disease_model.py If you are using a Linux or Mac platform, use the following alternative commands instead: source .venv/scikit-learn-webjob/bin/activate bash ./scikit-learn/tools/create-folder.sh bash ./scikit-learn/tools/download-sample-training-set.sh python ./scikit-learn/webjob/train_heart_disease_model.py After execution, the File Share will now include the following directories and files. Let’s take a brief detour to examine the structure of the training data downloaded from the public dataset website. The right side of the figure describes the meaning of each column in the dataset, while the left side shows the actual training data (after preprocessing). This is a predictive model that uses an individual’s physiological characteristics to determine the likelihood of having heart disease. Columns 1-13 represent various physiological features and background information of the patients, while Column 14 (originally Column 58) is the label indicating whether the individual has heart disease. The supervised learning process involves using a large dataset containing both features and labels. Machine learning algorithms (such as neural networks, SVMs, or in this case, logistic regression) identify the key features and their ranges that differentiate between labels. The trained model is then saved and can be used in services to predict outcomes in real time by simply providing the necessary features. Predicting with the Model Return to VSCode and execute the following commands. First, deactivate the virtual environment used for training the model, then activate the virtual environment for the Flask application, and finally, start the Flask app. Commands for Windows: deactivate .\.venv\scikit-learn\Scripts\Activate.ps1 python .\scikit-learn\api\app.py Commands for Linux or Mac: deactivate source .venv/scikit-learn/bin/activate python ./scikit-learn/api/app.py When you see a screen similar to the following, it means the server has started successfully. Press Ctrl+C to stop the server if needed. Before conducting the actual test, let’s construct some sample human feature data: [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1] [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1] Referring to the feature description table from earlier, we can see that the only modified field is Column 4 ("Resting Blood Pressure"), with the second sample having an abnormally high value. (Note: Normal resting blood pressure ranges are typically 90–139 mmHg.) Next, open a PowerShell terminal and use the following curl commands to send requests to the app: curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}' curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}' You should see the prediction results, confirming that the trained model is working as expected. 5. Publishing the Project to Azure Deployment In the VSCode interface, right-click on the target App Service where you plan to deploy your project. Manually select the local project folder named scikit-learn as the deployment source, as shown in the image below. Configuration After deployment, the App Service will not be functional yet and will still display the default welcome page. 
This is because the App Service has not been configured to build the virtual environment and start the Flask application. To complete the setup, go to the Azure Portal and navigate to the App Service. The following steps are critical, and their execution order must be correct. To avoid delays, it’s recommended to open two browser tabs beforehand, complete the settings in each, and apply them in sequence. Refer to the following two images for guidance. You need to do the following: Set the Startup Command: Specify the path to the script you deployed bash /home/site/wwwroot/start.sh Set Two App Settings: WEBSITES_CONTAINER_START_TIME_LIMIT=600 The value is in seconds, ensuring the Startup Command can continue execution beyond the default timeout of 230 seconds. This tutorial’s Startup Command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages). WEBSITES_ENABLE_APP_SERVICE_STORAGE=1 This setting is required to enable the App Service storage feature, which is necessary for using web jobs (e.g., for model training). Step-by-Step Process: Before clicking Continue, switch to the next browser tab and set up all the app settings. In the second tab, apply all app settings, then switch back to the first tab. Click Continue in the first tab and wait for several seconds for the operation to complete. Once completed, switch to the second tab and click Continue within 5 seconds. Ensure to click Continue promptly within 5 seconds after the previous step to finish all settings. After completing the configuration, wait for about 10 minutes for the settings to take effect. Then, navigate to the WebJobs section in the Azure Portal and upload the ZIP file mentioned in the earlier sections. Set its trigger type to Manual. At this point, the entire deployment process is complete. For future code updates, you only need to redeploy from VSCode; there is no need to reconfigure settings in the Azure Portal. 6. Running on Azure Web App Training the Model Go to the Azure Portal, locate your App Service, and navigate to the WebJobs section. Click on Start to initiate the job and wait for the results. During this process, you may need to manually refresh the page to check the status of the job execution. Refer to the image below for guidance. Once you see the model report in the Logs, it indicates that the model training is complete, and the Flask app is ready for predictions. You can also find the newly trained model in the File Share mounted in your local environment. Using the Model for Prediction Just like in local testing, open a PowerShell terminal and use the following curl commands to send requests to the app: # Note: Replace both instances of scikit-learn-portal-app with the name of your web app. curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}' curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}' As with the local environment, you should see the expected results. 7. Troubleshooting Missing Environment Variables After Deployment Symptom: Even after setting values in App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT), they do not take effect. 
Cause: App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT, WEBSITES_ENABLE_APP_SERVICE_STORAGE) are reset after updating the startup command.
Resolution: Use Azure CLI or the Azure Portal to reapply the App Settings after deployment. Alternatively, set the startup command first, and then apply app settings.
Virtual Environment Resource Lock Issues
Symptom: The app fails to redeploy, even though no configuration or code changes were made.
Cause: The virtual environment folder cannot be deleted due to active resource locks from the previous process. Files or processes from the previous virtual environment session remain locked.
Resolution: Deactivate processes before deletion and use unique epoch-based folder names to avoid conflicts. Refer to scikit-learn/start.sh in this tutorial for implementation.
Package Version Dependency Issues
Symptom: Conflicts occur between package versions specified in requirements.txt and the versions required by the Python environment. This results in errors during installation or runtime.
Cause: Azure deployment environments enforce specific versions of Python and pre-installed packages, leading to mismatches when older or newer versions are explicitly defined. Additionally, the read-only file system in Azure App Service prevents modifying global packages like typing-extensions.
Resolution: Pin compatible dependency versions. For example, follow the instructions for installing scikit-learn from the scikit-learn 1.5.2 documentation. Refer to scikit-learn/requirements.txt in this tutorial.
Default Binding
Symptom: Despite setting the WEBSITES_PORT parameter in App Settings to match the port Flask listens on (e.g., Flask's default 5000), the deployment still fails.
Cause: The Flask framework's default settings are not overridden to bind to 0.0.0.0 or the required port.
Resolution: Explicitly bind Flask to 0.0.0.0:8000 in app.py. To avoid additional issues, it’s recommended to use the Azure Python Linux Web App's default port (8000), as this minimizes the need for extra configuration.
Missing System Commands in Restricted Environments
Symptom: In the WebJobs log, an error is logged stating that the ls command is missing.
Cause: This typically occurs in minimal environments, such as Azure App Services, containers, or highly restricted shells.
Resolution: Use predefined paths or variables in the script instead of relying on system commands. Refer to scikit-learn/webjob/train_heart_disease_model.sh in this tutorial for an example of handling such cases.
8. Conclusion
Azure App Service, while being a PaaS product with less flexibility compared to a VM, still offers several powerful features that allow us to fully leverage the benefits of AI frameworks. For example, the resource-intensive model training phase can be offloaded to a high-performance local machine. This approach enables the App Service to focus solely on loading models and serving predictions. Additionally, if the training dataset is frequently updated, we can configure WebJobs with scheduled triggers to retrain the model periodically, ensuring the prediction service always uses the latest version. These capabilities make Azure App Service well-suited for most business scenarios.
9. References
Scikit-learn Documentation
UCI Machine Learning Repository
Azure App Service Documentation
Azure CLI and Azure PowerShell Ignite 2024 Announcement
The priority for Azure CLI and Azure PowerShell remains to provide our customers with the most complete, secure, and easy-to-use tools to manage Azure resources. At Microsoft Ignite 2024, we are announcing the following new capabilities delivering on our priorities: Extending our coverage and commands API version upgrade. Security improvements. Investments in Copilot in Azure Extending our coverage In the past six months, we have added or refreshed coverage for new or existing Azure services within 30 days of their general availability. You will see new and updated command lines for ArcGateway, AzTerraform, ConnectedMachine, Fabric, Astro, Synapse, AppComplianceAutomation, Storage, App, and other modules. Note: To use the associated commands, you need to install the Azure CLI extension or the Azure PowerShell module. For details about all the commands that have been updated, as well as a complete list of the new features in this release for the Azure client tools, see the release notes for each tool: Azure CLI: https://learn.microsoft.com/cli/azure/release-notes-azure-cli Azure PowerShell: https://learn.microsoft.com/powershell/azure/release-notes-azureps Credential detection from Az CLIs outputs We have been actively working on hardening your defense in depth with secrets awareness in Azure command line tools. For Azure CLI and Azure PowerShell, in the past 6 months, we have collaborated with our internal team to replace verification patten with the Azure secret common library, expanding the coverage types of patches and the range of command lines. The range of command line detection has been almost 100% covered. The Azure CLI and Azure PowerShell use the same detection logic and are continually being upgraded. We still encourage users to enable environment parameters: AZURE_CLIENTS_SHOW_SECRETS_WARNING=True (Default) For Azure PowerShell only Our team is gradually transitioning to using SecureString for tokens, account keys, and secrets, replacing the traditional string types. Currently, we offer an opt-in method for the Get-AzAccessToken command line, which does not introduce breaking changes: Get-AzAccessToken –AsSecureString We encourage users to utilize the -AsSecureString parameter to output tokens securely. Over the next year, we plan to implement this security method across more command lines, converting all keys, tokens, and similar data from string types to SecureString. Please note that when the command line output defaults to -AsSecureString mode, it may result in breaking changes. Therefore, we advise users to stay updated with our breaking change documentation. Support Azure Linux 3.0 for Azure CLI Azure CLI has supported Azure Linux 3.0 from 2.65.0. The Azure Linux 3 user can install CLI with tdnf install azure-cli Starting from version 2.64.0 of Azure CLI, the base Linux distribution of Azure CLI is now Azure Linux 3.0. It’s available at Microsoft Artifact Registry (MAR) https://mcr.microsoft.com/en-us/artifact/mar/azure-cli/about. You can get it with the below command: docker pull mcr.microsoft.com/azure-cli or docker pull mcr.microsoft.com/azure-cli:azurelinux3.0 For further migration guidance especially involved with GitHub Actions, please check out more details from blog. Deprecate SP with certificate with az login –password for Azure CLI For az login, --password will no longer accept service principal certificate in Azure CLI 2.67.0. Use `--certificate` to pass a service principal certificate. 
# Logging in with secret should work as before az login --service-principal --username xxx --password mysecret --tenant xxx # Old way to log in with a certificate, will show a deprecation warning az login --service-principal --username xxx --password ~/mycert.pem --tenant xxx # New way to log in with a certificate az login --service-principal --username xxx --certificate ~/mycert.pem --tenant xxx Note: To sign in with a certificate, the certificate must be available locally as a PEM or DER file in ASCII format. PKCS#12 files (.p12/.pfx) don't work. When you use a PEM file, the PRIVATE KEY and CERTIFICATE must be appended together within the file. You don't need to prefix the path with an `@` like you do with other az commands. Azure PowerShell WAM Authentication Issues and Fixes Since version Az 12.0.0, Azure PowerShell has supported Web Authentication Manager (WAM) as the default authentication mechanism. During this period, several critical issues affected users logging in interactively. These issues have been addressed and fixed by version 13.0.0, including: The WAM login interface failing to pop up, resulting in login failures. Login failures for users using the device-code authentication method. The "Work and school account" option does not appear in the WAM pop-up window. Incompatibility of the Export-AzSshConfig and Enter-AzVM commands from the Az.Ssh module when WAM is enabled. For detailed announcements on specific issues, please refer to our WAM issues and Workarounds/azure-powershell issue. In response to these WAM issues, our team has been actively fixing bugs, making improvements, and establishing monitoring and alert mechanisms with relevant teams to detect issues early and assess their impact. Additionally, we have integrated test cases baseline into the release pipeline. We encourage users to enable the WAM function for security by using the command: Update-AzConfig -EnableLoginByWam $true If you encounter problems, please report them in Issues · Azure/azure-powershell Note: Sovereign Cloud does not currently support WAM, we plan to implement this in the coming months. Change in Azure CLI extension management Starting with Azure CLI version 2.56.0, a new `--allow-preview` parameter was introduced for the extension installations, with its default value set to True. This change, as outlined in our extension versioning guidelines, helps distinguish between stable and preview versions, ensuring consistency across stable releases while still enabling the publication of preview features. Beginning with version 2.67.0, Azure CLI will now install only stable versions of extension modules by default. If a later preview version of an extension is available, users will receive a warning message that explains how to enable preview versions by using the `--allow-preview` parameter. Important Note: If no stable version of an extension is available, preview versions will be installed by default, along with a warning message, like below, notifying users of this behavior. "No stable version of 'xxx' to install. Preview versions are allowed." Azure PowerShell Long Term Support releases (LTS) support Azure PowerShell already supports both Standard Term Support releases (STS) and Long-Term Support releases (LTS). Users can choose the appropriate version according to their project needs. Users can choose to stay with the LTS version until the next LTS version, or upgrade with the latest version to experience new features. The following document details the definitions of LTS and STS. 
Beginning with Az 12, even numbered releases are LTS versions. Azure PowerShell support lifecycle: Azure PowerShell support lifecycle | Microsoft Learn Azure CLI will provide LTS version in early 2025. More details could be found at Azure CLI lifecycle and support | Microsoft Learn Enhancement to Invoke-AzRestMethod in Azure PowerShell Azure PowerShell 13.0.0 introduces major enhancements to the Invoke-AzRestMethod cmdlet, empowering users with a new option to enable long-running operations (LRO) and flexible control over operation status polling in complex Azure workflows. Key Features of Invoke-AzRestMethod Enhancement: LRO Support with Enhanced Status Tracking: With the new -WaitForCompletion parameter, users can wait for the operations to complete and directly receive the final status. In debug mode, users can also monitor long-running operations (such as deployments or resource provisioning) and receive real-time status updates directly in their PowerShell session. Flexible Polling Options for Customized Control: The addition of -PollFrom and -FinalResultFrom parameters enable users to define custom polling URIs and specify final result header sources, ensuring compatibility across various Azure resources and scenarios. Example Usage: Using the new -WaitForCompletion parameter, here’s how to create a Managed HSM (Hardware Security Module) and track its provisioning status until it’s fully completed: Invoke-AzRestMethod -Method PUT -WaitForCompletion -Path "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/managedHSMs/{hsmName}" This example monitors the creation of a Managed HSM, providing real-time updates throughout the longer provisioning process (in debug mode), ensuring the resource reaches a fully operational state. For more details and examples, refer to the updated release notes: Azure PowerShell release notes Azure CLI/PS scenarios with Copilot in Azure In the second half of 2024, we improved knowledge of Azure CLI commands and end-to-end scenarios for Copilot in Azure to answer questions related to Azure CLI commands or scripts, following our best practices. In the past 6 months, we have optimized the following scenarios: Enhanced Prompt Flow and RAG architecture tailored for CLI script generation, ensuring higher command and scenario accuracy. Improved user intent recognition with hybrid search, enabling more precise retrieval of relevant knowledge from user queries. Supported parameter value injection, simplifying the process for customers to input parameter values and generate directly usable scripts on Copilot in Azure. Optimized the knowledge base to reduce hallucination issues. More accurately identified out-of-scope questions. In the 2024 Ignite Event, we’ve also released a public preview of AI Shell, which lets you access Copilot in Azure to help answer any questions you have about Azure CLI or Azure PowerShell. For more information about the AI Shell release please check out. AI Shell To learn more about Microsoft Copilot for Azure and how it can help you, visit: Microsoft Copilot for Azure Breaking Changes The latest breaking change guidance documents can be found at the links below. To read more about the breaking changes migration guide, ensure your environment is ready to install the newest version of Azure CLI and Azure PowerShell. 
Azure CLI: Release notes & updates – Azure CLI | Microsoft Learn
Azure PowerShell: Migration guide for Az 13.0.0 | Microsoft Learn
Milestone timelines:
Azure CLI Milestones
Azure PowerShell Milestones
Thank you for using the Azure command-line tools. We look forward to continuing to improve your experience. We hope you enjoy Ignite and all the great work released this week. We'd love to hear your feedback, so feel free to reach out anytime.
GitHub:
https://github.com/Azure/azure-cli
https://github.com/Azure/azure-powershell
Let's be in touch on X (Twitter): @azureposh @AzureCli
Azure CLI and Azure PowerShell team
Microsoft Entra /Azure Connect Reinstallation and Source Anchor Change
Hello everyone, I would like to talk about the possibility of changing the SourceAnchor in Azure AD Connect. Officially, this is not supported by Microsoft, but there is still a way to do it via a few detours.
The running AD Sync must be stopped first of all. To make the changeover, all users must first be soft-deleted. The most practical way to do this is to synchronize an OU in which there are no users. The Entra objects are now stored under "Deleted users". It is important to note that before restoring users who do not have an Exchange Online mailbox, Entra ID P1 or P2 licenses must be removed so that a second mailbox is not created. Now all users must be restored. After this has been done, the ImmutableID of the users must be removed via PowerShell. This is possible with the following command:
Get-MsolUser -All | Set-MsolUser -ImmutableId "$Null"
(If this command is needed for individual users, replace -All with -UserPrincipalName "example@email.de". Since the MSOnline module is deprecated, a Microsoft Graph-based alternative is sketched at the end of this post.)
Once the ImmutableID has been removed for all users, the status in Entra must show as Cloud Only. If this is the case, you can start with the next steps. It is important to note that the actions carried out above can lead to short-term outages and should therefore ideally be carried out before the weekend!
In the next step, a clean uninstallation of Azure AD Connect must be carried out. Here I would recommend the "ADsync uninstallation" article from MSXFAQ, where it is well explained. When uninstalling, only the steps that do not hinder a new installation should be carried out, but this is well explained in the article. After successfully uninstalling AD Sync, there may be propagation delays, which is why I would recommend waiting 24 hours before reinstalling. The waiting time can be skipped; it still worked for me without it.
As soon as you have installed AD Sync with the new desired attribute, you can start the sync. The users should now be matched with the existing cloud objects via soft match. If this does not work, it is possible to delete the ImmutableID again or to correct the errors via the AD Connect error display in Entra ID. Under "Other errors", several errors may be displayed; we fixed this by resolving all duplicate attribute errors.
I hope this has helped you a little. I am always open to feedback!
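Since the MSOnline module used above is deprecated, here is a rough Microsoft Graph PowerShell equivalent for a single user. Treat it as a sketch only: it assumes the Microsoft.Graph PowerShell SDK is installed, that the signed-in admin has User.ReadWrite.All consent, that the affected user is already in a cloud-only state, and that clearing onPremisesImmutableId by sending a null value is permitted in your tenant. Test it on one account before scripting it for everyone.
# Hedged sketch: clear onPremisesImmutableId for one user via Microsoft Graph
# Assumptions: Microsoft.Graph SDK installed, User.ReadWrite.All granted, user is cloud-only
Connect-MgGraph -Scopes "User.ReadWrite.All"

$upn  = "example@email.de"                                   # placeholder UPN
$body = @{ onPremisesImmutableId = $null } | ConvertTo-Json  # produces {"onPremisesImmutableId": null}

Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/v1.0/users/$upn" -Body $body -ContentType "application/json"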