Using OpenAI on Azure Web App
TOC

- Introduction to OpenAI
- System Architecture
  - Architecture
  - Focus of This Tutorial
- Setup Azure Resources
  - File and Directory Structure
  - ARM Template
  - ARM Template From Azure Portal
- Running Locally
  - Training Models and Training Data
  - Predicting with the Model
- Publishing the Project to Azure
- Running on Azure Web App
  - Training the Model
  - Using the Model for Prediction
- Troubleshooting
  - Startup Command Issue
  - App Becomes Unresponsive After a Period
  - az cli command for Linux webjobs fail
  - Others
- Conclusion
- References

1. Introduction to OpenAI

OpenAI is a leading artificial intelligence research and deployment company founded in December 2015. Its mission is to ensure that artificial general intelligence (AGI)—highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. OpenAI focuses on developing safe and scalable AI technologies and ensuring equitable access to these innovations.

Known for its groundbreaking advancements in natural language processing, OpenAI has developed models like GPT (Generative Pre-trained Transformer), which powers applications for text generation, summarization, translation, and more. GPT models have revolutionized fields like conversational AI, creative writing, and programming assistance. OpenAI has also released models like Codex, designed to understand and generate computer code, and DALL·E, which creates images from textual descriptions.

OpenAI operates with a unique hybrid structure: a for-profit company governed by a nonprofit entity to balance the development of AI technology with ethical considerations. The organization emphasizes safety, research transparency, and alignment with human values. By providing access to its models through APIs and fostering partnerships, OpenAI empowers developers, businesses, and researchers to leverage AI for innovative solutions across diverse industries. Its long-term goal is to ensure AI advances benefit humanity as a whole.

2. System Architecture

Architecture

Development Environment:
- OS: Ubuntu 18.04 (Bionic Beaver)
- Python Version: 3.7.3

Azure Resources:
- App Service Plan: SKU - Premium Plan 0 V3
- App Service: Platform - Linux (Python 3.9, Version 3.9.19)
- Storage Account: SKU - General Purpose V2
- File Share: No backup plan

Focus of This Tutorial

This tutorial walks you through the following stages:
- Setting up Azure resources
- Running the project locally
- Publishing the project to Azure
- Running the application on Azure
- Troubleshooting common issues

Each of these stages has numerous corresponding tools and solutions. The choices used in this session are marked with (V) below:

- Local OS: Windows / Linux (V) / Mac
- How to set up Azure resources: Portal (i.e., REST API) (V) / ARM (V) / Bicep / Terraform
- How to deploy the project to Azure: VSCode / CLI (V) / Azure DevOps / GitHub Action

3. Setup Azure Resources

File and Directory Structure

Please open a bash terminal and enter the following commands:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
bash ./openai/tools/add-venv.sh

If you are using a Windows platform, use the following alternative PowerShell commands instead:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
.\openai\tools\add-venv.cmd

After completing the execution, you should see the following directory structure:

- openai/tools/add-venv.*: The script executed in the previous step (cmd for Windows, sh for Linux/Mac) that creates all Python virtual environments required for this tutorial.
- .venv/openai-webjob/: A virtual environment used specifically for training the model (that is, calculating embedding vectors).
- openai/webjob/requirements.txt: The list of packages (with exact versions) required by the openai-webjob virtual environment.
- .venv/openai/: A virtual environment used specifically for the Flask application, which exposes the API endpoint for querying predictions (i.e., suggestions).
- openai/requirements.txt: The list of packages (with exact versions) required by the openai virtual environment.
- openai/: The main folder for this tutorial.
- openai/tools/arm-template.json: The ARM template that sets up all the Azure resources related to this tutorial, including an App Service Plan, a Web App, and a Storage Account.
- openai/tools/create-folder.*: A script that creates all directories required for this tutorial in the File Share, including train, model, and test.
- openai/tools/download-sample-training-set.*: A script that downloads a sample training set from News-Headlines-Dataset-For-Sarcasm-Detection, containing headline data from TheOnion and HuffPost, into the train directory of the File Share.
- openai/webjob/cal_embeddings.py: A script that calculates embedding vectors from headlines. It loads the training set, transforms each headline via the OpenAI API, and saves the embedding vectors in the model directory of the File Share.
- openai/App_Data/jobs/triggered/cal-embeddings/cal_embeddings.sh: A shell script for Azure App Service WebJobs. It activates the openai-webjob virtual environment and starts the cal_embeddings.py script.
- openai/api/app.py: The code of the Flask application, including routes, port configuration, input parsing, vector loading, prediction, and output generation.
- openai/start.sh: A script executed after deployment (as specified in the ARM template's Startup Command, which I will introduce later). It sets up the virtual environment and starts the Flask application to handle web requests.

ARM Template

We need to create the following resources or services:

- App Service Plan: manual creation not required; resource (plan)
- App Service: manual creation required; resource (app)
- Storage Account: manual creation required; resource (storageAccount)
- File Share: manual creation required; service

Let's take a look at the openai/tools/arm-template.json file. Refer to the configuration section for all the resources.
Since most of the configuration values don't require changes, I've placed them in the variables section of the ARM template rather than the parameters section. This helps keep the configuration simpler. However, I'd still like to briefly explain some of the more critical settings. As you can see, I've adopted a camelCase naming convention that combines the resource type with the setting name and hierarchy, which makes it easier to understand where each setting is used. The configurations in the diagram are sorted by resource name, but the following list is grouped by purpose for better clarity.

- storageAccountFileShareName: data-and-model. [Purpose 1: Link File Share to Web App] Use this fixed name for the File Share.
- storageAccountFileShareShareQuota: 5120. [Purpose 1: Link File Share to Web App] The value is in GB.
- storageAccountFileShareEnabledProtocols: SMB. [Purpose 1: Link File Share to Web App]
- appSiteConfigAzureStorageAccountsType: AzureFiles. [Purpose 1: Link File Share to Web App]
- appSiteConfigAzureStorageAccountsProtocol: Smb. [Purpose 1: Link File Share to Web App]
- planKind: linux. [Purpose 2: Specify platform and stack runtime] Select Linux (the default if the Python stack is chosen).
- planSkuTier: Premium0V3. [Purpose 2: Specify platform and stack runtime] Choose at least a Premium Plan to ensure enough memory for your AI workloads.
- planSkuName: P0v3. [Purpose 2: Specify platform and stack runtime] Same as above.
- appKind: app,linux. [Purpose 2: Specify platform and stack runtime] Same as above.
- appSiteConfigLinuxFxVersion: PYTHON|3.9. [Purpose 2: Specify platform and stack runtime] Select Python 3.9 to avoid dependency issues.
- appSiteConfigAppSettingsWEBSITES_CONTAINER_START_TIME_LIMIT: 600. [Purpose 3: Deploying] The value is in seconds, ensuring the Startup Command can continue execution beyond the default timeout of 230 seconds.
This tutorial's Startup Command typically takes around 300 seconds, so setting the limit to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages).
- appSiteConfigAppCommandLine: [ -f /home/site/wwwroot/start.sh ] && bash /home/site/wwwroot/start.sh || GUNICORN_CMD_ARGS=\"--timeout 600 --access-logfile '-' --error-logfile '-' -c /opt/startup/gunicorn.conf.py --chdir=/opt/defaultsite\" gunicorn application:app. [Purpose 3: Deploying] This is the Startup Command, which can be broken down into three parts. First ([ -f /home/site/wwwroot/start.sh ]): checks whether start.sh exists, which determines whether the app is in its initial state (just created) or has already been deployed. Second (bash /home/site/wwwroot/start.sh): if the file exists, the app has already been deployed, so start.sh is executed; it installs the necessary packages and starts the Flask application. Third (GUNICORN_CMD_ARGS=... gunicorn application:app): if the file does not exist, the command falls back to the default HTTP server (gunicorn) to start the web app. Since the command is enclosed in double quotes within the ARM template, replace \" with " during actual execution.
- appSiteConfigAppSettingsSCM_DO_BUILD_DURING_DEPLOYMENT: false. [Purpose 3: Deploying] Since the handling of the different virtual environments is already defined in start.sh, we do not need to initiate the Web App's default build process.
- appSiteConfigAppSettingsWEBSITES_ENABLE_APP_SERVICE_STORAGE: true. [Purpose 4: WebJobs] This setting enables the App Service storage feature, which is required for WebJobs (e.g., for model training).
- storageAccountPropertiesAllowSharedKeyAccess: true. [Purpose 5: Troubleshooting] This setting is enabled by default.
The reason for highlighting it is that certain enterprise IT policies may enforce changes to this configuration after a period of time, potentially causing a series of issues. For more details, please refer to the Troubleshooting section below.

Return to the bash terminal and execute the following commands (their purpose has been described earlier):

# Please change <ResourceGroupName> to your preferred name, for example: azure-appservice-ai
# Please change <RegionName> to your preferred region, for example: eastus2
# Please change <ResourcesPrefixName> to your preferred naming pattern, for example: openai-arm (this creates openai-arm-asp as the App Service Plan, openai-arm-app as the Web App, and openaiarmsa as the Storage Account)
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file ./openai/tools/arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>

If you are using a Windows platform, use the following alternative PowerShell commands instead:

# Please change <ResourceGroupName> to your preferred name, for example: azure-appservice-ai
# Please change <RegionName> to your preferred region, for example: eastus2
# Please change <ResourcesPrefixName> to your preferred naming pattern, for example: openai-arm (this creates openai-arm-asp as the App Service Plan, openai-arm-app as the Web App, and openaiarmsa as the Storage Account)
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file .\openai\tools\arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>

After execution, copy the output section containing the 3 key-value pairs from the result.
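If you prefer to script the follow-up steps, those three values can also be pulled out of the deployment result programmatically. This is a minimal sketch, assuming you saved the JSON printed by az deployment group create to a string; ARM deployment results expose template outputs under properties.outputs.<name>.value, and the output names shown in the comment are taken from this tutorial's template:

```python
import json

def read_deployment_outputs(deployment_json):
    """Extract the key-value pairs from an ARM deployment result.

    ARM places template outputs under properties.outputs, where each
    entry is an object carrying the actual value in its 'value' field
    (e.g., outputStorageName, outputStorageKey, outputShareName).
    """
    data = json.loads(deployment_json)
    outputs = data["properties"]["outputs"]
    return {name: entry["value"] for name, entry in outputs.items()}
```

With the parsed dictionary in hand, the mount commands in the next step can be templated instead of filled in by hand.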
Return to the bash terminal and execute the following commands:

# Please set the 3 variables you got from the previous step
OUTPUT_STORAGE_NAME="<outputStorageName>"
OUTPUT_STORAGE_KEY="<outputStorageKey>"
OUTPUT_SHARE_NAME="<outputShareName>"
sudo mkdir -p /mnt/$OUTPUT_SHARE_NAME
if [ ! -d "/etc/smbcredentials" ]; then
  sudo mkdir /etc/smbcredentials
fi
CREDENTIALS_FILE="/etc/smbcredentials/$OUTPUT_STORAGE_NAME.cred"
if [ ! -f "$CREDENTIALS_FILE" ]; then
  sudo bash -c "echo \"username=$OUTPUT_STORAGE_NAME\" >> $CREDENTIALS_FILE"
  sudo bash -c "echo \"password=$OUTPUT_STORAGE_KEY\" >> $CREDENTIALS_FILE"
fi
sudo chmod 600 $CREDENTIALS_FILE
sudo bash -c "echo \"//$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME cifs nofail,credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30\" >> /etc/fstab"
sudo mount -t cifs //$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME -o credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30

Alternatively, you can go to the Azure Portal, navigate to the File Share you just created, and copy the required command as shown in the diagram below. You can choose the Windows or Mac variant if that matches your dev environment. After executing the command, the network drive will be successfully mounted. You can use df to verify, as illustrated in the diagram.

ARM Template From Azure Portal

In addition to using az cli to invoke ARM templates, if the JSON file is hosted at a public network URL, you can also load its configuration directly into the Azure Portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example: Click Me. After filling in all the required information, click Create. Once the creation process is complete, click Outputs in the left menu to retrieve the connection information for the File Share.

4.
Running Locally

Training Models and Training Data

In the next steps you will need to use OpenAI services. Please ensure that you have registered as a member and added credits to your account (Billing overview - OpenAI API); for this example, adding $10 USD will be sufficient. You will also need to generate a new API key (API keys - OpenAI API); you may optionally create a project as well for future organization, depending on your needs (Projects - OpenAI API). After getting the API key, create a text file named apikey.txt in the openai/tools/ folder, paste the key you just copied into the file, and save it.

Return to the bash terminal and execute the following commands (their purpose has been described earlier):

source .venv/openai-webjob/bin/activate
bash ./openai/tools/create-folder.sh
bash ./openai/tools/download-sample-training-set.sh
python ./openai/webjob/cal_embeddings.py --sampling_ratio 0.002

If you are using a Windows platform, use the following alternative PowerShell commands instead:

.\.venv\openai-webjob\Scripts\Activate.ps1
.\openai\tools\create-folder.cmd
.\openai\tools\download-sample-training-set.cmd
python .\openai\webjob\cal_embeddings.py --sampling_ratio 0.002

After execution, the File Share will include the new directories and files. Let's take a brief detour to examine the structure of the training data downloaded from GitHub. The right side of the image explains each field of the data. This dataset was originally used to detect whether news headlines contain sarcasm; however, I am repurposing it for another application. In this example, I use the "headline" field to create embeddings. The left side displays the raw data, where each line is a standalone JSON string containing the necessary fields. In the code, I first extract the "headline" field from each record and send it to OpenAI to compute the embedding vector for the text.
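The extraction step just described is straightforward to reproduce. This is a minimal sketch, not the exact code from cal_embeddings.py; the file path and the sampling logic (mirroring the --sampling_ratio option) are illustrative:

```python
import json
import random

def load_headlines(path, sampling_ratio=1.0, seed=42):
    """Read a JSON-lines training file and return a sampled list of 'headline' values."""
    rng = random.Random(seed)  # fixed seed so repeated runs sample the same records
    headlines = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)  # each line is a standalone JSON object
            if rng.random() < sampling_ratio:
                headlines.append(record["headline"])
    return headlines
```

Each returned headline would then be sent to the OpenAI embeddings endpoint, and the resulting vectors saved to the model directory of the File Share.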
This embedding represents the position of the text in a semantic space (akin to coordinates in a multi-dimensional space). After the computation, I obtain an embedding vector for each headline; moving forward, I will refer to these simply as embeddings.

By the way, the sampling_ratio parameter in the command is something I added to speed up the training process. The original dataset contains nearly 30,000 records, which would result in a training time of around 8 hours. To simplify the tutorial, you can specify a relatively low sampling_ratio value (ranging from 0 to 1, representing 0% to 100% sampling of the original records). For example, a value of 0.01 corresponds to a 1% sample, allowing you to accelerate the experiment.

In this semantic space, vectors that are closer to each other tend to have similar meanings. The distance between vectors will therefore serve as our metric for evaluating the semantic similarity between pieces of text; for this, we use a method called cosine similarity. In the subsequent tutorial, we will construct some test texts, which will also be converted into embeddings using the same method. Each test embedding will then be compared against the previously computed headline embeddings. The comparison identifies the nearest headline embeddings in the multi-dimensional vector space, and their original text is returned. Additionally, we will leverage OpenAI's well-known generative AI capabilities to provide a textual explanation describing why the constructed test text is related to the recommended headline.

Predicting with the Model

Return to the terminal and execute the following commands. First, deactivate the virtual environment used for calculating the embeddings, then activate the virtual environment for the Flask application, and finally, start the Flask app.
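Before starting the app, it is worth seeing how small the similarity lookup really is. Cosine similarity is just the dot product of two vectors divided by the product of their norms; the sketch below is a dependency-free illustration of the nearest-headline comparison described above, not the project's actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b; 1.0 means the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_headline(query_embedding, headline_embeddings):
    """Return the headline whose stored embedding is most similar to the query."""
    return max(
        headline_embeddings,
        key=lambda h: cosine_similarity(query_embedding, headline_embeddings[h]),
    )
```

This is essentially the comparison that runs behind the /api/detect endpoint each time a query text is embedded and matched against the stored headline embeddings.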
Commands for Linux or Mac:

deactivate
source .venv/openai/bin/activate
python ./openai/api/app.py

Commands for Windows:

deactivate
.\.venv\openai\Scripts\Activate.ps1
python .\openai\api\app.py

When you see a screen similar to the following, the server has started successfully. Press Ctrl+C to stop the server if needed.

Before conducting the actual test, let's construct some sample query data:

education

Next, open a terminal and use the following curl command to send a request to the app:

curl -X GET http://127.0.0.1:8000/api/detect?text=education

You should see the calculation results, confirming that the embeddings and generative AI are working as expected. PS: Your results may differ from mine due to variations in the sampling of your training dataset compared to mine; additionally, OpenAI's generative content can produce different outputs depending on timing and context. Please keep this in mind.

5. Publishing the Project to Azure

Return to the terminal and execute the following commands.

Commands for Linux or Mac:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from the project
zip -r openai/app.zip openai/*
# Deploy the app
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai/app.zip --type zip
# Delete the Zip file
rm openai/app.zip

Commands for Windows:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from the project
Compress-Archive -Path openai\* -DestinationPath openai\app.zip
# Deploy the app
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai\app.zip --type zip
# Delete the Zip file
del openai\app.zip

PS: WebJobs follow the directory structure App_Data/jobs/triggered/<webjob_name>/. As a result, once the Web App is deployed, the WebJob is automatically deployed along with it, requiring no additional configuration.

6.
Running on Azure Web App

Training the Model

Return to the terminal and execute the following commands to invoke the WebJob.

Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'

You can check the training status by executing the following commands.
Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

Once the status shows "Processing Complete", you can get the latest detailed log by executing the following commands.
Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; history_id=$(az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | sed 's|.*/history/||') ; response=$(curl -X GET -H "Authorization: Bearer $token" -H "Content-Type: application/json" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01") ; log_url=$(echo "$response" | jq -r '.properties.output_url') ; curl -X GET -H "Authorization: Bearer $token" "$log_url"

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
$token = az account get-access-token --resource https://management.azure.com --query accessToken -o tsv ; $history_id = az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | ForEach-Object { ($_ -split "/history/")[-1] } ; $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01" -Headers @{ Authorization = "Bearer $token" } -Method GET ; $log_url = $response.properties.output_url ; Invoke-RestMethod -Uri $log_url -Headers @{ Authorization = "Bearer $token" } -Method GET

Once you see the report in the logs, the embeddings calculation is complete and the Flask app is ready for predictions. You can also find the newly calculated embeddings in the File Share mounted in your local environment.
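The trigger/status/log sequence above is also easy to drive from a script. The sketch below only assembles the management-plane URLs used by those commands and pulls the log URL out of a history entry; the resource names are placeholders, and the bearer token would still come from az account get-access-token (or the azure-identity library):

```python
import json

ARM = "https://management.azure.com"
API_VERSION = "2024-04-01"

def webjob_site(subscription_id, resource_group, webapp_name):
    """Base ARM path of a Web App resource."""
    return (f"{ARM}/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Web/sites/{webapp_name}")

def run_url(site, webjob_name):
    """URL to POST to (Bearer token, '{}' body) to start a triggered WebJob."""
    return f"{site}/triggeredwebjobs/{webjob_name}/run?api-version={API_VERSION}"

def list_url(site):
    """URL to GET the WebJob list, mirroring the status-check command above."""
    return f"{site}/webjobs?api-version={API_VERSION}"

def extract_output_url(history_response_json):
    """Pull properties.output_url out of a triggered-WebJob history entry."""
    return json.loads(history_response_json)["properties"]["output_url"]
```

Keeping the URL construction in one place makes it harder to mistype the long resource paths when switching between subscriptions or apps.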
Using the Model for Prediction

Just like in local testing, open a bash terminal and use the following curl command to send a request to the app:

# Please change <webapp_name> to your own
curl -X GET https://<webapp_name>.azurewebsites.net/api/detect?text=education

As with the local environment, you should see the expected results.

7. Troubleshooting

Startup Command Issue

Symptom: Without any code changes, and when the app was previously functioning, updating the Startup Command causes the app to stop working. The related default_docker.log shows multiple attempts to run the container within a short time, without errors, but docker.log shows the container does not respond on port 8000.

Cause: Since Linux Web Apps actually run in containers, the final command in the Startup Command must behave like the CMD instruction in a Dockerfile, for example:

CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]

The command must run in the foreground (i.e., not in daemon mode) and must not exit unless manually interrupted.

Resolution: Check the final command in the Startup Command to ensure it does not run in daemon mode. Alternatively, use the Web SSH interface to execute and verify the commands directly.

App Becomes Unresponsive After a Period

Symptom: An app that runs normally becomes unresponsive after some time. Both the front-end webpage and the Kudu page display an "Application Error," and the deployment log shows "Too many requests." Additionally, the local environment can no longer connect to the associated File Share.

Cause: Clicking "diagnostic resources" on the initial error screen provides more detailed error information. In this example, the issue is caused by internal enterprise policies or automations (e.g., enterprise applications) that periodically or randomly scan storage account settings created by employees. If the settings are deemed non-compliant with security standards, they are automatically adjusted.
For instance, the allowSharedKeyAccess parameter may be forcibly set to false, preventing both the Web App and the local development environment from connecting to the File Share under the Storage Account. The modification history of such settings can be checked via the Activity Log of the Storage Account (note that only the last 90 days of data are retained).

Resolution: The proper approach is to work with the enterprise IT team offline to coordinate and request the necessary permissions. As a temporary workaround, set the affected setting to Enabled during testing periods and revert it to Disabled afterward. You can find the allowSharedKeyAccess setting here. Note: Azure Storage Mount currently does not support access via Managed Identity.

az cli command for Linux webjobs fail

Symptom: An "Operation returned an invalid status 'Unauthorized'" message appears on different platforms, even in Azure Cloud Shell with the latest az version.

Cause: After adding "--debug --verbose" to the command, I could see which REST API the actual error occurred on. For example, I was using this command (az webapp webjob triggered):

az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app --debug --verbose

This shows that the operation is invoked through this API: /Microsoft.Web/sites/{app_name}/triggeredwebjobs (Web Apps - List Triggered Web Jobs). After testing that API directly from the official docs, I still got the same error, which means this preview feature is still under construction and cannot be used at the moment.

Resolution: I found a related API endpoint via the Azure Portal: /Microsoft.Web/sites/{app_name}/webjobs (Web Apps - List Web Jobs). After testing that API directly from the official docs, I was able to get the trigger list. So I modified the original command:

az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app

to the following commands (please note the differences between the Linux/Mac and Windows versions).
Make sure to replace <subscription_id>, <resourcegroup_name>, and <webapp_name> with your specific values.

Commands for Linux or Mac:

token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

Commands for Windows:

$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

For the "run" command, the same problematic API is involved, so I modified that operation as well.

Commands for Linux or Mac:

token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

Commands for Windows:

$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'

Others

Using Scikit-learn on Azure Web App

8.
Conclusion

Beyond simple embedding vector calculations, OpenAI's most notable strength is generative AI. You can provide instructions to the GPT model through natural language (as a prompt), clearly specifying the format you need, and then easily parse the returned content. While PaaS products are not ideal for heavy vector calculations, they are well suited to acting as intermediaries that forward commands to generative AI. These outputs can be used for various applications, such as patent infringement detection, plagiarism detection in research papers, or trending news analysis. I believe that in the future, we will see more similar applications on Azure Web Apps.

9. References

- Overview - OpenAI API
- News-Headlines-Dataset-For-Sarcasm-Detection
- Quickstart: Deploy a Python (Django, Flask, or FastAPI) web app to Azure - Azure App Service
- Configure a custom startup file for Python apps on Azure App Service on Linux - Python on Azure
- Mount Azure Storage as a local share - Azure App Service
- Deploy to Azure button - Azure Resource Manager
- Using Scikit-learn on Azure Web App

Unable to process AAS model connecting to Azure SQL with Service Account
Hello, I have built a demo SSAS model that I am hosting on an Azure Analysis Services server. The model connects to an Azure SQL database in my tenant (the database is the default AdventureWorks provided by Azure when creating your first DB). To connect to Azure SQL, I created an app registration (service principal) and granted it reader access to my Azure SQL DB. If I log in to the Azure SQL DB from SSMS with this account, using Microsoft Entra service principal authentication with ClientId@TenantID as the username and the secret value as the password, I am able to log in and SELECT from the tables.

However, when I try to process the SSAS model, I get an error. For reference, below is the TMSL script that sets the DataSource part of the model after deployment via YAML pipelines (variables are replaced when running). I think the issue lies in the "AuthenticationKind" value I provided in the credential, but I can't figure out what to use. When I create the data source like this and process, I get the error: Failed to save modifications to the server. Error returned: '<ccon>Windows authentication has been disabled in the current context.</ccon>'. I don't understand why, since I am not using the Windows authentication kind. Every other keyword I tried in the "AuthenticationKind" field returns an "AuthenticationKind not supported" error. Any help on how to change this script would be appreciated.

{
  "createOrReplace": {
    "object": {
      "database": "$(AAS_DATABASE)",
      "dataSource": "$(AZSQLDataSourceName)"
    },
    "dataSource": {
      "type": "structured",
      "name": "$(AZSQLDataSourceName)",
      "connectionDetails": {
        "protocol": "tds",
        "address": {
          "server": "$(AZSQLServer)"
        },
        "initialCatalog": "$(AZSQLDatabase)"
      },
      "credential": {
        "AuthenticationKind": "ServiceAccount",
        "username": "$(AZSQL_CLIENT_ID)@$(AZSQL_TENANT_ID)",
        "password": "$(AZSQL_CLIENT_SECRET)"
      }
    }
  }
}

New Features in Azure Container Apps VS Code extension
New Features in Azure Container Apps VS Code extension

👆 Install VS Code extension

Summary of Major Changes
- New Managed Identity Support for connecting container apps to container registries. This is now the preferred method for securing these resources, provided you have sufficient privileges.
- New Container View: introduced with several commands for easier editing of container images and environment variables.
- One-Click Deployment: Deploy to Container App... added to the top-level container app node. This supports deployments from a workspace project or container registry. To manage multiple applications in a workspace project or enable faster deployments with saved settings, use Deploy Project from Workspace, accessible via the workspace view.
- Improved Activity Log Output: all major commands now include improved activity log outputs, making it easier to track and manage your activities.
- Quickstart Image for Container App Creation: the Create Container App... command now initializes with a quickstart image, simplifying the setup process.

New Commands and Enhancements
- Managed identity support for new connections to container registries.
- New command Deploy to Container App... found on the container app item. This one-click deploy command allows deploying from a workspace project or container registry while in single revision mode.
- New Container view under the container app item allows direct access to the container's image and environment variables.
- New command Edit Container Image... allows editing of container images without prompting to update environment variables.
- Environment variable CRUD commands: multiple new commands for creating, reading, updating, and deleting environment variables.
- Convert Environment Variable to Secret: quickly turn an environment variable into a container app secret with this new command.

Changes and Improvements
- The Create Container App... command now always starts with a quickstart image.
- Renamed the Update Container Image... command to Edit Container....
This command is now found on the container item.
- When running Deploy Project from Workspace..., if remote environment variables conflict with saved settings, you are prompted to update. Added a new envPath option, useRemoteConfiguration.
- Deploying an image with the Docker extension now allows targeting specific revisions and containers.
- When deploying a new image to a container app, the ingress prompt is only shown when more than the image tag has changed.
- Improved ACR selection dropdowns, providing better pick recommendations and sorting by resource group.
- Improved activity log outputs for major commands.
- Changed the draft deploy prompt to be a quick pick instead of a pop-up window.

We hope these new features and improvements will simplify deployments and make your Azure Container Apps experience even better. Stay tuned for more updates, and as always, we appreciate your feedback! Try out these new features today and let us know what you think! Your feedback is invaluable in helping us continue to improve and innovate.

Azure Container Apps VS Code Extension Full changelog:
Azure DevOps - Agent pool report and replace.

As usage of Azure DevOps organisations grows, so does the number of projects, repositories, pipelines, and agent pools in use. With new services available such as Managed DevOps Pools, it can appear a mammoth task for a central IT function to manually trawl through every pipeline, noting down each agent pool being used. Replacing these values is potentially even more complicated: after creating the new agent pools, mapping them by hand carries real potential for human error.
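A starting point for such a report is the Agent Pools REST API. The sketch below is not from the post: the endpoint shape follows the distributedtask API, and the helper names (fetch_pools, pool_report) are illustrative. The summary function is demonstrated offline on a response-shaped sample.

```python
import base64
import json
from urllib import request

def pool_report(pools_json: dict) -> dict:
    # Summarize a /_apis/distributedtask/pools response as
    # pool name -> pool id, for quick lookup when remapping pipelines.
    return {p["name"]: p["id"] for p in pools_json.get("value", [])}

def fetch_pools(organization: str, pat: str) -> dict:
    # Call the Agent Pools REST API, authenticating with a PAT.
    url = f"https://dev.azure.com/{organization}/_apis/distributedtask/pools?api-version=7.1"
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = request.Request(url, headers={"Authorization": f"Basic {token}"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# Offline demonstration with a response-shaped sample:
sample = {"value": [{"id": 9, "name": "Default"}, {"id": 12, "name": "ManagedPool"}]}
print(pool_report(sample))
```

From there, the report can be cross-referenced against each pipeline's YAML to find which pools are actually referenced before replacing them.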
How to track the parent and child relationships within the entire hierarchy in Azure DevOps (ADO)?

I am currently facing a situation where I can track the parent-child relationship up to only two levels. Our structure consists of the following hierarchy: EPIC > FEATURE > USER STORY > TASK. At present, I can trace relationships up to two levels, but I need to modify my query to capture the subsequent child relationships. Could you please let me know if it is possible to track all these relationships in a single query?
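One common way to get the full tree in a single query (a suggestion on my part, not confirmed by this thread) is a WIQL link query with MODE (Recursive), which follows Hierarchy-Forward links through every level rather than stopping at direct children. The sketch below builds the request body for the WIQL REST endpoint; the wiql_payload helper name is illustrative.

```python
# WIQL tree query: MODE (Recursive) walks Hierarchy-Forward links all the
# way down (Epic -> Feature -> User Story -> Task), not just one level.
WIQL_TREE_QUERY = """
SELECT [System.Id], [System.Title], [System.WorkItemType]
FROM workitemLinks
WHERE ([Source].[System.WorkItemType] = 'Epic')
  AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward')
MODE (Recursive)
"""

def wiql_payload(query: str) -> dict:
    # Body shape for POST {org}/{project}/_apis/wit/wiql?api-version=7.1
    return {"query": query.strip()}

print(wiql_payload(WIQL_TREE_QUERY)["query"])
```

The same query also runs as a "Tree of work items" query in the Azure DevOps query editor.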
Azure DevOps - How to Publish Artifacts Using REST API in Python?

I'm struggling to implement this functionality and could use some help. Here's my setup: I'm using YAML pipelines in Azure DevOps. Part of the pipeline includes a Python task with a long and complex processing script that generates output reports. For better control, I want to add the Publish Artifacts functionality (based on some logic) directly within the Python script. So far, I have tried the following REST API calls without success.

Part 1 - Works

url = f"https://dev.azure.com/{organization}/{project}/_apis/build/builds/{build_id}/artifacts?artifactName={artifact_name}&api-version=7.1-preview.5"
headers = {
    "Authorization": f"Basic {ACCESS_TOKEN}",
    "Accept": "application/json",
    "Content-Type": "application/json",
}
payload = {
    "name": artifact_name,
    "resource": {
        "type": "Container",
        "data": artifact_name,
        "properties": {"RootId": artifact_name},
    },
}
logger.info("Creating artifact metadata...")
response = requests.post(url, headers=headers, json=payload)
if response.status_code == 200:
    logger.info("Artifact metadata created successfully.")
    response_json = response.json()
    logger.info(f"Create Pre-Payload Response: {response_json}")

Part 2 - Fails

headers = {
    "Authorization": f"Basic {ACCESS_TOKEN}",
    "Content-Type": "application/octet-stream",
}
for artifact_file in artifact_files:
    if not os.path.exists(artifact_file):
        logger.warning(f"File {artifact_file} does not exist. Skipping.")
        continue
    # Construct the full upload URL
    item_path = f"{artifact_name}/{os.path.basename(artifact_file)}"
    upload_url = f"{container_url}?itemPath={item_path}"
    logger.info(f"Uploading: {artifact_file} to {upload_url}")
    with open(artifact_file, "rb") as f:
        response = requests.put(upload_url, headers=headers, data=f)
    if response.status_code == 201:
        logger.info(f"File {artifact_file} uploaded successfully.")
    else:
        logger.error(f"Failed to upload {artifact_file}: {response.status_code}, {response.text}")

Part 2 returns a 404, like the one below.
INFO:__main__:Uploading: reports/test.json to https://dev.azure.com/OrgDevOps/_apis/resources/Containers/c82916f3-4665-43bf-8927-e05a3b6492a9?itemPath=drop_v3/test.json
ERROR:__main__:Failed to upload reports/test.json: 404, <!DOCTYPE html >
<html>
<head>
<title>The controller for path '/_apis/resources/Containers/c82916f3-4665-43bf-8927-e05a3b6492a9' was not found or does not implement IController.</title>
<style type="text/css">html { height: 100%; } …

Any guidance or working examples would be greatly appreciated. Thanks in advance!
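For reference, one route that avoids the container upload API entirely (not something the original poster mentioned) is the Azure Pipelines ##vso[artifact.upload] logging command: a script running inside a pipeline job can publish a file as an artifact simply by printing the command to stdout, where the agent picks it up. The helper name below is illustrative.

```python
import os

def publish_artifact(path: str, artifact_name: str) -> str:
    # Emit the Azure Pipelines logging command that uploads a file as a
    # build artifact; the agent scans stdout for ##vso[...] commands.
    command = f"##vso[artifact.upload artifactname={artifact_name}]{os.path.abspath(path)}"
    print(command)
    return command

# Example: publish a generated report under the artifact name "reports".
publish_artifact("reports/test.json", "reports")
```

Outside a pipeline job the printed line is inert, so the same script still runs locally; only an Azure Pipelines agent acts on it.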
Azure Devops Pipeline for Power Platform solution : Tenant to Tenant

I have a query related to Azure DevOps pipelines. Is it possible to move a solution from one tenant to another tenant? Is it possible to move a canvas app solution using Azure DevOps pipelines? If yes: I am using SharePoint lists. Can these be moved, or, since they won't be part of the solution, should this be done manually? I am also using environment variables; how will these be mapped in the receiving tenant using pipelines? I have a few solutions that I need to move from one tenant to another every sprint, so what would be a suitable plan/license for this? Can you please share relevant documents, steps, or related videos regarding this?