Azure PaaS
Transitioning from Non-managed to Managed WordPress on App Service Linux
Introduction

We've received numerous queries about WordPress on App Service, and we love it! Your feedback helps us improve our offerings. A common theme is the challenges faced with non-managed WordPress setups. Our managed WordPress offering on App Service is designed to be highly performant, secure, and seamlessly integrated with Azure services like Azure Database for MySQL flexible server, CDN/Front Door, Blob Storage, VNET, and Azure Communication Services. While some specific cases might require a custom WordPress setup, most users benefit significantly from our managed service, enjoying better performance, security, easier management, and cost savings.

If you're experiencing performance issues or problems with stack updates, you might be using a non-managed WordPress setup. This can happen if you didn't use our marketplace offering, or if you replaced the WordPress files via FTP after setup. In such cases, check whether you're using the managed WordPress service. In this article, we'll explore how to check whether you're using the managed offering and how to transition if you're not.

Why Choose Managed WordPress on App Service?

Under the Hood

- Optimized container image: We use a container image with numerous optimizations. Learn more: https://github.com/Azure/wordpress-linux-appservice
- Environment variables: These configure WordPress and integrate various Azure resources. Learn more: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_application_settings.md
- Azure resources: We integrate multiple Azure resources, such as App Service, MySQL flexible server, Entra ID, VNET, ACS Email, CDN/Front Door, and Blob Storage, all configured via environment variables. Each resource is also individually tuned to work best with WordPress.

Benefits of the Managed Offering

- Managed tech stack: Our team handles updates for PHP, Nginx, WordPress, etc., ensuring you're always on the latest versions without performance or security concerns. Read more: https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-keep-your-wordpress-website-stack-on-azure-app-service-up-to-date/3832193
- Managed MySQL instance: We use Azure Database for MySQL flexible server as the WordPress database. Many customers use in-app databases, which increase maintenance costs and require manual configuration. Our managed MySQL instance is optimized (server parameters) for performance and security, and you don't need to worry about upgrades.
- Azure service integrations: The managed offering integrates seamlessly with Azure services like CDN, Front Door, Entra ID, VNET, and Communication Services for Email. These integrations are important for enhancing the WordPress experience. For example, without ACS Email, WordPress can't send emails, affecting tasks like password resets and user invitations. We handle these integrations through environment variables, simplifying the setup. Learn more: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_application_settings.md
- Simplified creation: Creating a WordPress site involves configuring various resources, which can be complex. Our managed service simplifies this process. See how to create a WordPress site: https://learn.microsoft.com/en-us/azure/app-service/quickstart-wordpress
- Simplified management: Managing multiple resources can be complex. We manage this through environment variables, and we extend this capability to complex WordPress configurations as well.
For example, WordPress multisite: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_multisite_installation.md
- Security: We provide best-in-class security, such as the use of managed identities: https://techcommunity.microsoft.com/blog/appsonazureblog/managed-identity-support-for-wordpress-on-app-service/4241435. We ensure all resources are within a VNET and provide phpMyAdmin for database management: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_phpmyadmin.md
- Performance improvements: We have optimized performance with the W3TC plugin, local storage caching, and efficient use of caching, content delivery, and storage.
- Others: There are several other features you might be interested in: https://learn.microsoft.com/en-us/azure/app-service/overview-wordpress and https://learn.microsoft.com/en-us/azure/app-service/wordpress-faq

How to Check if You're Using the Managed WordPress on App Service

To determine if you're using the managed offering, follow these steps:

1. Check the container image: Go to the App Service overview page in the Azure portal and look for the "Container image" in the Properties tab. If the image matches one of our supported images (https://github.com/Azure/wordpress-linux-appservice), you're likely using the managed service; a CLI alternative is sketched after this section. If not, you'll need to migrate to the managed offering, which we'll cover later.
2. Verify environment variables: Access the Kudu console and navigate to the File manager. Open the /home/site/wwwroot/wp-config.php file and check whether it uses the environment variables correctly.
3. Check deployment status: In the File manager, locate the /home/wp-locks/wp_deployment_status.txt file. WARNING: Do not edit this file, as doing so may cause unintended issues; simply check the entries. If the file is missing or its contents differ from the expected entries, you're using a non-managed WordPress site. If the file is present and the contents match, you're on the managed offering.

How to Transition to the Managed Offering

Transitioning to the managed WordPress on App Service can be done in two ways.

Highly recommended approach:

1. Create a new managed WordPress site: Follow the steps in this setup guide to create a new managed WordPress site on App Service: https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-set-up-a-new-wordpress-website-on-azure-app-service/3729150
2. Migrate content using the All-in-One Migration plugin: Use the All-in-One Migration plugin to transfer your content from the source site to the new managed site. This migration guide provides detailed instructions. Although it's tailored for migrating from WP Engine, the steps apply to this scenario as well; simply skip the WP Engine-specific steps. https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-migrate-from-wp-engine-to-wordpress-on-app-service/4259573
3. Point your custom domain to the new site: Update your custom domain to point to the new managed WordPress site, following the instructions in this custom domain guide: https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-use-custom-domains-with-wordpress-on-app-service/3886247

Not recommended approach: Some customers ask if they can simply apply the managed container image, add environment variables, and create the necessary resources manually. While this is technically possible, it often leads to numerous errors and involves many steps. If any step goes wrong, you might not achieve the desired outcome and could potentially break your existing site. The recommended approach ensures your existing site remains safe and intact until the new site is fully operational.
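As a CLI companion to the "How to Check" steps above, you can inspect the container image and the configured app settings without opening the portal. This is a minimal sketch, assuming the Azure CLI is installed and signed in; the resource group and app names are placeholders:

# Show the runtime/container image configured for the site
az webapp config show --resource-group <resourcegroup_name> --name <webapp_name> --query linuxFxVersion -o tsv

# List the app settings (managed WordPress sites define their WordPress-related environment variables here)
az webapp config appsettings list --resource-group <resourcegroup_name> --name <webapp_name> -o table

If the image reported by linuxFxVersion doesn't match one of the supported images in the GitHub repository linked above, you're likely on a non-managed setup.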
We hope you transition to the managed WordPress on App Service and enjoy the best WordPress experience!

Support and Feedback

We're here to help! If you need any assistance, feel free to open a support request through the Microsoft Azure portal: New support request - Microsoft Azure. For more details about our offering, check out the announcement of the General Availability of WordPress on Azure App Service in the Microsoft Tech Community: Announcing the General Availability of WordPress on Azure App Service - Microsoft Tech Community.

We value your feedback and ideas on how we can improve WordPress on Azure App Service. Share your thoughts and suggestions on our Community page (Post idea · Community (azure.com)) or report any issues on our GitHub repository (Issues · Azure/wordpress-linux-appservice (github.com)). Alternatively, you can start a conversation with us by emailing wordpressonazure@microsoft.com.
Superfast using Web App and Managed Identity to invoke Function App triggers

TOC
1. Introduction
2. Setup
3. References

1. Introduction

Many enterprises prefer not to use App Keys to invoke Function App triggers, as they are concerned that these fixed strings might be exposed. This method allows you to invoke Function App triggers using Managed Identity for enhanced security. I will provide examples in both Bash and Node.js.

2. Setup

1. Create a Linux Python 3.11 Function App

1.1. Configure Authentication to block unauthenticated callers while allowing the Web App's Managed Identity to authenticate:

- Identity Provider: Microsoft
- Choose a tenant for your application and its users: Workforce
- App registration type: Create
- Name: [automatically generated]
- Client Secret expiration: [fit your business purpose]
- Supported Account Type: Any Microsoft Entra Directory - Multi-Tenant
- Client application requirement: Allow requests from any application
- Identity requirement: Allow requests from any identity
- Tenant requirement: Use default restrictions based on issuer
- Token store: [checked]

1.2. Create an anonymous trigger. Since your app is already protected by the App Registration, additional Function App-level protection is unnecessary; otherwise, you would need a Function Key to trigger it.

1.3. Once the Function App is configured, try accessing the endpoint directly. You should receive a 401 Unauthorized error, confirming that triggers cannot be accessed without proper Managed Identity authorization.

1.4. After making these changes, wait 10 minutes for the settings to take effect.

2. Create a Linux Node.js 20 Web App, Obtain an Access Token, and Invoke the Function App Trigger (Bash Example)

2.1. Enable System Assigned Managed Identity in the Web App settings.

2.2. Open the Kudu SSH console for the Web App.

2.3. Run the following commands, making the necessary modifications:
- subscriptionsID: replace with your Subscription ID.
- resourceGroupsID: replace with your Resource Group ID.
- application_id_uri: replace with the Application ID URI from your Function App's App Registration.
- https://az-9640-myfa.azurewebsites.net/api/my_test_trigger: replace with your Function App trigger URL.

# Please set up the target resource to yours
subscriptionsID="01d39075-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
resourceGroupsID="XXXX"

# Variable setting (no need to change)
identityEndpoint="$IDENTITY_ENDPOINT"
identityHeader="$IDENTITY_HEADER"
application_id_uri="api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX"

# Install necessary tool
apt install -y jq

# Get access token
tokenUri="${identityEndpoint}?resource=${application_id_uri}&api-version=2019-08-01"
accessToken=$(curl -s -H "Metadata: true" -H "X-IDENTITY-HEADER: $identityHeader" "$tokenUri" | jq -r '.access_token')
echo "Access Token: $accessToken"

# Run trigger
response=$(curl -s -o response.json -w "%{http_code}" -X GET "https://az-9640-myfa.azurewebsites.net/api/my_test_trigger" -H "Authorization: Bearer $accessToken")
echo "HTTP Status Code: $response"
echo "Response Body:"
cat response.json

2.4. If everything is set up correctly, you should see a successful invocation result.
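If you get a 401 Unauthorized instead, it can help to inspect the claims inside the token you acquired. The following is a minimal debugging sketch of my own (not from the original steps) that decodes the JWT payload with standard tools and checks that the aud claim matches your Application ID URI:

# Decode the JWT payload (the second dot-separated, base64url-encoded segment)
payload=$(echo "$accessToken" | cut -d '.' -f 2)
# Convert base64url to base64 and restore padding so base64 -d accepts it
payload=$(echo "$payload" | tr '_-' '/+')
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
echo "$payload" | base64 -d | jq '{aud, appid, exp}'

The aud value should match the application_id_uri you requested the token for; if it doesn't, the Function App's authentication layer will reject the call.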
3. Invoke the Function App Trigger Using Web App (Node.js Example)

I have also provided an example, which you can modify accordingly, save to /home/site/wwwroot/callFunctionApp.js, and run:

cd /home/site/wwwroot/
vi callFunctionApp.js
npm init -y
npm install @azure/identity axios
node callFunctionApp.js

// callFunctionApp.js
const { DefaultAzureCredential } = require("@azure/identity");
const axios = require("axios");

async function callFunctionApp() {
  try {
    const applicationIdUri = "api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX"; // Change here
    const credential = new DefaultAzureCredential();
    console.log("Requesting token...");
    const tokenResponse = await credential.getToken(applicationIdUri);
    if (!tokenResponse || !tokenResponse.token) {
      throw new Error("Failed to acquire access token");
    }
    const accessToken = tokenResponse.token;
    console.log("Token acquired:", accessToken);
    const apiUrl = "https://az-9640-myfa.azurewebsites.net/api/my_test_trigger"; // Change here
    console.log("Calling the API now...");
    const response = await axios.get(apiUrl, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    console.log("HTTP Status Code:", response.status);
    console.log("Response Body:", response.data);
  } catch (error) {
    console.error("Failed to call the function", error.response ? error.response.data : error.message);
  }
}

callFunctionApp();

Below is my execution result:

3. References

Tutorial: Managed Identity to Invoke Azure Functions | Microsoft Learn
How to Invoke Azure Function App with Managed Identity | by Krizzia 🤖 | Medium
Configure Microsoft Entra authentication - Azure App Service | Microsoft Learn
How to set up subdirectory Multisite in WordPress on Azure App Service

WordPress Multisite is a feature of WordPress that enables you to run and manage multiple WordPress websites using the same WordPress installation. Follow these steps to set up Multisite in your WordPress website on App Service...
Using NVIDIA Triton Inference Server on Azure Container Apps

TOC
1. Introduction to Triton
2. System Architecture
   - Architecture
   - Focus of This Tutorial
3. Setup Azure Resources
   - File and Directory Structure
   - ARM Template
   - ARM Template From Azure Portal
4. Testing Azure Container Apps
5. Conclusion
6. References

1. Introduction to Triton

Triton Inference Server is an open-source, high-performance inferencing platform developed by NVIDIA to simplify and optimize AI model deployment. Designed for both cloud and edge environments, Triton enables developers to serve models from multiple deep learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, TensorRT, and OpenVINO, using a single standardized interface. Its goal is to streamline AI inferencing while maximizing hardware utilization and scalability.

A key feature of Triton is its support for multiple model execution modes, including dynamic batching, concurrent model execution, and multi-GPU inferencing. These capabilities allow organizations to efficiently serve AI models at scale, reducing latency and optimizing throughput. Triton also offers built-in support for HTTP/REST and gRPC endpoints, making it easy to integrate with various applications and workflows. Additionally, it provides model monitoring, logging, and GPU-accelerated inference optimization, enhancing performance across different hardware architectures.

Triton is widely used in AI-powered applications such as autonomous vehicles, healthcare imaging, natural language processing, and recommendation systems. It integrates seamlessly with NVIDIA AI tools, including TensorRT for high-performance inference and DeepStream for video analytics. By providing a flexible and scalable deployment solution, Triton enables businesses and researchers to bring AI models into production with ease, ensuring efficient and reliable inferencing in real-world applications.

2. System Architecture

Architecture

Development environment:
- OS: Ubuntu 18.04 Bionic Beaver
- Docker version: 26.1.3

Azure resources:
- Storage Account: SKU - General Purpose V2
- Container Apps Environment: SKU - Consumption
- Container Apps: N/A

Focus of This Tutorial

This tutorial walks you through the following stages:
- Setting up Azure resources
- Publishing the project to Azure
- Testing the application

Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this session is listed below.

- Local OS: Windows / Linux (used here) / Mac
- How to set up Azure resources and deploy: Portal (i.e., REST API) / ARM (used here) / Bicep / Terraform

3. Setup Azure Resources

File and Directory Structure

Please open a terminal and enter the following commands:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai

After completing the execution, you should see the following directory structure:

- triton/tools/arm-template.json: The ARM template to set up all the Azure resources related to this tutorial, including a Container Apps Environment, a Container App, and a Storage Account with the sample dataset.

ARM Template

We need to create the following resources or services:

- Container Apps Environment: manual creation required (resource)
- Container Apps: manual creation required (resource)
- Storage Account: manual creation required (resource)
- Blob: manual creation required (service)
- Deployment Script: manual creation required (resource)

Let's take a look at the triton/tools/arm-template.json file. Refer to the configuration section for all the resources. Since most of the configuration values don't require changes, I've placed them in the variables section of the ARM template rather than the parameters section. This helps keep the configuration simpler.
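If you prefer deploying the template from the CLI rather than the portal button shown below, the following is a minimal sketch, assuming the Azure CLI is installed and signed in; the resource group name and region are placeholders you should adapt:

# Create a resource group and deploy the Triton ARM template into it
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file ./triton/tools/arm-template.json

If the template defines required parameters, pass them with --parameters name=value.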
That said, a few of the more critical settings deserve a brief explanation. As you can see, I've adopted a camelCase naming convention, which combines the [Resource Type] with [Setting Name and Hierarchy]. This makes it easier to understand where each setting will be used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity.

- storageAccountContainerName = data-and-model [Purpose 1: Blob Container for Model Storage] Use this fixed name for the Blob Container.
- scriptPropertiesRetentionInterval = P1D [Purpose 2: Script for Uploading Models to Blob Storage] No adjustments are needed. This script is designed to launch a one-time instance immediately after the Blob Container is created. It downloads sample model files and uploads them to the Blob Container. The Deployment Script resource will automatically be deleted after one day.
- caeNamePropertiesPublicNetworkAccess = Enabled [Purpose 3: For Testing] ACA requires your local machine to perform tests; therefore, external access must be enabled.
- appPropertiesConfigurationIngressExternal = true [Purpose 3: For Testing] Same as above.
- appPropertiesConfigurationIngressAllowInsecure = true [Purpose 3: For Testing] Same as above.
- appPropertiesConfigurationIngressTargetPort = 8000 [Purpose 3: For Testing] The Triton service container uses port 8000.
- appPropertiesTemplateContainers0Image = nvcr.io/nvidia/tritonserver:22.04-py3 [Purpose 3: For Testing] The Triton service container uses this public image.

ARM Template From Azure Portal

In addition to using az cli to invoke ARM templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example: Click Me. After filling in all the required information, click Create, and you can run a test once the creation process is complete.

4. Testing Azure Container Apps

In our local environment, use the following command to start a one-time Docker container. We will use NVIDIA's official test image and send a sample image from within it to the Triton service that was just deployed to Container Apps.

# Replace XXX.YYY.ZZZ.azurecontainerapps.io with the actual FQDN of your app. There is no need to add https://
docker run --rm nvcr.io/nvidia/tritonserver:22.04-py3-sdk /workspace/install/bin/image_client -u XXX.YYY.ZZZ.azurecontainerapps.io -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg

After sending the request, you should see the prediction results, indicating that the deployed Triton server service is functioning correctly.

5. Conclusion

Beyond basic model hosting, Triton Inference Server's greatest strength lies in its ability to efficiently serve AI models at scale. It supports multiple deep learning frameworks, allowing seamless deployment of diverse models within a single infrastructure. With features like dynamic batching, multi-GPU execution, and optimized inference pipelines, Triton ensures high performance while reducing latency. While it may not replace custom-built inference solutions for highly specialized workloads, it excels as a standardized and scalable platform for deploying AI across cloud and edge environments. Its flexibility makes it ideal for applications such as real-time recommendation systems, autonomous systems, and large-scale AI-powered analytics.
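One quick sanity check worth noting before the references: Triton exposes the standard KServe v2 HTTP API on its ingress port, so you can verify the deployment responds before running the full client container. A minimal sketch (the FQDN is a placeholder for your app's):

# Expect HTTP 200 when the server and its models are ready
curl -s -o /dev/null -w "%{http_code}\n" https://XXX.YYY.ZZZ.azurecontainerapps.io/v2/health/ready

# Fetch metadata for the sample model
curl -s https://XXX.YYY.ZZZ.azurecontainerapps.io/v2/models/densenet_onnx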
6. References

Quickstart — NVIDIA Triton Inference Server
Deploying an ONNX Model — NVIDIA Triton Inference Server
Model Repository — NVIDIA Triton Inference Server
Triton Tutorials — NVIDIA Triton Inference Server
Using OpenAI on Azure Web App

TOC
1. Introduction to OpenAI
2. System Architecture
   - Architecture
   - Focus of This Tutorial
3. Setup Azure Resources
   - File and Directory Structure
   - ARM Template
   - ARM Template From Azure Portal
4. Running Locally
   - Training Models and Training Data
   - Predicting with the Model
5. Publishing the Project to Azure
6. Running on Azure Web App
   - Training the Model
   - Using the Model for Prediction
7. Troubleshooting
   - Startup Command Issue
   - App Becomes Unresponsive After a Period
   - az cli command for Linux WebJobs fails
   - Others
8. Conclusion
9. References

1. Introduction to OpenAI

OpenAI is a leading artificial intelligence research and deployment company founded in December 2015. Its mission is to ensure that artificial general intelligence (AGI), highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. OpenAI focuses on developing safe and scalable AI technologies and ensuring equitable access to these innovations.

Known for its groundbreaking advancements in natural language processing, OpenAI has developed models like GPT (Generative Pre-trained Transformer), which powers applications for text generation, summarization, translation, and more. GPT models have revolutionized fields like conversational AI, creative writing, and programming assistance. OpenAI has also released models like Codex, designed to understand and generate computer code, and DALL·E, which creates images from textual descriptions.

OpenAI operates with a unique hybrid structure: a for-profit company governed by a nonprofit entity to balance the development of AI technology with ethical considerations. The organization emphasizes safety, research transparency, and alignment to human values. By providing access to its models through APIs and fostering partnerships, OpenAI empowers developers, businesses, and researchers to leverage AI for innovative solutions across diverse industries. Its long-term goal is to ensure AI advances benefit humanity as a whole.

2. System Architecture

Architecture

Development environment:
- OS: Ubuntu 18.04 Bionic Beaver
- Python version: 3.7.3

Azure resources:
- App Service Plan: SKU - Premium Plan 0 V3
- App Service: Platform - Linux (Python 3.9, Version 3.9.19)
- Storage Account: SKU - General Purpose V2
- File Share: No backup plan

Focus of This Tutorial

This tutorial walks you through the following stages:
- Setting up Azure resources
- Running the project locally
- Publishing the project to Azure
- Running the application on Azure
- Troubleshooting common issues

Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this session is listed below.

- Local OS: Windows / Linux (used here) / Mac
- How to set up Azure resources: Portal (i.e., REST API) / ARM (used here) / Bicep / Terraform
- How to deploy the project to Azure: VS Code / CLI (used here) / Azure DevOps / GitHub Actions

3. Setup Azure Resources

File and Directory Structure

Please open a bash terminal and enter the following commands:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
bash ./openai/tools/add-venv.sh

If you are using a Windows platform, use the following alternative PowerShell commands instead:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
.\openai\tools\add-venv.cmd
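For reference, a script like add-venv.sh typically just creates the two virtual environments and installs their pinned dependencies. The following is a minimal sketch of that idea, not the repository's actual script; the paths follow the directory structure described below:

# Create the Flask app environment
python3 -m venv .venv/openai
.venv/openai/bin/pip install -r openai/requirements.txt

# Create the webjob (embeddings calculation) environment
python3 -m venv .venv/openai-webjob
.venv/openai-webjob/bin/pip install -r openai/webjob/requirements.txt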
After completing the execution, you should see the following directory structure:

- openai/tools/add-venv.*: The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial.
- .venv/openai-webjob/: A virtual environment specifically used for training models (i.e., calculating embedding vectors).
- openai/webjob/requirements.txt: The list of packages (with exact versions) required for the openai-webjob virtual environment.
- .venv/openai/: A virtual environment specifically used for the Flask application, enabling API endpoint access for querying predictions (i.e., suggestions).
- openai/requirements.txt: The list of packages (with exact versions) required for the openai virtual environment.
- openai/: The main folder for this tutorial.
- openai/tools/arm-template.json: The ARM template to set up all the Azure resources related to this tutorial, including an App Service Plan, a Web App, and a Storage Account.
- openai/tools/create-folder.*: A script to create all directories required for this tutorial in the File Share, including train, model, and test.
- openai/tools/download-sample-training-set.*: A script to download a sample training set from News-Headlines-Dataset-For-Sarcasm-Detection, containing headlines data from TheOnion and HuffPost, into the train directory of the File Share.
- openai/webjob/cal_embeddings.py: A script for calculating embedding vectors from headlines. It loads the training set, applies the transformation via the OpenAI API, and saves the embedding vectors in the model directory of the File Share.
- openai/App_Data/jobs/triggered/cal-embeddings/cal_embeddings.sh: A shell script for Azure App Service WebJobs. It activates the openai-webjob virtual environment and starts the cal_embeddings.py script.
- openai/api/app.py: Code for the Flask application, including routes, port configuration, input parsing, vector loading, predictions, and output generation.
- openai/start.sh: A script executed after deployment (as specified in the ARM template startup command; I will introduce it later). It sets up the virtual environment and starts the Flask application to handle web requests.

ARM Template

We need to create the following resources or services:

- App Service Plan: no manual creation required (resource: plan)
- App Service: manual creation required (resource: app)
- Storage Account: manual creation required (resource: storageAccount)
- File Share: manual creation required (service)

Let's take a look at the openai/tools/arm-template.json file. Refer to the configuration section for all the resources. Since most of the configuration values don't require changes, I've placed them in the variables section of the ARM template rather than the parameters section. This helps keep the configuration simpler. However, I'd still like to briefly explain some of the more critical settings. As you can see, I've adopted a camelCase naming convention, which combines the [Resource Type] with [Setting Name and Hierarchy]. This makes it easier to understand where each setting will be used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity.
- storageAccountFileShareName = data-and-model [Purpose 1: Link File Share to Web App] Use this fixed name for the File Share.
- storageAccountFileShareShareQuota = 5120 [Purpose 1: Link File Share to Web App] The value is in GB.
- storageAccountFileShareEnabledProtocols = SMB [Purpose 1: Link File Share to Web App]
- appSiteConfigAzureStorageAccountsType = AzureFiles [Purpose 1: Link File Share to Web App]
- appSiteConfigAzureStorageAccountsProtocol = Smb [Purpose 1: Link File Share to Web App]
- planKind = linux [Purpose 2: Specify platform and stack runtime] Select Linux (the default if the Python stack is chosen).
- planSkuTier = Premium0V3 [Purpose 2: Specify platform and stack runtime] Choose at least a Premium plan to ensure enough memory for your AI workloads.
- planSkuName = P0v3 [Purpose 2: Specify platform and stack runtime] Same as above.
- appKind = app,linux [Purpose 2: Specify platform and stack runtime] Same as above.
- appSiteConfigLinuxFxVersion = PYTHON|3.9 [Purpose 2: Specify platform and stack runtime] Select Python 3.9 to avoid dependency issues.
- appSiteConfigAppSettingsWEBSITES_CONTAINER_START_TIME_LIMIT = 600 [Purpose 3: Deploying] The value is in seconds, ensuring the startup command can continue execution beyond the default timeout of 230 seconds. This tutorial's startup command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages).
- appSiteConfigAppCommandLine = [ -f /home/site/wwwroot/start.sh ] && bash /home/site/wwwroot/start.sh || GUNICORN_CMD_ARGS=\"--timeout 600 --access-logfile '-' --error-logfile '-' -c /opt/startup/gunicorn.conf.py --chdir=/opt/defaultsite\" gunicorn application:app [Purpose 3: Deploying] This is the startup command, which can be broken down into three parts. First ([ -f /home/site/wwwroot/start.sh ]): checks whether start.sh exists; this determines whether the app is in its initial state (just created) or has already been deployed. Second (bash /home/site/wwwroot/start.sh): if the file exists, the app has already been deployed, and the start.sh script is executed, which installs the necessary packages and starts the Flask application (a sketch of such a script follows this list). Third (GUNICORN_CMD_ARGS=... gunicorn application:app): if the file does not exist, the command falls back to the default HTTP server (gunicorn) to start the web app. Since the command is enclosed in double quotes within the ARM template, replace \" with " during actual execution.
- appSiteConfigAppSettingsSCM_DO_BUILD_DURING_DEPLOYMENT = false [Purpose 3: Deploying] Since we have already defined the handling for different virtual environments in start.sh, we do not need to initiate the Web App's default build process.
- appSiteConfigAppSettingsWEBSITES_ENABLE_APP_SERVICE_STORAGE = true [Purpose 4: WebJobs] This setting is required to enable the App Service storage feature, which is necessary for using WebJobs (e.g., for model training).
- storageAccountPropertiesAllowSharedKeyAccess = true [Purpose 5: Troubleshooting] This setting is enabled by default. The reason for highlighting it is that certain enterprise IT policies may enforce changes to this configuration after a period, potentially causing a series of issues. For more details, please refer to the Troubleshooting section below.
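Because the startup command above hands control to start.sh once the app has been deployed, it may help to see what such a script generally looks like. This is a minimal sketch under my own assumptions (the venv location is illustrative); the repository's actual start.sh may differ:

#!/bin/bash
# Set up and activate the Flask virtual environment on start
cd /home/site/wwwroot
python -m venv /home/site/venv        # hypothetical venv location
source /home/site/venv/bin/activate
pip install -r openai/requirements.txt

# Start the Flask app in the foreground (see Troubleshooting below:
# the final command must not daemonize, or the container is considered dead)
python openai/api/app.py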
Return to the bash terminal and execute the following commands (their purpose has been described earlier):

# Please change <ResourceGroupName> to your preferred name, for example: azure-appservice-ai
# Please change <RegionName> to your preferred region, for example: eastus2
# Please change <ResourcesPrefixName> to your preferred naming pattern, for example: openai-arm (it will create openai-arm-asp as the App Service Plan, openai-arm-app for the web app, and openaiarmsa for the Storage Account)
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file ./openai/tools/arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>

If you are using a Windows platform, use the following alternative PowerShell commands instead:

az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file .\openai\tools\arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>

After execution, please copy the output section containing 3 key-value pairs from the result, like this.

Return to the bash terminal and execute the following commands:

# Please set up the 3 variables you've got from the previous step
OUTPUT_STORAGE_NAME="<outputStorageName>"
OUTPUT_STORAGE_KEY="<outputStorageKey>"
OUTPUT_SHARE_NAME="<outputShareName>"

sudo mkdir -p /mnt/$OUTPUT_SHARE_NAME

if [ ! -d "/etc/smbcredentials" ]; then
  sudo mkdir /etc/smbcredentials
fi

CREDENTIALS_FILE="/etc/smbcredentials/$OUTPUT_STORAGE_NAME.cred"
if [ ! -f "$CREDENTIALS_FILE" ]; then
  sudo bash -c "echo \"username=$OUTPUT_STORAGE_NAME\" >> $CREDENTIALS_FILE"
  sudo bash -c "echo \"password=$OUTPUT_STORAGE_KEY\" >> $CREDENTIALS_FILE"
fi

sudo chmod 600 $CREDENTIALS_FILE

sudo bash -c "echo \"//$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME cifs nofail,credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30\" >> /etc/fstab"
sudo mount -t cifs //$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME -o credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30

Or you could simply go to the Azure portal, navigate to the File Share you just created, and refer to the diagram below to copy the required command. You can choose Windows or Mac if you are using such an OS in your dev environment. After executing the command, the network drive will be successfully mounted, and you can use df to verify, as illustrated in the diagram.

ARM Template From Azure Portal

In addition to using az cli to invoke ARM templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example: Click Me. After filling in all the required information, click Create. Once the creation process is complete, click Outputs on the left menu to retrieve the connection information for the File Share.
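Before moving on, you can confirm the share is actually mounted. A quick check, using the variables defined above:

df -h | grep "$OUTPUT_SHARE_NAME"   # the CIFS share should appear with its mount point
ls /mnt/$OUTPUT_SHARE_NAME          # should succeed (and later show train/, model/, and test/)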
4. Running Locally

Training Models and Training Data

In the next steps, you will need to use OpenAI services. Please ensure that you have registered as a member and added credits to your account (Billing overview - OpenAI API); for this example, adding $10 USD will be sufficient. Additionally, you will need to generate a new API key (API keys - OpenAI API); you may also choose to create a project for future organization, depending on your needs (Projects - OpenAI API).

After getting the API key, create a text file named apikey.txt in the openai/tools/ folder. Paste the key you just copied into the file and save it.

Return to the bash terminal and execute the following commands (their purpose has been described earlier):

source .venv/openai-webjob/bin/activate
bash ./openai/tools/create-folder.sh
bash ./openai/tools/download-sample-training-set.sh
python ./openai/webjob/cal_embeddings.py --sampling_ratio 0.002

If you are using a Windows platform, use the following alternative PowerShell commands instead:

.\.venv\openai-webjob\Scripts\Activate.ps1
.\openai\tools\create-folder.cmd
.\openai\tools\download-sample-training-set.cmd
python .\openai\webjob\cal_embeddings.py --sampling_ratio 0.002

After execution, the File Share will now include the following directories and files.

Let's take a brief detour to examine the structure of the training data downloaded from GitHub. The right side of the image explains each field of the data. This dataset was originally used to detect whether news headlines contain sarcasm; however, I am repurposing it for another application. In this example, I will use the "headline" field to create embeddings. The left side displays the raw data, where each line is a standalone JSON string containing the necessary fields.

In the code, I first extract the "headline" field from each record and send it to OpenAI to compute the embedding vector for the text. This embedding represents the position of the text in a semantic space (akin to coordinates in a multi-dimensional space). After the computation, I obtain an embedding vector for each headline. Moving forward, I will refer to these simply as embeddings.

By the way, the sampling_ratio parameter in the command is something I configured to speed up the training process. The original dataset contains nearly 30,000 records, which would result in a training time of around 8 hours. To simplify the tutorial, you can specify a relatively low sampling_ratio value (ranging from 0 to 1, representing 0% to 100% sampling of the original records). For example, a value of 0.01 corresponds to a 1% sample, allowing you to accelerate the experiment.

In this semantic space, vectors that are closer to each other often have similar values, which corresponds to similar meanings. In this context, the distance between vectors will serve as our metric to evaluate the semantic similarity between pieces of text. For this, we will use a method called cosine similarity. In the subsequent tutorial, we will construct some test texts. These test texts will also be converted into embeddings using the same method. Each test embedding will then be compared against the previously computed headline embeddings. The comparison will identify the nearest headline embeddings in the multi-dimensional vector space, and their original text will be returned. Additionally, we will leverage OpenAI's well-known generative AI capabilities to provide a textual explanation describing why the constructed test text is related to the recommended headline.
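For reference, the cosine similarity between two embeddings $u$ and $v$ is the cosine of the angle between them:

$$\mathrm{sim}(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$$

It ranges from -1 to 1; a value closer to 1 means the two texts point in nearly the same direction in the semantic space, i.e., they are semantically similar.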
Predicting with the Model

Return to the terminal and execute the following commands. First, deactivate the virtual environment used for calculating the embeddings, then activate the virtual environment for the Flask application, and finally start the Flask app.

Commands for Linux or Mac:

deactivate
source .venv/openai/bin/activate
python ./openai/api/app.py

Commands for Windows:

deactivate
.\.venv\openai\Scripts\Activate.ps1
python .\openai\api\app.py

When you see a screen similar to the following, it means the server has started successfully. Press Ctrl+C to stop the server if needed.

Before conducting the actual test, let's construct some sample query data:

education

Next, open a terminal and use the following curl command to send a request to the app:

curl -X GET http://127.0.0.1:8000/api/detect?text=education

You should see the calculation results, confirming that the embeddings and the generative AI are working as expected. PS: Your results may differ from mine due to variations in the sampling of your training dataset compared to mine. Additionally, OpenAI's generative content can produce different outputs depending on the timing and context. Please keep this in mind.

5. Publishing the Project to Azure

Return to the terminal and execute the following commands.

Commands for Linux or Mac:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from the project
zip -r openai/app.zip openai/*
# Deploy the app
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai/app.zip --type zip
# Delete the Zip file
rm openai/app.zip

Commands for Windows:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from the project
Compress-Archive -Path openai\* -DestinationPath openai\app.zip
# Deploy the app
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai\app.zip --type zip
# Delete the Zip file
del openai\app.zip

PS: WebJobs follow the directory structure of App_Data/jobs/triggered/<webjob_name>/. As a result, once the Web App is deployed, the WebJob is automatically deployed along with it, requiring no additional configuration.

6. Running on Azure Web App

Training the Model

Return to the terminal and execute the following commands to invoke the WebJob.

Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name> and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name> and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'
You can check the training status by executing the following commands.

Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name> and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name> and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

The job status should move from Processing to Complete.

And you can get the latest detailed log by executing the following commands.

Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name> and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; history_id=$(az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | sed 's|.*/history/||') ; response=$(curl -X GET -H "Authorization: Bearer $token" -H "Content-Type: application/json" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01") ; log_url=$(echo "$response" | jq -r '.properties.output_url') ; curl -X GET -H "Authorization: Bearer $token" "$log_url"

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name> and <webapp_name> to your own
$token = az account get-access-token --resource https://management.azure.com --query accessToken -o tsv ; $history_id = az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | ForEach-Object { ($_ -split "/history/")[-1] } ; $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01" -Headers @{ Authorization = "Bearer $token" } -Method GET ; $log_url = $response.properties.output_url ; Invoke-RestMethod -Uri $log_url -Headers @{ Authorization = "Bearer $token" } -Method GET

Once you see the report in the logs, it indicates that the embeddings calculation is complete, and the Flask app is ready for predictions. You can also find the newly calculated embeddings in the File Share mounted in your local environment.

Using the Model for Prediction

Just like in local testing, open a bash terminal and use the following curl command to send a request to the app:

# Please change <webapp_name> to your own
curl -X GET https://<webapp_name>.azurewebsites.net/api/detect?text=education

As with the local environment, you should see the expected results.

7. Troubleshooting

Startup Command Issue

Symptom: Without any code changes, and when the app was previously functioning, updating the startup command causes the app to stop working.
The related default_docker.log shows multiple attempts to run the container without errors in a short time, but the container does not respond on port 8000, as seen in docker.log.

Cause: Since Linux Web Apps actually run in containers, the final command in the startup command must function like the CMD instruction in a Dockerfile:

CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]

This command must run in the foreground (i.e., not in daemon mode) and cannot exit the process unless manually interrupted.

Resolution: Check the final command in the startup command to ensure it does not run in daemon mode. Alternatively, use the Web SSH interface to execute and verify these commands directly.

App Becomes Unresponsive After a Period

Symptom: An app that runs normally becomes unresponsive after some time. Both the front-end webpage and the Kudu page display an "Application Error," and the deployment log shows "Too many requests." Additionally, the local environment cannot connect to the associated File Share.

Cause: Clicking "diagnostic resources" in the initial error screen provides more detailed error information. In this example, the issue is caused by internal enterprise policies or automations (e.g., enterprise applications) that periodically or randomly scan storage account settings created by employees. If the settings are deemed non-compliant with security standards, they are automatically adjusted. For instance, the allowSharedKeyAccess parameter may be forcibly set to false, preventing both the Web App and the local development environment from connecting to the File Share under the Storage Account. The modification history for such settings can be checked via the Activity Log of the Storage Account (note that only the last 90 days of data are retained).

Resolution: The proper approach is to work offline with the enterprise IT team to coordinate and request the necessary permissions. As a temporary workaround, set the affected settings to Enabled during testing periods and revert them to Disabled afterward. You can find the setting for allowSharedKeyAccess here. Note: Azure Storage Mount currently does not support access via Managed Identity.

az cli command for Linux WebJobs fails

Symptom: You get an "Operation returned an invalid status 'Unauthorized'" message on different platforms, even in Azure Cloud Shell with the latest az version.

Cause: After adding "--debug --verbose" to the command, I can see which REST API the actual error occurred on. For example, I'm using this command (az webapp webjob triggered):

az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app --debug --verbose

This shows the operation is invoked via this API: /Microsoft.Web/sites/{app_name}/triggeredwebjobs (Web Apps - List Triggered Web Jobs). After testing that API directly from the official docs, I still get the same error, which means this preview feature is still under construction and cannot be used currently.

Resolution: I found a related API endpoint via the Azure portal: /Microsoft.Web/sites/{app_name}/webjobs (Web Apps - List Web Jobs). After testing that API directly from the official docs, I can get the trigger list. So I have modified the original command:

az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app

to the following command (please note the differences between the Linux/Mac and Windows commands).
Make sure to replace <subscription_id>, <resourcegroup_name>, and <webapp_name> with your specific values.

Commands for Linux or Mac:

token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

Commands for Windows:

$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

For the "run" command, due to the same issue when invoking the problematic API, I also modified the operation.

Commands for Linux or Mac:

token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

Commands for Windows:

$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'

Others

Using Scikit-learn on Azure Web App

8. Conclusion

Beyond simple embedding vector calculations, OpenAI's most notable strength is generative AI. You can provide instructions to the GPT model through natural language (as a prompt), clearly specifying the format you need in the instruction, and then easily parse the returned content. While PaaS products are not ideal for heavy vector calculations, they are well suited to acting as intermediaries that forward commands to generative AI. These outputs can even be used for various applications, such as patent infringement detection, plagiarism detection in research papers, or trending news analysis. I believe that in the future, we will see more similar applications on Azure Web Apps.

9. References

Overview - OpenAI API
News-Headlines-Dataset-For-Sarcasm-Detection
Quickstart: Deploy a Python (Django, Flask, or FastAPI) web app to Azure - Azure App Service
Configure a custom startup file for Python apps on Azure App Service on Linux - Python on Azure
Mount Azure Storage as a local share - Azure App Service
Deploy to Azure button - Azure Resource Manager
Using Scikit-learn on Azure Web App
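To make the conclusion's point concrete, here is a minimal sketch of prompting a GPT model for a fixed output format and parsing the reply. It calls the public OpenAI chat completions endpoint directly; the model name and prompt are my own illustrative choices, not from the tutorial:

# Ask for a strict JSON answer so the response is trivially parseable
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $(cat openai/tools/apikey.txt)" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Classify this headline as sarcastic or not. Reply only with JSON like {\"sarcastic\": true}: local man wins award"}]
      }' | jq -r '.choices[0].message.content'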
JDConf 2025: Announcing Keynote Speaker and Exciting Sessions on Java, Cloud, and AI

Microsoft JDConf 2025 is rapidly approaching and promises to be the must-attend event for Java developers, particularly those interested in the latest advancements in Java, cloud, and AI. This year, the conference will feature over 22 sessions and more than 10 hours of live-streamed content for a global audience, along with additional on-demand sessions available from April 9 to 10. The spotlight this year is on integrating AI into your development workflow with tools like Copilot, showcasing how these advancements are revolutionizing the coding landscape. Whether you are exploring application modernization, leveraging AI for intelligent apps, or optimizing Java deployments, JDConf has sessions for every interest.

Code the future with AI

- Explore AI-driven Java innovation: Uncover the role of AI in enhancing Java application development on the cloud for greater efficiency and innovation.
- Livestreams for all time zones: Live sessions are scheduled to accommodate attendees from around the globe, ensuring no one misses out.
- Learn from Java experts and innovators: Discover the impact of diversity and open-source innovation in advancing the Java ecosystem.
- Global networking opportunity: Connect with Java professionals and community leaders worldwide to share knowledge and foster community growth.
- Free and accessible content: Enjoy all sessions without cost, available live and on demand for ultimate flexibility.
- Earn rewards: Join the JDConf experience and earn Microsoft Rewards points.

🌟 RSVP now at JDConf.com !! ⭐

This year's list of sessions

Figure 1: Your quick guide to JDConf 2025: a cheat sheet for the keynote and breakout sessions happening across three regions. Do not miss out on planning your perfect conference experience!

Technical keynote: Code the future with Java & AI
Amanda Silver, Microsoft | Josh Long, Broadcom | Lize Raes, Naboo.ai

Join Amanda Silver, CVP and head of product, Microsoft Developer Division, as she takes the stage for the JDConf opening keynote, exploring how Java developers can harness the power of AI, cloud, and cutting-edge tools to accelerate development. From Visual Studio Code and GitHub Copilot to cloud services, Amanda will showcase how cloud and AI are transforming the developer experience, enabling teams to go from code to production faster than ever. She'll also dive into the latest advancements in Java technologies, Microsoft's deep investments in the Java ecosystem, and the company's ongoing commitment to open-source innovation. Don't miss this opportunity to discover how Microsoft is empowering Java developers in the AI era!

Session summaries by region

Americas live stream - April 9, 8:30am-12:30pm PDT

- Spring Boot: Bootiful Spring Boot: A DOGumentary by Josh Long will dive into Spring Boot 3.x and Java 21, exploring AI, modularity, and powerful optimizations like virtual threads, GraalVM, and AppCDS.
- AI Dev Experience: Boosting AI Developer Experience with Quarkus, LangChain4j, and Azure OpenAI by Daniel Oh will demonstrate how this trio streamlines development and powers intelligent apps.
- Spring AI: How to Build Agents with Spring AI by Adib Saikali will showcase building intelligent AI agents, covering key patterns like self-editing memory, task orchestration, and collaborative multi-agent systems.
- Jakarta EE 12: What Comes After Jakarta EE 11? Reza Rahman and Emily Jiang will share the roadmap, contribution pathways, and key updates, including Security, Concurrency, Messaging, and new APIs.
- Deployment: Production Best Practices: Go from Dev to Delivered and Stay There by Mark Heckler will take Java apps from development to production with a focus on CI/CD, containerization, infrastructure as code, and cloud deployment.
- Cloud-native: Java Cloud-Native Shoot-Out: InstantOn vs CRaC vs Native Image by Yee-Kang Chang and Rich Hagarty will compare three emerging Java technologies (Liberty InstantOn, OpenJDK CRaC, and Native Image) to determine which best supports fast start-up times and low resource usage in your cloud-native apps.
- AI-driven testing: Test Smarter, Not Harder: AI-Driven Test Development by Loiane Groner will demo how AI-powered tools like GitHub Copilot enhance TDD through automated test generation and improved test coverage, even for legacy code.

Asia-Pacific live stream - April 10, 10:00am-1:30pm SGT

- LLM integration: Building LLM Apps in Java with LangChain4j and Jakarta EE by Bazlur Rahman and Syed M Shaaf will demonstrate how to integrate large language models (LLMs) into Java apps, including techniques like retrieval-augmented generation (RAG) and embedding databases.
- Java modernization: Modernize Java Apps Using GitHub Copilot Upgrade Assistant for Java by Nick Zhu will show how this tool can help modernize Java apps by automating refactoring, managing dependencies, and resolving version conflicts.
- Automated refactoring: The State of AI in Large Scale Automated Refactoring by Jonathan Schneider will show how OpenRewrite's Lossless Semantic Tree enhances AI-driven refactoring for accurate decision-making.
- Java modernization: Cloud Migration of Java Applications Using Various Tools and Techniques by Yoshio Terada will demo modernizing legacy apps with tools like VS Code, GitHub Copilot, and Azure Migrate.
- Java & AI: AI for Java Developers by Dan Vega will introduce AI for Java developers, covering machine learning, deep learning, and practical AI implementations such as chatbots, recommendation systems, and sentiment analysis.
- Hyperscale PaaS: Spring, Quarkus, Tomcat, JBoss EAP - Hyperscale PaaS for Any Java App by Haixia Cheng and Edward Burns will demo how to deploy any Java app on Azure App Service.
- Buildpacks: Paketo Buildpacks: The Best Way to Build Java Container Images? by Anthony Dahanne and David O'Sullivan will explore the benefits of buildpacks for Java containerization, comparing them with traditional Dockerfile-based approaches.

Europe, Middle East and Africa live stream - April 10, 9:00am-12:30pm GMT

- Java 25: Explore the Hidden Gems of Java 25 with Mohamed Taman as he uncovers key Java SE features, updates, and fixes that will simplify migration to new Java versions and enhance your daily development workflow.
- GitHub Copilot: Use GitHub Copilot in Your Favorite Java IDEs by Julia Kordick and Brian Benz will show how to maximize productivity with GitHub Copilot's latest features in IntelliJ, VS Code, and Eclipse.
- LangChain4j: AI-Powered Development: Hands-On Techniques for Immediate Impact by Lize Raes will explore AI tools like Cursor, Devin, and GitHub Workspace to help developers accelerate workflows and embrace AI-driven coding practices.
- Data and AI: Powering Spring AI with RAG and NoSQL by Theo van Kraay will demo how integrating Cosmos DB as a vector store with Spring AI enables scalable, intelligent, and high-performing apps.
- Spring Security: Passkeys, One-Time Tokens: Passwordless Spring Security by Daniel Garnier-Moiroux dives into the latest passwordless authentication methods in Spring Security with real-world implementation demos.
- Virtual threads: Virtual Threads in Action with Jakarta EE Core Profile by Daniel Kec explores Helidon 4, the first Jakarta EE Core Profile runtime built on a pure virtual-thread-based web server.
- Web apps: Simplifying Web App Development with HTMX and Hypermedia by Frederik Hahne shows how HTMX and modern template engines simplify Java web development by reducing reliance on complex single-page apps.

Register and attend to earn rewards

🚀 Join the JDConf experience and earn Microsoft Rewards! 🚀

The first 300 attendees to check in live for one of the JDConf streams (Americas, Europe, or Asia) will receive 5,000 Microsoft Rewards points.

How to participate:

Attendance rewards: For your check-in to be counted, you will need to do one of the following on the day of the event:
- Go to the JDConf event details page on the Reactor website, sign in with your Microsoft account (top right corner), and then check in on the right-hand side, or
- Click the "Join live stream" link in the confirmation or reminder e-mail sent to the Microsoft account e-mail address you registered with, or
- Click the link in the calendar reminder email; you will find the option to add the event to your calendar in your Microsoft account confirmation email.

Points distribution: Microsoft Rewards points will be added to participants' Microsoft accounts within 60 days following the event. To earn points, you must use an email that is associated with a Microsoft account. You will receive an e-mail from the Microsoft Reactor team if you are eligible and earn the Microsoft Rewards. Points can be used towards many different rewards; check out Microsoft Rewards to see what rewards are available in your region. Terms | Privacy

RSVP now - engage, learn, and code the future with AI!

Do not miss out: RSVP now and be part of the future of Java at JDConf 2025! We are calling all Java enthusiasts and developers around the globe to join us for a two-day event on April 9 and 10. This is more than just a conference; it is a chance to engage with the community, learn from the experts, and help drive Java technology forward. Get ready to dive into deep Java insights, connect with fellow developers, and discover the latest innovations that are shaping the world of Java. Let us gather to celebrate our passion for Java, share knowledge, and explore new possibilities together. Make sure you are there to push the boundaries of what Java can do. RSVP now at JDConf.com and let's make JDConf 2025 a milestone event for Java and its community. See you there! ⭐ RSVP now at JDConf.com 🌟
How to set up a new WordPress website on Azure App Service

WordPress on Azure App Service combines the power of WordPress and Azure App Service to bring you a fully managed, scalable, and performant WordPress hosting solution. Let us learn how to create a new WordPress website on Azure App Service.
Leveraging Azure Container Apps Labels for Environment-based Routing and Feature Testing

Azure Container Apps offers a powerful feature through labels and traffic splitting that can help developers easily manage multiple versions of an app, route traffic based on different environments, and enable controlled feature testing without disrupting live users. In this blog, we'll walk through a practical scenario where we deploy an experimental feature in a staging revision, test it with internal developers, and then switch the feature to production once it's validated. We'll use Azure Container Apps labels and traffic splitting to achieve this seamless deployment process.
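As a taste of the approach, revision labels and traffic weights can be managed from the Azure CLI. The following is a minimal sketch under my own assumptions (the app, resource group, revision, and label names are placeholders, and the containerapp CLI extension must be installed):

# Attach a label to the staging revision
az containerapp revision label add --name <containerapp_name> --resource-group <resourcegroup_name> --label staging --revision <staging_revision_name>

# Split traffic between labels: most users stay on production, a small slice goes to staging
az containerapp ingress traffic set --name <containerapp_name> --resource-group <resourcegroup_name> --label-weight production=90 staging=10

Because each label also gets its own stable labeled URL, internal testers can hit the staging revision directly while public traffic keeps flowing through the weighted split.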