containers
Using NVIDIA Triton Inference Server on Azure Container Apps
TOC

- Introduction to Triton
- System Architecture
  - Architecture
  - Focus of This Tutorial
- Setup Azure Resources
  - File and Directory Structure
  - ARM Template
  - ARM Template From Azure Portal
- Testing Azure Container Apps
- Conclusion
- References

1. Introduction to Triton

Triton Inference Server is an open-source, high-performance inferencing platform developed by NVIDIA to simplify and optimize AI model deployment. Designed for both cloud and edge environments, Triton enables developers to serve models from multiple deep learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, TensorRT, and OpenVINO, using a single standardized interface. Its goal is to streamline AI inferencing while maximizing hardware utilization and scalability.

A key feature of Triton is its support for multiple model execution modes, including dynamic batching, concurrent model execution, and multi-GPU inferencing. These capabilities allow organizations to efficiently serve AI models at scale, reducing latency and optimizing throughput. Triton also offers built-in support for HTTP/REST and gRPC endpoints, making it easy to integrate with various applications and workflows. Additionally, it provides model monitoring, logging, and GPU-accelerated inference optimization, enhancing performance across different hardware architectures.

Triton is widely used in AI-powered applications such as autonomous vehicles, healthcare imaging, natural language processing, and recommendation systems. It integrates seamlessly with NVIDIA AI tools, including TensorRT for high-performance inference and DeepStream for video analytics. By providing a flexible and scalable deployment solution, Triton enables businesses and researchers to bring AI models into production with ease, ensuring efficient and reliable inferencing in real-world applications.

2. System Architecture

Architecture

Development Environment
- OS: Ubuntu 18.04 Bionic Beaver
- Docker version: 26.1.3

Azure Resources
- Storage Account: SKU - General Purpose V2
- Container Apps Environment: SKU - Consumption
- Container App: N/A

Focus of This Tutorial

This tutorial walks you through the following stages:
- Setting up Azure resources
- Publishing the project to Azure
- Testing the application

Each of these stages can be handled with a number of corresponding tools and solutions. The ones relevant to this session are listed below.
- Local OS: Windows / Linux / Mac
- How to set up Azure resources and deploy: Portal (i.e., REST API) / ARM / Bicep / Terraform (the ARM approach is used in this tutorial)

3. Setup Azure Resources

File and Directory Structure

Please open a terminal and enter the following commands:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai

After completing the execution, you should see the following directory structure:

- triton/tools/arm-template.json: The ARM template that sets up all the Azure resources related to this tutorial, including a Container Apps environment, a Container App, and a Storage Account with the sample dataset.

ARM Template

We need to create the following resources or services:

- Container Apps Environment (manual creation required: Yes; type: Resource)
- Container App (manual creation required: Yes; type: Resource)
- Storage Account (manual creation required: Yes; type: Resource)
- Blob (manual creation required: Yes; type: Service)
- Deployment Script (manual creation required: Yes; type: Resource)

Let's take a look at the triton/tools/arm-template.json file. Refer to the configuration section for all the resources. Since most of the configuration values don't require changes, I've placed them in the variables section of the ARM template rather than the parameters section.
This helps keep the configuration simpler. However, I'd still like to briefly explain some of the more critical settings. As you can see, I've adopted a camelCase naming convention that combines the [Resource Type] with the [Setting Name and Hierarchy], which makes it easier to understand where each setting is used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity.

- storageAccountContainerName (value: data-and-model): [Purpose 1: Blob Container for Model Storage] Use this fixed name for the Blob Container.
- scriptPropertiesRetentionInterval (value: P1D): [Purpose 2: Script for Uploading Models to Blob Storage] No adjustments are needed. This script launches a one-time instance immediately after the Blob Container is created; it downloads sample model files and uploads them to the Blob Container. The Deployment Script resource is automatically deleted after one day.
- caeNamePropertiesPublicNetworkAccess (value: Enabled): [Purpose 3: For Testing] ACA requires your local machine to perform tests; therefore, external access must be enabled.
- appPropertiesConfigurationIngressExternal (value: true): [Purpose 3: For Testing] Same as above.
- appPropertiesConfigurationIngressAllowInsecure (value: true): [Purpose 3: For Testing] Same as above.
- appPropertiesConfigurationIngressTargetPort (value: 8000): [Purpose 3: For Testing] The Triton service container listens on port 8000.
- appPropertiesTemplateContainers0Image (value: nvcr.io/nvidia/tritonserver:22.04-py3): [Purpose 3: For Testing] The Triton service container uses this public image.

ARM Template From Azure Portal

In addition to using az cli to invoke ARM templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure Portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example: Click Me. After filling in all the required information, click Create, and we can run a test once the creation process is complete.

4. Testing Azure Container Apps

In our local environment, use the following command to start a one-time Docker container. We will use NVIDIA's official test image and send a sample image from within it to the Triton service that was just deployed to Container Apps.

# Replace XXX.YYY.ZZZ.azurecontainerapps.io with the actual FQDN of your app. There is no need to add https://
docker run --rm nvcr.io/nvidia/tritonserver:22.04-py3-sdk /workspace/install/bin/image_client -u XXX.YYY.ZZZ.azurecontainerapps.io -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg

After sending the request, you should see the prediction results, indicating that the deployed Triton server is functioning correctly.

5. Conclusion

Beyond basic model hosting, Triton Inference Server's greatest strength lies in its ability to efficiently serve AI models at scale. It supports multiple deep learning frameworks, allowing seamless deployment of diverse models within a single infrastructure. With features like dynamic batching, multi-GPU execution, and optimized inference pipelines, Triton ensures high performance while reducing latency. While it may not replace custom-built inference solutions for highly specialized workloads, it excels as a standardized and scalable platform for deploying AI across cloud and edge environments. Its flexibility makes it ideal for applications such as real-time recommendation systems, autonomous systems, and large-scale AI-powered analytics.
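As a lighter-weight follow-up to the image_client test in section 4, Triton also exposes a standard HTTP readiness endpoint that you can probe directly through the Container App ingress. The following is a minimal sketch; the FQDN placeholder is the same one used above and is not a real hostname.

# Returns HTTP status 200 once the Triton server and its loaded models are ready to serve requests
curl -s -o /dev/null -w "%{http_code}\n" https://XXX.YYY.ZZZ.azurecontainerapps.io/v2/health/ready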
6. References

- Quickstart - NVIDIA Triton Inference Server
- Deploying an ONNX Model - NVIDIA Triton Inference Server
- Model Repository - NVIDIA Triton Inference Server
- Triton Tutorials - NVIDIA Triton Inference Server

Reference Architecture for a High Scale Moodle Environment on Azure
Introduction

Moodle is an open-source learning platform that was developed in 1999 by Martin Dougiamas, a computer scientist and educator from Australia. Moodle stands for Modular Object-Oriented Dynamic Learning Environment, and it is written in PHP, a popular web programming language. Moodle aims to provide educators and learners with a flexible and customizable online environment for teaching and learning, where they can create and access courses, activities, resources, and assessments. Moodle also supports collaboration, communication, and feedback among users, as well as various plugins and integrations with other systems and tools.

Moodle is widely used around the world by schools, universities, businesses, and other organizations, with over 100 million registered users and 250,000 registered sites as of 2020. Moodle is also supported by a large and active community of developers, educators, and users, who contribute to its development, documentation, translation, and support. [URL] is the official website of the Moodle project, where anyone can download the software, join the forums, access the documentation, participate in events, and find out more about Moodle.

Goal

The goal of this architecture is to have a Moodle environment that can handle 400k concurrent users and scale its application resources in and out according to usage. Using Azure managed services to minimize operational burden was a design premise, because standard Moodle reference architectures are based on virtual machines, which come with a heavy operational cost.

Challenges

Being a monolithic application, scaling Moodle in a modern cloud-native environment is challenging. We chose Kubernetes as its compute provider because it allows us to build a Moodle artifact in an immutable way, so it can scale out and in quickly and automatically when needed, and recover from potential failures by simply recreating its Deployments, without the need to maintain virtual machine resources. This introduces the concept of pets vs cattle [1] to a scenario that at first glance wouldn't seem feasible.

Since Moodle is written in PHP, it has no concept of database pooling. This creates a scenario where its underlying database is heavily impacted by new client requests, making it necessary to use an external database pooling solution that had to be custom tailored to handle the number of connections for a heavy-traffic setup like this, instead of using Azure Database for PostgreSQL's built-in pgbouncer. The same effect is also observed in its Redis implementation, where a custom Redis cluster had to be created, as using Azure Cache for Redis would incur prohibitive costs due to the way it is set up for more general usage.

1 - https://learn.microsoft.com/en-us/dotnet/architecture/cloud-native/definition#the-cloud

Architecture

This architecture uses Azure managed (PaaS) components to minimize operational burden: Azure Kubernetes Service runs Moodle, an Azure Storage Account hosts course content, Azure Database for PostgreSQL Flexible Server serves as its database, and Azure Front Door exposes the application to the public and caches commonly used assets. The solution also leverages Azure Availability Zones to distribute its components across different zones in the region to optimize availability.

Provisioning the solution

The provisioning has two parts: setting up the infrastructure and the application. The first part uses Terraform to deploy easily.
The second part involves creating Moodle's database, configuring the application for optimal performance based on the templates, number of users, and so on, and installing templates, courses, plugins, etc. The following steps walk you through all the tasks needed to get this done.

Clone the repository

$ git clone https://github.com/Azure-Samples/moodle-high-scale

Provision the infrastructure

$ cd infra/
$ az login
$ az group create --name moodle-high-scale --location <region>
$ terraform init
$ terraform plan -var moodle-environment=production
$ terraform apply -var moodle-environment=production
$ az aks get-credentials --name moodle-high-scale --resource-group moodle-high-scale

Provision the Redis Cluster

$ cd ../manifests/redis-cluster
$ kubectl apply -f redis-configmap.yaml
$ kubectl apply -f redis-cluster.yaml
$ kubectl apply -f redis-service.yaml

Wait for all the replicas to be running, then:

$ ./init.sh

Type 'yes' when prompted.

Deploy Moodle and its services

Change the image in moodle-service.yaml and also adjust the Moodle data storage account name in nfs-pv.yaml (see the commented lines in the files).

$ cd ../../images/moodle
$ az acr build --registry moodlehighscale<suffix> -t moodle:v0.1 --file Dockerfile .
$ cd ../../manifests
$ kubectl apply -f pgbouncer-deployment.yaml
$ kubectl apply -f nfs-pv.yaml
$ kubectl apply -f nfs-pvc.yaml
$ kubectl apply -f moodle-service.yaml
$ kubectl -n moodle get svc --watch

Provision the frontend configuration that will be used to expose Moodle and its assets publicly

$ cd ../frontend
$ terraform init
$ terraform plan
$ terraform apply

Approve the private endpoint connection request from Front Door in the moodle-svc-pls resource: Private Link Services > moodle-svc-pls > Private Endpoint Connections > select the request from Front Door and click Approve.

Install the database

$ kubectl -n moodle exec -it deployment/moodle-deployment -- /bin/bash
$ php /var/www/html/admin/cli/install_database.php --adminuser=admin_user --adminpass=admin_pass --agree-license

Deploy Moodle Cron

Change the image in moodle-cron.yaml.

$ cd ../manifests
$ kubectl apply -f moodle-cron.yaml

Your Moodle installation is now ready to use!

Conclusion

You can create a Moodle environment that is scalable and reliable in minutes with a very simple approach, without having to deal with the operational hassle that normally comes with standard Moodle installations.

Enhancing Security for Azure Container Apps with Aqua Security
Azure Container Apps (ACA) is a developer-first serverless platform that allows you to run containerized workloads at any scale. Being serverless provides inherent security benefits by reducing the attack surface, but it also presents some unique challenges for any security solution. Hence, we're happy to announce that our partner, Aqua, has just certified Azure Container Apps for their suite of security solutions.

Azure Container Apps: Built-In Security Features

Due to its purpose-built nature, ACA offers several built-in security features that help protect your containerized applications:

- Isolation: ACA runs your workload without the need for root access to the underlying host. Additionally, it is trivial, and requires minimal overhead, to isolate different teams in their own environments without the need to painfully cordon off each team via Kubernetes namespaces.
- Network Security: ACA supports virtual network integration, allowing you to control inbound and outbound traffic to your applications on both a per-app basis and for an entire environment at once. Additionally, we provide protection against common layer-7 vulnerabilities such as redirection attacks.
- Managed Identity: ACA integrates with Azure Active Directory, enabling secure access to other Azure services without managing credentials.

While these features provide a solid foundation, securing containerized workloads requires a comprehensive approach that addresses the entire lifecycle of your applications. This is where Aqua's suite of tools excels.

Elevating ACA's Security Posture using Aqua

Aqua Security is a certified security solution for ACA, offering a full-lifecycle approach to securing your containerized applications. Here's how Aqua enhances ACA's security capabilities:

- Supply Chain Security: Aqua scans container images for tampering and potential supply chain attacks, ensuring that only verified and secure images are deployed.
- Comprehensive Image Scanning: Aqua scans container images in Azure Container Registry (ACR) and CI/CD pipelines for vulnerabilities, misconfigurations, malware, and embedded secrets, enabling developers to address issues early.
- Image Assurance Policies: Aqua enforces policies to ensure that only compliant images are deployed, minimizing risks and ensuring adherence to security and compliance standards.
- Agentless Discovery and Scanning: Aqua automatically discovers and scans all running services and assets, providing broad visibility into your ACA workloads.
- Runtime Protection with MicroEnforcer: Aqua's MicroEnforcer provides non-invasive runtime security, detecting and preventing threats such as cryptocurrency mining, reverse shell execution, and unauthorized access.

By leveraging Aqua's security solutions, organizations can confidently meet the most stringent security requirements for their ACA workloads. For more information on how to use Aqua's tooling with ACA, visit the Aqua blog: Securing Azure Container Apps

Leveraging Azure Container Apps Labels for Environment-based Routing and Feature Testing
Azure Container Apps offers a powerful feature through labels and traffic splitting that can help developers easily manage multiple versions of an app, route traffic based on different environments, and enable controlled feature testing without disrupting live users. In this blog, we'll walk through a practical scenario where we deploy an experimental feature in a staging revision, test it with internal developers, and then switch the feature to production once it's validated. We'll use Azure Container Apps labels and traffic splitting to achieve this seamless deployment process.

Introducing Serverless GPUs on Azure Container Apps
We're excited to announce the public preview of Azure Container Apps Serverless GPUs accelerated by NVIDIA. This feature provides customers with NVIDIA A100 GPUs and NVIDIA T4 GPUs in a serverless environment, enabling effortless scaling and flexibility for real-time custom model inferencing and other machine learning tasks.

Serverless GPUs accelerate the speed of your AI development team by allowing you to focus on your core AI code and less on managing infrastructure when using NVIDIA accelerated computing. They provide an excellent middle layer option between Azure AI Model Catalog's serverless APIs and hosting models on managed compute. They offer full data governance, as your data never leaves the boundaries of your container, while still providing a managed, serverless platform from which to build your applications. Serverless GPUs are designed to meet the growing demands of modern applications by providing powerful NVIDIA accelerated computing resources without the need for dedicated infrastructure management.

"Azure Container Apps' serverless GPU offering is a leap forward for AI workloads. Serverless NVIDIA GPUs are well suited for a wide array of AI workloads from real-time inferencing scenarios with custom models to fine-tuning. NVIDIA is also working with Microsoft to bring NVIDIA NIM microservices to Azure Container Apps to optimize AI inference performance." - Dave Salvator, Director, Accelerated Computing Products, NVIDIA

Key benefits of serverless GPUs

- Scale-to-zero GPUs: Support for serverless scaling of NVIDIA A100 and T4 GPUs.
- Per-second billing: Pay only for the GPU compute you use.
- Built-in data governance: Your data never leaves the container boundary.
- Flexible compute options: Choose between NVIDIA A100 and T4 GPUs.
- Middle layer for AI development: Bring your own model on a managed, serverless compute platform.

Scenarios

Whether you choose NVIDIA A100 or T4 GPUs depends on the types of apps you're creating. The following are a couple of example scenarios. In each scenario with serverless GPUs, you pay only for the compute you use with per-second billing, and your apps automatically scale in and out from zero to meet demand.

NVIDIA T4

- Real-time and batch inferencing: Using custom open-source models with fast startup times, automatic scaling, and a per-second billing model, serverless GPUs are ideal for dynamic applications that don't already have a serverless API in the model catalog.

NVIDIA A100

- Compute-intensive machine learning scenarios: Significantly speed up applications that implement fine-tuned custom generative AI models, deep learning, or neural networks.
- High-performance computing (HPC) and data analytics: Applications that require complex calculations or simulations, such as scientific computing and financial modeling, as well as accelerated data processing and analysis across massive datasets.

Get started with serverless GPUs

Serverless GPUs are now available for workload profile environments in the West US 3, Australia East, and Sweden Central regions, with more regions to come. You will need to have quota enabled on your subscription in order to use serverless GPUs. By default, all Microsoft Enterprise Agreement customers will have one quota. If additional quota is needed, please request it here.

Note: In order to achieve the best performance with serverless GPUs, use an Azure Container Registry (ACR) with artifact streaming enabled for your image tag. Follow the steps here to enable artifact streaming on your ACR.
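If you prefer the CLI to the portal flow described next, adding a consumption GPU workload profile to an existing environment might look roughly like the sketch below. The resource group, environment, and profile names are placeholders, and the exact GPU profile type string is an assumption that can vary by region, so list the supported types first.

# List the workload profile types (including GPU types) supported in your target region
az containerapp env workload-profile list-supported --location westus3

# Add a serverless (consumption) GPU workload profile to an existing environment
# "Consumption-GPU-NC8as-T4" is an assumed example type name; use a value returned by the command above
az containerapp env workload-profile add --resource-group <resource-group> --name <environment-name> --workload-profile-name gpu-t4 --workload-profile-type Consumption-GPU-NC8as-T4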
From the portal, you can choose to enable GPUs for your Consumption app in the Container tab when creating your Container App or your Container App Job. You can also add a new consumption GPU workload profile to your existing Container Apps environment through the workload profiles UX in the portal or through the CLI commands for managing workload profiles.

Deploy a sample Stable Diffusion app

To try out serverless GPUs, you can use the Stable Diffusion image which is provided as a quickstart during the container app create experience:

- In the Container tab, select the Use quickstart image box.
- In the quickstart image dropdown, select GPU hello world container.

If you wish to pull the GPU container image into your own ACR to enable artifact streaming for improved performance, or if you wish to manually enter the image, you can find the image at mcr.microsoft.com/k8se/gpu-quickstart:latest. For full steps on using your own image with serverless GPUs, see the tutorial on using serverless GPUs in Azure Container Apps.

Learn more about serverless GPUs

With serverless GPUs, Azure Container Apps now simplifies the development of your AI applications by providing scale-to-zero compute, pay-as-you-go pricing, reduced infrastructure management, and more. To learn more, visit:

- Using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn
- Tutorial: Generate images using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn

New Features in Azure Container Apps VS Code extension
Install VS Code extension

Summary of Major Changes

- New Managed Identity Support for connecting container apps to container registries. This is now the preferred method for securing these resources, provided you have sufficient privileges.
- New Container View: Introduced with several commands for easier editing of container images and environment variables.
- One-Click Deployment: Deploy to Container App... added to the top-level container app node. This supports deployments from a workspace project or container registry. To manage multiple applications in a workspace project or enable faster deployments with saved settings, use Deploy Project from Workspace. It can be accessed via the workspace view.
- Improved Activity Log Output: All major commands now include improved activity log outputs, making it easier to track and manage your activities.
- Quickstart Image for Container App Creation: The "Create container app..." command now initializes with a quickstart image, simplifying the setup process.

New Commands and Enhancements

- Managed Identity support for new connections to container registries.
- New command Deploy to Container App... found on the container app item. This one-click deploy command allows deploying from a workspace project or container registry while in single revision mode.
- New Container view under the container app item allows direct access to the container's image and environment variables.
- New command Edit Container Image... allows editing of container images without prompting to update environment variables.
- Environment Variable CRUD Commands: Multiple new commands for creating, reading, updating, and deleting environment variables.
- Convert Environment Variable to Secret: Quickly turn an environment variable into a container app secret with this new command.

Changes and Improvements

- Command Create Container App... now always starts with a quickstart image.
- Renamed the Update Container Image... command to Edit Container.... This command is now found on the container item.
- When running Deploy Project from Workspace..., if remote environment variables conflict with saved settings, prompt for update. Added a new envPath option, useRemoteConfiguration.
- Deploying an image with the Docker extension now allows targeting specific revisions and containers.
- When deploying a new image to a container app, only show the ingress prompt when more than the image tag is changed.
- Improved ACR selection dropdowns, providing better pick recommendations and sorting by resource group.
- Improved activity log outputs for major commands.
- Changed the draft deploy prompt to be a quick pick instead of a pop-up window.

We hope these new features and improvements will simplify deployments and make your Azure Container Apps experience even better. Stay tuned for more updates, and as always, we appreciate your feedback! Try out these new features today and let us know what you think! Your feedback is invaluable in helping us continue to improve and innovate.

Azure Container Apps VS Code Extension

Full changelog:

PhantomJS PDF Generation on Azure Linux App Services
Local setup along with source code contents

Create a sample Express application using the following commands, or refer to this link for a sample Express application: https://expressjs.com/en/starter/hello-world.html

Please follow these instructions on your local machine to create an Express application using the PhantomJS library.

- Create a new project directory:

mkdir phanthomJS
cd phanthomJS

- Initialize the project: Run "npm init -y" to create a package.json file. The '-y' flag automatically sets default values.

npm init -y

- Update the "package.json" with the following contents:

{
  "name": "express-pdf-generator",
  "version": "1.0.0",
  "description": "An Express application to generate PDFs from dynamic HTML templates.",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "keywords": ["express", "pdf", "dynamic-html-pdf", "nodejs"],
  "author": "Your Name",
  "license": "MIT",
  "dependencies": {
    "dynamic-html-pdf": "^1.0.4",
    "express": "^4.21.2"
  }
}

- Edit "index.js" with the following contents. This sets up an Express application for PDF generation using the PhantomJS library.

const express = require('express');
const path = require('path');
const fs = require('fs');
const pdf = require('dynamic-html-pdf');

const app = express();
const port = 3000;

app.get('/generate-pdf', async (req, res) => {
  // Example HTML template (you can replace this with your actual HTML)
  const htmlContent = `
    <html>
      <head><title>Sample PDF</title></head>
      <body>
        <h1>Users List</h1>
        <ul>
          {{#data}}
          <li>{{name}}</li>
          {{/data}}
        </ul>
      </body>
    </html>
  `;

  // Example data (replace with your actual user data)
  const users = [
    { name: 'John Doe' },
    { name: 'Jane Smith' },
    { name: 'Alice Johnson' }
  ];

  // Define the options for the PDF generation (A4, portrait orientation)
  const options = {
    format: "A4",
    orientation: "portrait",
  };

  // Define the document structure, including the template and context
  const document = {
    type: 'buffer', // 'file' or 'buffer' (buffer will send the result directly)
    template: htmlContent, // The HTML template to render the PDF
    context: {
      data: users // Users data to pass into the template
    },
    path: "./output.pdf" // Optional: Path where the PDF will be saved
  };

  try {
    // Generate the PDF as a buffer
    const pdfBuffer = await pdf.create(document, options);

    // Save the generated PDF buffer to a file
    const outputPath = path.join(__dirname, 'output.pdf');
    fs.writeFileSync(outputPath, pdfBuffer);

    // Respond with the PDF file as a download
    res.download(outputPath, 'output.pdf', (err) => {
      if (err) {
        console.error('Error sending file:', err);
      } else {
        console.log('PDF sent to client successfully!');
      }
    });
  } catch (err) {
    console.error('Error generating PDF:', err);
    res.status(500).send('Error generating PDF');
  }
});

app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});

- Execute the following commands to install dependencies and run the server:

npm install
npm start

- The application should start without any issues and produce the expected output.
- Following the package installation, you will observe the package-lock.json file and the node_modules folder in the project directory.
- Now, open your browser and access the API http://localhost:3000/generate-pdf
- We are able to generate the PDF file locally without any issues. Let's proceed with deploying to Azure App Service for Linux.

App Service Node.js Environment Setup

To get started in Azure, please proceed with setting up a Linux App Service using Node.js 18 or a higher version.
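If you prefer to create the web app from the CLI instead of the portal, a minimal sketch might look like the following. The resource group, plan, and app names are placeholders, and the exact runtime string can vary by CLI version, so confirm it with az webapp list-runtimes first.

# List available Linux runtimes to confirm the exact Node.js runtime string for your CLI version
az webapp list-runtimes --os linux

# Create a resource group, a Linux App Service plan, and a web app running Node.js 20 LTS (names are placeholders)
az group create --name phantomjs-demo-rg --location eastus
az appservice plan create --name phantomjs-demo-plan --resource-group phantomjs-demo-rg --is-linux --sku B1
az webapp create --name <your-app-name> --resource-group phantomjs-demo-rg --plan phantomjs-demo-plan --runtime "NODE:20-lts"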
For this specific example, we will be utilizing Node.js 20 LTS. Ensure that you have the necessary permissions on the subscription to facilitate this setup. If you encounter any issues during the setup process, refer to the official documentation: https://learn.microsoft.com/en-us/azure/app-service/quickstart-nodejs?tabs=linux&pivots=development-environment-azure-portal

- After provisioning the App Service, please enable App Service Logs for debugging.
- You may utilize any of the available deployment methods to deploy the code to the App Service.
- Since the application is configured to listen on port 3000, please add an app setting with PORT=3000 under Environment Variables.
- Upon accessing the API "https://appservicename.azurewebsites.net/generate-pdf", you will encounter an error in generating the PDF file.
- Please navigate to the Kudu "/newui" portal to view default_docker.log (application logs): https://<AppServiceName>.scm.azurewebsites.net/newui/fileManager#

2025-01-20T09:51:05.196502989Z _____
2025-01-20T09:51:05.196799610Z / _ \ __________ _________ ____
2025-01-20T09:51:05.196809910Z / /_\ \\___ / | \_ __ \_/ __ \
2025-01-20T09:51:05.196815111Z / | \/ /| | /| | \/\ ___/
2025-01-20T09:51:05.196819411Z \____|__ /_____ \____/ |__| \___ >
2025-01-20T09:51:05.196824211Z \/ \/ \/
2025-01-20T09:51:05.196829112Z A P P S E R V I C E O N L I N U X
2025-01-20T09:51:05.196833712Z
2025-01-20T09:51:05.196838312Z Documentation: http://aka.ms/webapp-linux
2025-01-20T09:51:05.196842913Z NodeJS quickstart: https://aka.ms/node-qs
2025-01-20T09:51:05.196847613Z NodeJS Version : v20.15.1
2025-01-20T09:51:05.196852113Z Note: Any data outside '/home' is not persisted
2025-01-20T09:51:05.196856814Z
2025-01-20T09:51:06.463155052Z Starting OpenBSD Secure Shell server: sshd.
2025-01-20T09:51:06.475714842Z WEBSITES_INCLUDE_CLOUD_CERTS is not set to true.
2025-01-20T09:51:06.534554812Z Starting periodic command scheduler: cron.
2025-01-20T09:51:06.619867957Z Could not find build manifest file at '/home/site/wwwroot/oryx-manifest.toml'
2025-01-20T09:51:06.619970764Z Could not find operation ID in manifest. Generating an operation id...
2025-01-20T09:51:06.620649212Z Build Operation ID: 0c834bdc-1dec-4f5f-b587-361501aa219e
2025-01-20T09:51:06.738420658Z Environment Variables for Application Insight's IPA Codeless Configuration exists..
2025-01-20T09:51:06.741196654Z Writing output script to '/opt/startup/startup.sh'
2025-01-20T09:51:06.750827537Z Running #!/bin/sh
2025-01-20T09:51:06.750849338Z
2025-01-20T09:51:06.750856039Z # Enter the source directory to make sure the script runs where the user expects
2025-01-20T09:51:06.750861539Z cd "/home/site/wwwroot"
2025-01-20T09:51:06.750866839Z
2025-01-20T09:51:06.750918843Z export NODE_PATH=/usr/local/lib/node_modules:$NODE_PATH
2025-01-20T09:51:06.750924744Z if [ -z "$PORT" ]; then
2025-01-20T09:51:06.750930144Z export PORT=8080
2025-01-20T09:51:06.750935544Z fi
2025-01-20T09:51:06.750940645Z
2025-01-20T09:51:06.750945745Z npm start
2025-01-20T09:51:08.374110801Z npm info using npm@10.7.0
2025-01-20T09:51:08.375006866Z npm info using node@v20.15.1
2025-01-20T09:51:09.168578521Z
2025-01-20T09:51:09.168619324Z > express-pdf-generator@1.0.0 start
2025-01-20T09:51:09.168626425Z > node index.js
2025-01-20T09:51:09.168631825Z
2025-01-20T09:51:10.868109687Z html-pdf: Failed to load PhantomJS module.
Error: Cannot find module 'phantomjs-prebuilt'
2025-01-20T09:51:10.868160791Z Require stack:
2025-01-20T09:51:10.868167491Z - /home/site/wwwroot/node_modules/html-pdf/lib/pdf.js
2025-01-20T09:51:10.868172892Z - /home/site/wwwroot/node_modules/html-pdf/lib/index.js
2025-01-20T09:51:10.868177892Z - /home/site/wwwroot/node_modules/dynamic-html-pdf/index.js
2025-01-20T09:51:10.868183493Z - /home/site/wwwroot/index.js
2025-01-20T09:51:10.868188793Z at Module._resolveFilename (node:internal/modules/cjs/loader:1145:15)
2025-01-20T09:51:10.868194093Z at Module._load (node:internal/modules/cjs/loader:986:27)
2025-01-20T09:51:10.868199394Z at Module.require (node:internal/modules/cjs/loader:1233:19)
2025-01-20T09:51:10.868204594Z at Module.patchedRequire [as require] (/agents/nodejs/node_modules/diagnostic-channel/dist/src/patchRequire.js:16:46)
2025-01-20T09:51:10.868209895Z at require (node:internal/modules/helpers:179:18)
2025-01-20T09:51:10.868215195Z at Object.<anonymous> (/home/site/wwwroot/node_modules/html-pdf/lib/pdf.js:7:19)
2025-01-20T09:51:10.868220995Z at Module._compile (node:internal/modules/cjs/loader:1358:14)
2025-01-20T09:51:10.868226196Z at Module._extensions..js (node:internal/modules/cjs/loader:1416:10)
2025-01-20T09:51:10.868231196Z at Module.load (node:internal/modules/cjs/loader:1208:32)
2025-01-20T09:51:10.868236196Z at Module._load (node:internal/modules/cjs/loader:1024:12) {
2025-01-20T09:51:10.868241297Z code: 'MODULE_NOT_FOUND',
2025-01-20T09:51:10.868246197Z requireStack: [
2025-01-20T09:51:10.868251298Z '/home/site/wwwroot/node_modules/html-pdf/lib/pdf.js',
2025-01-20T09:51:10.868256598Z '/home/site/wwwroot/node_modules/html-pdf/lib/index.js',
2025-01-20T09:51:10.868261398Z '/home/site/wwwroot/node_modules/dynamic-html-pdf/index.js',
2025-01-20T09:51:10.868266199Z '/home/site/wwwroot/index.js'
2025-01-20T09:51:10.868270799Z ]
2025-01-20T09:51:10.868275699Z }
2025-01-20T09:51:10.884502480Z Server running on http://localhost:3000
2025-01-20T09:51:39.369462936Z Error generating PDF: AssertionError [ERR_ASSERTION]: html-pdf: Failed to load PhantomJS module.
You have to set the path to the PhantomJS binary using 'options.phantomPath'
2025-01-20T09:51:39.369543943Z at new PDF (/home/site/wwwroot/node_modules/html-pdf/lib/pdf.js:40:3)
2025-01-20T09:51:39.369551843Z at Object.createPdf [as create] (/home/site/wwwroot/node_modules/html-pdf/lib/index.js:10:14)
2025-01-20T09:51:39.369557744Z at /home/site/wwwroot/node_modules/dynamic-html-pdf/index.js:36:30
2025-01-20T09:51:39.369563444Z at new Promise (<anonymous>)
2025-01-20T09:51:39.369579946Z at module.exports.create (/home/site/wwwroot/node_modules/dynamic-html-pdf/index.js:25:12)
2025-01-20T09:51:39.369586346Z at /home/site/wwwroot/index.js:48:37
2025-01-20T09:51:39.369591947Z at Layer.handle [as handle_request] (/home/site/wwwroot/node_modules/express/lib/router/layer.js:95:5)
2025-01-20T09:51:39.369608948Z at next (/home/site/wwwroot/node_modules/express/lib/router/route.js:149:13)
2025-01-20T09:51:39.369614948Z at Route.dispatch (/home/site/wwwroot/node_modules/express/lib/router/route.js:119:3)
2025-01-20T09:51:39.369620549Z at Layer.handle [as handle_request] (/home/site/wwwroot/node_modules/express/lib/router/layer.js:95:5) {
2025-01-20T09:51:39.369626249Z generatedMessage: false,
2025-01-20T09:51:39.369631650Z code: 'ERR_ASSERTION',
2025-01-20T09:51:39.369636950Z actual: undefined,
2025-01-20T09:51:39.369642151Z expected: true,
2025-01-20T09:51:39.369647351Z operator: '=='
2025-01-20T09:51:39.369652852Z }

Basically, the application is failing with a dependency error:

Error generating PDF: AssertionError [ERR_ASSERTION]: html-pdf: Failed to load PhantomJS module. You have to set the path to the PhantomJS binary using 'options.phantomPath'

- To resolve this error, we need to install the html-pdf dependency globally and then create a symbolic link between the globally installed html-pdf package and the current project's node_modules folder. The following startup.sh file helps to achieve this:

#!/bin/sh
cd /home/site/wwwroot
apt-get update
apt-get -y install libfontconfig1
npm install html-pdf -g
npm install dynamic-html-pdf
npm link html-pdf
apt-get install bzip2
npm link phantomjs-prebuilt
npm start

- Now, place the startup.sh file with the above contents in "/home" and update the Startup Command to "/home/startup.sh". The App Service will then undergo an automatic restart due to the update in the startup command.
- Test the setup by calling the API endpoint of the App Service, https://appservicename.azurewebsites.net/generate-pdf, and you will see that it generates a PDF file.

**Deprecation Notice**

Be aware that PhantomJS relies on some deprecated packages. Consider upgrading to the latest versions or alternative tools like Puppeteer.

npm warn deprecated har-validator@5.1.5: this library is no longer supported
npm warn deprecated phantomjs-prebuilt@2.1.16: this package is now deprecated
npm warn deprecated uuid@3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
npm warn deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm info run phantomjs-prebuilt@2.1.16 install node_modules/phantomjs-prebuilt node install.js
npm info run phantomjs-prebuilt@2.1.16 install { code: 0, signal: null }
npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/advisories/bulk 110ms
npm warn deprecated html-pdf@3.0.1: Please migrate your projects to a newer library like puppeteer
npm info run phantomjs-prebuilt@2.1.16 install ../usr/local/lib/node_modules/phantomjs-prebuilt node install.js
npm info run phantomjs-prebuilt@2.1.16 install { code: 0, signal: null }

Reference articles

- https://github.com/marcbachmann/node-html-pdf/issues/677
- https://github.com/marcbachmann/node-html-pdf/issues/437#issuecomment-467463285
- https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#unsupported-frameworks

Hope this information is helpful :)

Getting started with Azure Fleet Manager
Why?

- A solution to manage multiple Azure Kubernetes Service (AKS) clusters at scale.
- A secure and compliant solution to streamline operations and maintenance, improve performance, and ensure efficient resource utilization.
- Addresses the challenges of multi-cluster scenarios:
  - orchestrating cluster updates
  - propagating Kubernetes resources
  - balancing multi-cluster loads

Pointers

- Orchestrates application updates and upgrades across multiple clusters.
- Deploys certain Kubernetes objects to all clusters or to a chosen set of clusters.
- Clusters can be in the same subscription, in different subscriptions, or even in different regions, but they must be under the same tenant.
- Upgrade group: updates are applied in parallel.
- Make sure member clusters are in the Running state before joining them to the Fleet.
- There are two Fleet options available: with a hub and without a hub.
- Supports private clusters.
- CRP (ClusterResourcePlacement) is a cluster-scoped resource.

How?

Run the commands below with the Azure CLI.

Step 1: Add the fleet extension

az extension add -n fleet

Step 2: Create a Fleet Manager

az fleet create --resource-group <name of the resource group> --name <name of the fleet> --location <region> --enable-hub --enable-private-cluster --enable-managed-identity --agent-subnet-id <subnet ID> --vm-size <vm size>

Step 3: An AKS cluster with 1 node gets created. Get into this hub cluster

az fleet get-credentials --resource-group <name of the resource group> --name <name of the fleet>

Step 4: Add and view member clusters

az fleet member create --resource-group <name of the resource group> --fleet-name <name of the fleet> --name <name of the member cluster> --member-cluster-id <resource ID of the member cluster>
az fleet member list --resource-group <name of the resource group> --fleet-name <name of the fleet> -o table

Sample CRP deployment

kubectl label membercluster <name of the member cluster> <name of the label>=<value of the label> --overwrite

apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: crp-asd-prod
spec:
  policy:
    placementType: PickAll
    affinity:
      clusterAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  crp: prod
  resourceSelectors:
    - group: ""
      kind: Namespace
      name: dev
      version: v1
    - group: ""
      kind: Namespace
      name: qa
      version: v1

Utilization Best Practices

- Plan to integrate with your DevOps platform to manage the dynamic nature of workloads. The integration is extremely helpful and easy for dev teams to adopt, as deployments are synced across the other clusters.
- The service is a viable candidate for multi-cluster management, disaster recovery, and migration strategy.

Happy Learning!

Reference link: fleet/docs at main · Azure/fleet · GitHub
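To confirm that the sample ClusterResourcePlacement above was applied and that the selected namespaces were placed on the labeled member clusters, a quick check from the hub cluster might look like this. The CRP name follows the sample above, and the exact output columns can vary by Fleet API version.

# Run against the hub cluster's kubeconfig (obtained via az fleet get-credentials)
kubectl get clusterresourceplacement crp-asd-prod

# Inspect scheduling decisions and per-cluster placement status in the resource's conditions
kubectl describe clusterresourceplacement crp-asd-prod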