Need inspirations? Real AI Apps stories by Azure customers to help you get started
In this blog, we present a collection of authentic stories from real Azure customers. You will read about how AI-empowered applications are transforming enterprises and the many ways organizations choose to modernize their software, craft innovative experiences, and unlock new revenue streams. We hope these stories inspire you to embark on your own Azure AI journey. Before we begin, be sure to bookmark the newly unveiled Plan on Microsoft Learn—designed for developers and technical managers—to deepen your expertise on this subject. Inspiration #1: Transform customer service Intelligent apps today can offer a self-service natural language chat interface for customers to resolve service issues faster. They can route and divert calls, allowing agents to focus on the most complex cases. These solutions also enable customer service agents to quickly access contextual summaries of prior interactions, offer real-time recommendations, and generally enhance customer service productivity by automating repetitive tasks, such as logging interaction summaries. Prominent use cases across industries include self-service chatbots, real-time guidance for agents during customer engagements, analysis and coaching of agents after each interaction, and automated summarization of customer dialogues. Below is a sample architecture for airline customer service and support. Azure Database for PostgreSQL provides data storage, and Azure Kubernetes Service hosts the web UI and integrates with other components. In addition, this app uses RAG, with Azure AI Search as the retrieval system and Azure OpenAI Service providing LLM capabilities, allowing customer service agents and customers to ask questions using natural language. Air India, the nation’s flagship carrier, updated its existing virtual assistant’s core natural language processing engine to the latest GPT models, using Azure OpenAI Service. 
The new AI-based virtual assistant handles 97% of queries with full automation and saves millions of dollars in customer support costs. "We are on this mission of building a world-class airline with an Indian heart. To accomplish that goal, we are becoming an AI-infused company, and our collaboration with Microsoft is making that happen.” — Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India In this customer case, the Azure-powered AI platform also supports Air India customers in other innovative ways. Travelers can save time by scanning visas and passports during web check-in, and then scan baggage tags to track their bags throughout their journeys. The platform’s voice recognition also enables analysis of live contact center conversations for quality assurance, training, and improvement. Inspiration #2: Personalize customer experience Organizations can now use AI models to present personalized content, products, or services to users based on multimodal user inputs from text, images, and speech, grounded in a deep understanding of their customer profiles. Common solutions we have seen include conversational shopping interfaces, image search for products, product recommenders, and customized content delivery for each customer. In these cases, product discovery is improved through semantic search over data, and as a result, personalized search and discovery improve engagement, customer satisfaction, and retention. Three areas are critical to consider when implementing such solutions. First, your development team should examine the ability to integrate multiple data types (e.g., user profiles, real-time inventory data, store sales data, and social data). Second, during testing, ensure that pre-trained AI models can handle multimodal inputs and can learn from user data to deliver personalized results. Lastly, your cloud administrator should implement scalability measures to meet variable user demands. 
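The semantic product-discovery idea above can be sketched with a toy example. This is an illustrative sketch, not any customer's actual implementation: in practice the vectors would come from an embedding model (for example, an Azure OpenAI embeddings deployment), while here they are hand-made three-dimensional stand-ins.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_products(query_vec, catalog):
    """Return product names sorted from most to least similar to the query embedding."""
    scored = [(cosine_similarity(query_vec, vec), name) for name, vec in catalog]
    return [name for _, name in sorted(scored, reverse=True)]

# Toy 3-dimensional "embeddings" standing in for real embedding-model output.
catalog = [
    ("denim jacket",  [0.9, 0.1, 0.0]),
    ("summer dress",  [0.1, 0.9, 0.2]),
    ("running shoes", [0.0, 0.2, 0.9]),
]
query = [0.85, 0.15, 0.05]  # e.g. the embedded shopper query "jean jacket"
print(rank_products(query, catalog))  # "denim jacket" ranks first
```

The same ranking step generalizes to image or speech inputs: once every item and query is reduced to a vector, personalization becomes a nearest-neighbor search, which services like Azure AI Search perform at scale.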
ASOS, a global online fashion retailer, leveraged Azure AI Foundry to revolutionize its customer experience by creating an AI-powered virtual stylist that could engage with customers and help them discover new trends. "Having a conversational interface option gets us closer to our goals of fully engaging the customer and personalizing their experience by showing them the most relevant products at the most relevant time.” — Cliff Cohen, Chief Technology Officer, ASOS In this customer case, Azure AI Foundry enabled ASOS to rapidly develop and deploy their intelligent app, integrating natural language processing and computer vision capabilities. This solution takes advantage of Azure’s ability to support cutting-edge AI applications in the retail sector, driving business growth and customer satisfaction. Inspiration #3: Accelerate product innovation Building customer-facing custom copilots promises to provide enhanced services to your customers. This is typically achieved by using AI to provide data-driven insights that facilitate personalized or unique customer interactions and to give customers access to a wider range of information, while improving search queries and making data more accessible. You can check out a sample architecture for building your copilot below. DocuSign, a leader in e-signature solutions with 1.6 million global customers, pioneered an entirely new category of agreement management designed to streamline workflows and created Docusign Intelligent Agreement Management (IAM). The IAM platform uses a sophisticated multi-database architecture to efficiently manage various aspects of agreement processing and management. At the heart of the IAM platform is Azure AI, which automates manual tasks and processes agreements using machine learning models. 
"We needed to transform how businesses worked with a new platform. With Docusign Intelligent Agreement Management, built with Microsoft Azure, we help our customers create, commit to, manage, and act on agreements in real-time.” — Kunal Mukerjee, VP, Technology Strategy and Architecture, Docusign The workflow begins with agreement data stored in an Azure SQL Database; the data is then transferred through an ingestion pipeline to Navigator, an intelligent agreements repository. In addition, the Azure SQL Database Hyperscale service tier serves as the primary transactional engine, providing virtually unlimited storage capacity and the ability to scale compute and storage resources independently. Inspiration #4: Optimize employee workflows With AI-powered apps, businesses can organize unstructured data to streamline document and information management, leverage natural language processing to create a conversational search experience for employees, provide more contextual information to increase workplace productivity, and summarize data for further analysis. Increasingly, we have seen solutions such as employee chatbots for HR, professional services assistants (legal/tax/audit), analytics and reporting agents, contact center agent assistants, and employee self-service and knowledge management (IT) centers. It’s essential to note that adequate prompt engineering training can improve employee queries; your team should also examine the capability of integrating a copilot with other internal workloads; and lastly, make sure your organization implements continuous innovation and delivery mechanisms to support new internal resources and optimize chatbot dialogs. 
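To make the conversational-search idea concrete, here is a deliberately tiny sketch of retrieving the best-matching knowledge-base snippet for an employee question. A real solution would use semantic retrieval such as Azure AI Search; this toy version scores documents by keyword overlap, and all snippet text is invented for illustration.

```python
def score(query, document):
    """Crude relevance score: fraction of query words that appear in the document."""
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words) / len(q_words)

def top_answer(query, knowledge_base):
    """Return the knowledge-base snippet that best matches an employee question."""
    return max(knowledge_base, key=lambda doc: score(query, doc))

# Invented HR/IT snippets standing in for an indexed internal knowledge base.
kb = [
    "Vacation requests are submitted through the HR portal and approved by your manager.",
    "Expense reports must be filed within 30 days of the purchase date.",
    "The IT help desk resets passwords within one business day.",
]
print(top_answer("how do I submit a vacation request", kb))
```

An LLM would then rewrite the retrieved snippet into a conversational answer; the retrieval step shown here is what grounds that answer in company policy rather than the model's general knowledge.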
Improving the lives of clinicians and patients Medigold Health, one of the United Kingdom’s leading occupational health service providers, migrated applications to Azure OpenAI Service, with Azure Cosmos DB for logging and Azure SQL Database for data storage, automating clinician processes, including report generation, and leading to a 58% rise in clinician retention and greater job satisfaction. With Azure App Service, Medigold Health was also able to quickly and efficiently deploy and manage web applications, enhancing the company’s ability to respond to client and clinician needs. "We knew with Microsoft and moving our AI workloads to Azure, we’d get the expert support, plus scalability, security, performance, and resource optimization we needed.” — Alex Goldsmith, CEO, Medigold Health Inspiration #5: Prevent fraud and detect anomalies Increasingly, organizations leverage AI to identify suspicious financial transactions, false account chargebacks, fraudulent insurance claims, digital theft, unauthorized account access or account takeover, network intrusions or malware attacks, and false product or content reviews. If your company can use similar designs, take a glance at a sample architecture for building an interactive fraud analysis app below. Transactional data is stored in Azure Cosmos DB and made available for analytics in real time (HTAP) using Synapse Link. All other financial transactions, such as stock trading data, claims, and other documents, are integrated with Microsoft Fabric using Azure Data Factory. This setup allows analysts to see real-time fraud alerts on a custom dashboard. The generative AI component uses RAG, with Azure OpenAI Service as the LLM and Azure AI Search as the retrieval system. 
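As a minimal illustration of the anomaly-detection idea (not any vendor's actual method), a simple statistical screen can flag transactions that sit far from the mean; production systems layer ML models and business rules on top of this kind of signal.

```python
import statistics

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)  # population std dev over the window
    if stdev == 0:
        return []  # all values identical: nothing stands out
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Typical card transactions for one account, with one obvious outlier.
transactions = [25.0, 40.0, 31.0, 28.0, 35.0, 22.0, 30.0, 5000.0]
print(flag_anomalies(transactions))  # [5000.0]
```

In a streaming architecture like the one above, a screen of this kind would run over a rolling window per account, with flagged transactions pushed to the analysts' real-time dashboard for review.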
Fighting financial crimes in the gaming world Kinectify, an anti-money laundering (AML) risk management technology company, built its scalable, robust, Microsoft Azure-powered AML platform with a seamless combination of Azure Cosmos DB, Azure AI Services, Azure Kubernetes Service, and the broader capabilities of Azure cloud services. "We needed to choose a platform that provided best-in-class security and compliance due to the sensitive data we require and one that also offered best-in-class services as we didn’t want to be an infrastructure hosting company. We chose Azure because of its scalability, security, and the immense support it offers in terms of infrastructure management.” — Michael Calvin, CTO, Kinectify With the new solutions in place, Kinectify detects 43% more suspicious activities, achieves 96% faster decisions, and continues to handle a high volume of transactions reliably while identifying patterns, anomalies, and suspicious activity. Inspiration #6: Unlock organizational knowledge We have seen companies build intelligent apps to surface insights from vast amounts of data and make them accessible through natural language interactions. Teams can analyze conversations for keywords to spot trends and better understand their customers. Common use cases include knowledge extraction and organization, trend and sentiment analysis, content summarization and curation, automated reports, and research generation. Below is a sample architecture for enterprise search and knowledge mining. H&R Block, the trusted tax preparation company, envisioned using generative AI to create an easy, seamless process that answers filers’ tax questions, maintains safeguards to ensure accuracy, and minimizes the time to file. 
Valuing Microsoft’s leadership in security and AI and the longstanding collaboration between the two companies, H&R Block selected Azure AI Foundry and Azure OpenAI Service to build a new solution on the H&R Block platform to provide real-time, reliable tax filing assistance. By building an intelligent app that automates the extraction of key data from tax documents, H&R Block reduced the time and manual effort involved in document handling. The AI-driven solution significantly increased accuracy while speeding up the overall tax preparation process. "We conduct about 25 percent of our annual business in a matter of days.” — Aditya Thadani, Vice President, H&R Block Through Azure’s intelligent services, H&R Block modernized its operations, improving both productivity and client service and classifying more than 30 million tax documents a year. The solution has allowed the company to handle more clients with greater efficiency, providing a faster, more accurate tax filing experience. Inspiration #7: Automate document processing Document intelligence through AI applications helps human counterparts classify, extract, summarize, and gain deeper insights from documents with natural language prompts. When adopting this approach, organizations should prioritize identifying the tasks to be automated, streamline employee access to historical data, and refine downstream workloads to leverage summarized data. Here is a sample architecture for large document summarization. Volvo Group, one of the world’s leading manufacturers of trucks, buses, construction equipment, and marine and industrial engines, streamlined invoice and claims processing, saving over 10,000 manual hours with the help of Microsoft Azure AI services and Azure AI Document Intelligence. 
"We chose Microsoft Azure AI primarily because of the advanced capabilities offered, especially with AI Document Intelligence.” — Malladi Kumara Datta, RPA Product Owner, Volvo Group Since launch, the company has saved 10,000 manual hours—about 850-plus manual hours per month. Inspiration #8: Accelerate content delivery Using generative AI, your new applications can automate the creation of web or mobile content, such as product descriptions for online catalogs or visual campaign assets based on marketing narratives, accelerating time to market. It also helps you enable faster iteration and A/B testing to identify the descriptions that resonate best with customers. This pattern generates text or image content based on conversational user input. It combines the capabilities of image generation and text generation, the generated content may be personalized to the user, and data may be read from a variety of sources, including a Storage Account, Azure Cosmos DB, Azure Database for PostgreSQL, or Azure SQL. JATO Dynamics, a global supplier of automotive business intelligence operating in more than 50 countries, developed Sales Link with Azure OpenAI Service, which now helps dealerships quickly produce tailored content by combining market data and vehicle information, saving customers 32 hours per month. "Data processed through Azure OpenAI Service remains within Azure. This is critical for maintaining the privacy and security of dealer data and the trust of their customers.” — Derek Varner, Head of Software Engineering, JATO Dynamics In addition to Azure OpenAI, JATO Dynamics used Azure Cosmos DB to manage data from millions of transactions across 55 car brands. The database service also empowers scalability and quick access to vehicle and dealer transaction data, providing a reliable foundation for Sales Link. 
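To sketch the content-generation pattern, the snippet below assembles a prompt grounded in supplied vehicle and market facts and then, optionally, sends it to an Azure OpenAI chat deployment. This is not JATO Dynamics' implementation: the endpoint, key, and deployment names are placeholders, and the data fields are invented for illustration.

```python
import os

def build_description_prompt(vehicle, market_stats):
    """Assemble a grounded prompt so the model writes from supplied data, not guesswork."""
    facts = "\n".join(f"- {k}: {v}" for k, v in {**vehicle, **market_stats}.items())
    return (
        "Write a short, factual sales description for a car dealership.\n"
        "Use only the facts below; do not invent details.\n"
        f"Facts:\n{facts}"
    )

# Invented example data standing in for market data and vehicle information.
vehicle = {"model": "Hatchback X", "fuel": "hybrid", "price": "$24,500"}
market = {"segment demand": "up 12% year over year"}
prompt = build_description_prompt(vehicle, market)

# Optional call to an Azure OpenAI deployment (placeholder names; requires `pip install openai`).
if os.environ.get("AZURE_OPENAI_API_KEY"):
    from openai import AzureOpenAI
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model="my-gpt4o-deployment",  # your deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

Grounding the prompt in retrieved facts is also what keeps A/B test variants comparable: each candidate description differs in wording, not in the underlying product data.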
Closing thoughts From innovative solutions to heartwarming successes, it’s clear that a community of AI pioneers is transforming business and customer experiences. Let’s continue to push boundaries, embrace creativity, and celebrate every achievement along the way. Here’s to many more stories of success and innovation! Want to be certified as an Azure AI Engineer? Start preparing with this Microsoft Curated Learning Plan. Transform Insurance Industry Workflows Using Generative AI Models and Azure Services
This article highlights an innovative automated solution designed to transform the processing of insurance claim forms. Previously, underwriters were limited to handling just two to three claims per day, significantly hampering operational efficiency. With the implementation of this solution, companies have achieved a remarkable 60% increase in daily claim processing capacity. Built on Azure services, this architecture revolutionizes the management of claim forms submitted via email by automating critical tasks such as data extraction, classification, summarization, evaluation, and storage. Leveraging the power of AI and machine learning, this solution ensures faster, more accurate claim evaluations, enabling insurance companies to make informed decisions efficiently. The result is enhanced operational scalability, improved customer satisfaction, and a streamlined claims process. Scenario In the insurance industry, claim forms often arrive as email attachments, requiring manual processing to classify, extract, and validate information before it can be stored for analysis and reporting. This solution automates the process by leveraging Azure services to classify, extract, and summarize information from insurance claim forms. Using Responsible AI evaluation, it ensures the performance of Large Language Models (LLMs) meets high standards. The data is then stored for further analysis and visualization in Power BI, where underwriters can access consumable reports. Architecture Diagram Components Azure Logic Apps: Automates workflows and integrates apps, data, and services. Used here to process emails, extract PDF attachments, and initiate workflows with an Outlook connector for attachment, metadata, and email content extraction. Azure Blob Storage: Stores unstructured data at scale. Used to save insurance claim forms in PDF and metadata/email content in TXT formats. Azure Functions: Serverless compute for event-driven code. 
Orchestrates workflows across services. Azure Document Intelligence: AI-powered data extraction from documents. Classifies and extracts structured content from ACORD forms. Azure OpenAI: Provides advanced language models. Summarizes email content for high-level insights. LLM Evaluation Module (Azure AI SDK): Enhances Azure OpenAI summaries by evaluating and refining output quality. Azure AI Foundry: Manages Azure OpenAI deployments and evaluates LLM performance using Responsible AI metrics. Azure Cosmos DB: Globally distributed NoSQL database. Stores JSON outputs from Azure OpenAI and Document Intelligence. Microsoft Power BI: Visualizes Cosmos DB data with interactive reports for underwriters. Workflow Description The workflow for processing claims efficiently leverages a series of Azure services to automate, structure, and analyze data, ensuring a fast, accurate, and scalable claims management system. 1. Email Processing with Azure Logic Apps The process begins with a pre-designed Azure Logic Apps workflow, which automates the intake of PDF claim forms received as email attachments from policyholders. By using prebuilt Outlook connectors, it extracts key details like sender information, email content, metadata, and attachments, organizing the data for smooth claims processing. This automation reduces manual effort, accelerates claim intake, and minimizes data capture errors. 2. Secure Data Storage in Azure Blob Storage Once emails are processed, the necessary PDF attachments, email content, and email metadata are stored securely in Azure Blob Storage. This centralized, scalable repository ensures easy access to raw claim data for subsequent processing. Azure Blob’s structured storage supports efficient file retrieval during later stages, while its scalability can handle growing claim volumes, ensuring data integrity and accessibility throughout the entire claims processing lifecycle. 3. 
Workflow Orchestration with Azure Functions The entire processing workflow is managed by Azure Functions, which orchestrates serverless tasks such as document classification, data extraction, summarization, and LLM evaluation. This modular architecture enables independent updates and optimizations, ensuring scalability and easier maintenance. Azure Functions streamlines operations, improving the overall efficiency of the claims processing system. a. Document Classification: The next step uses Azure Document Intelligence to classify documents with a custom pretrained model, identifying insurance claim forms. This step ensures the correct extraction methods are applied, reducing misclassification and errors, and eliminating much of the need for manual review. The ability to customize the model also adapts to changes in document formats, ensuring accuracy and efficiency in later processes. b. Content Extraction: Once the insurance form is properly classified, Azure Document Intelligence extracts specific data points from the PDF claim forms, such as claim numbers and policyholder details. The automated extraction process saves time, reduces manual data entry, and improves accuracy, ensuring essential data is available for downstream processing. This capability also helps in organizing the information for efficient claim tracking and report generation. c. Document Intelligence Output Processing: The results are extracted in JSON format and then parsed and organized for storage in Azure Cosmos DB, ensuring that all relevant data is systematically stored for future use. d. Summarizing Content with Azure OpenAI: Once data is extracted, Azure OpenAI generates summaries of email content, highlighting key claim submission details. These summaries make it easier for underwriters and decision-makers to quickly understand the essential points without sifting through extensive raw data. e. 
Quality Evaluation with LLM Evaluation SDK: After summarization, the quality of the generated content is evaluated using the LLM Evaluation Module in the Azure AI SDK. This evaluation ensures that the content meets accuracy and relevance standards, maintaining high-quality benchmarks and upholding responsible AI practices. Insights from this evaluation guide the refinement and improvement of models used in the workflow. f. LLM Performance Dashboard with Azure AI Foundry: Continuous monitoring of the workflow’s quality metrics is done via the evaluation dashboard in Azure AI Foundry. Key performance indicators like groundedness, fluency, coherence, and relevance are tracked, ensuring high standards are maintained. This regular monitoring helps quickly identify performance issues and informs model optimizations, supporting the efficiency of the claims processing system. g. Summarization Output Processing: After evaluation, the results from the OpenAI summarization output are parsed and stored in Cosmos DB, ensuring that all relevant data is saved in a structured format for easy access and retrieval. 4. Storing Data in Azure Cosmos DB The structured data, including parsed JSON outputs and summaries, is stored in Azure Cosmos DB, a fully managed, globally distributed NoSQL database. This solution ensures processed claim data is easily accessible for further analysis and reporting. Cosmos DB’s scalability can accommodate increasing claim volumes, while its low-latency access makes it ideal for high-demand environments. Its flexible data model also allows seamless integration with other services and applications, improving the overall efficiency of the claims processing system. 5. Data Visualization with Microsoft Power BI The final step in the workflow involves visualizing the stored data using Microsoft Power BI. 
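Before visualization, the outputs from steps 3c and 3g above must be parsed and shaped for Azure Cosmos DB. A minimal sketch of that flattening step follows; the field names (ClaimNumber, PolicyholderName) and JSON shape are hypothetical simplifications, since a real custom Document Intelligence model defines its own schema.

```python
def to_cosmos_record(claim_id, doc_intel_json):
    """Flatten a (simplified) Document Intelligence analyze result into one record.

    Field names here are illustrative; a real ACORD-form model has its own schema.
    """
    fields = doc_intel_json.get("fields", {})
    return {
        "id": claim_id,  # Cosmos DB items need an "id" property
        "claimNumber": fields.get("ClaimNumber", {}).get("value"),
        "policyholder": fields.get("PolicyholderName", {}).get("value"),
        # Keep per-field confidence so low-confidence claims can be routed to review.
        "confidence": {name: f.get("confidence") for name, f in fields.items()},
    }

# A hand-made stand-in for the JSON the extraction step produces.
sample = {
    "fields": {
        "ClaimNumber": {"value": "CLM-0042", "confidence": 0.98},
        "PolicyholderName": {"value": "J. Doe", "confidence": 0.95},
    }
}
record = to_cosmos_record("claim-0042", sample)
print(record["claimNumber"])  # CLM-0042
```

Retaining the confidence scores alongside the values is one way to support the workflow's review loop: records below a threshold can be flagged for an underwriter instead of flowing straight to reporting.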
This powerful business analytics tool enables underwriters and other stakeholders to create interactive reports and dashboards, providing actionable insights from processed claim data. Power BI’s intuitive interface allows users to explore data in depth, facilitating quick, data-driven decisions. By incorporating Power BI, the insurance company can effectively leverage stored data to drive business outcomes and continuously improve the claims management process. Related Use Cases: Healthcare - Patient Intake and Medical Claims Processing: Automating the extraction and processing of patient intake forms and medical claims for faster reimbursement and improved patient care analysis. See the following article for more information on how to implement a solution like this. Financial Services - Loan and Mortgage Application Processing: Streamlining loan application reviews by automatically extracting and summarizing financial data for quicker decision-making. Retail - Supplier Invoice and Purchase Order Processing: Automating invoice and purchase order processing for faster supplier payment approvals and improved financial tracking. Legal - Contract and Document Review: Automating the classification and extraction of key clauses from legal contracts to enhance compliance and reduce manual review time. See the following article for more information on how to implement a solution like this. Government - Tax Filing and Documentation Processing: Automating the classification and extraction of tax filing data to ensure compliance and improve audit efficiency. To find solution ideas and reference architectures for Azure based solutions curated by Microsoft, go to the Azure Architecture Center and search with keywords like “retail”, “legal”, “healthcare”, etc. You’ll find hundreds of industry-related solutions that can help jumpstart your design process. Contributors: This article is maintained by Microsoft. It was originally written by the following contributors. 
Principal authors: Manasa Ramalinga | Principal Cloud Solution Architect – US Customer Success Oscar Shimabukuro Kiyan | Senior Cloud Solution Architect – US Customer Success Azure AI Foundry, GitHub Copilot, Fabric and more to Analyze usage stats from Utility Invoices
Overview With the introduction of Azure AI Foundry, integrating various AI services to streamline the development and deployment of agentic AI workflow solutions, such as multi-modal, multi-model, and dynamic, interactive agents, has become more efficient. The platform offers a range of AI services, including Document Intelligence for extracting data from documents, natural language processing, robust machine learning capabilities, and more. Microsoft Fabric further enhances this ecosystem by providing robust data storage, analytics, and data science tools, enabling seamless data management and analysis. Additionally, Copilot and GitHub Copilot assist developers by offering AI-powered code suggestions and automating repetitive coding tasks, significantly boosting productivity and efficiency. Objectives In this use case, we will take monthly electricity bills from the utility's website for a year and analyze them using Azure AI services within Azure AI Foundry. Electricity bills are simply an easy start; the same approach could be applied to almost any other format, such as W-2, I-9, 1099, ISO, or EHR documents. By leveraging the Foundry's workflow capabilities, we will streamline the development stages step by step. Initially, we will use Document Intelligence to extract key data such as usage in kilowatt-hours (kWh), billed consumption, and other necessary information from each PDF file. This data will then be stored in Microsoft Fabric, where we will utilize its analytics and data science capabilities to process and analyze the information. We will also include a few processing steps using Azure Functions, leveraging GitHub Copilot in VS Code. Finally, we will create a Power BI dashboard in Fabric to visually display the analysis, providing insights into electricity usage trends and billing patterns over the year. Utility Invoice sample Building the solution Depicted in the picture are the key Azure and Copilot services we will use to build the solution. 
Set up Azure AI Foundry Create a new project in Azure AI Foundry. Add Document Intelligence to your project. You can do this directly within the Foundry portal. Extract documents through Doc Intel Download the PDF files of the power bills and upload them to Azure Blob Storage. I used Document Intelligence Studio to create a new project and train custom models using the files from the Blob Storage. Data Extraction Use Azure Document Intelligence to extract the required information from the PDF files. From the resource page of the Document Intelligence service in the portal, copy the Endpoint URL and Keys. We will need these to connect the application to the Document Intelligence API. Next, let’s integrate Document Intelligence with the project. In the Azure AI Foundry project, add the Document Intelligence resource by providing the Endpoint URL and Keys. Configure the settings as needed to start using Document Intelligence for extracting data from the PDF documents. We can stay within the Azure AI Foundry portal for most of these steps, but for more advanced configurations, we might need to use the Document Intelligence Studio. GitHub Copilot in VS Code for Azure Functions For processing portions of the output from Document Intelligence, what better way to create the Azure Function than in VS Code, especially with the help of GitHub Copilot. Let’s start by installing the Azure Functions extension in VS Code, then create a new function project. GitHub Copilot can assist in writing the code to process the JSON received. Additionally, we can get Copilot to generate unit tests to ensure the function works correctly, and we can ask it to explain the code and the tests it generates. Finally, we seamlessly integrate the generated code and unit tests into the Functions app code file, all within VS Code. 
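To make this concrete, below is a sketch of the kind of code (and unit-testable helper) GitHub Copilot can help draft. The field names (BillingMonth, UsageKWh, BilledAmount) are assumptions matching a hypothetical custom model trained on utility bills; `extract_fields` uses the azure-ai-formrecognizer SDK and only runs when you supply real endpoint and key values.

```python
def summarize_usage(doc_fields):
    """Pure helper an Azure Function can call to shape Document Intelligence output.

    Field names are illustrative for a custom model trained on utility bills.
    """
    return {
        "month": doc_fields.get("BillingMonth"),
        "kwh": float(doc_fields.get("UsageKWh", 0)),
        "amount": float(doc_fields.get("BilledAmount", 0)),
    }

def extract_fields(endpoint, key, model_id, pdf_path):
    """Call the trained model (requires `pip install azure-ai-formrecognizer`)."""
    from azure.ai.formrecognizer import DocumentAnalysisClient
    from azure.core.credentials import AzureKeyCredential

    client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
    with open(pdf_path, "rb") as f:
        result = client.begin_analyze_document(model_id, document=f).result()
    doc = result.documents[0]
    return {name: field.value for name, field in doc.fields.items()}

# A unit-test-style check of the pure helper, the kind Copilot can draft for you:
row = summarize_usage({"BillingMonth": "2024-03", "UsageKWh": "412", "BilledAmount": "58.73"})
print(row)  # {'month': '2024-03', 'kwh': 412.0, 'amount': 58.73}
```

Keeping the parsing logic in a pure function like `summarize_usage` is what makes the Function easy to unit test: the tests exercise the shaping logic without needing a live Document Intelligence endpoint.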
Notice how we can prompt GitHub Copilot from step 1 of creating the workspace, to inserting the generated code into the Python file for the Azure Function, to testing it, and all the way to deploying the Function. Store and Analyze information in Fabric There are many options for storing and analyzing JSON data in Fabric: Lakehouse, Data Warehouse, SQL Database, and Power BI Datamart. As our dataset is small, let’s choose either SQL DB or PBI Datamart. PBI Datamart is great for smaller datasets and direct integration with PBI for dashboarding, while SQL DB is good for moderate data volumes and supports transactional and analytical workloads. To insert the JSON values derived in the Azure Functions app, either called from Logic Apps or directly from the AI Foundry through API calls, into Fabric, let’s explore two approaches: one using the REST API and the other using Functions with Azure SQL DB. Using REST API – Fabric provides APIs that we can call directly from our Function to insert records, using an HTTP client in the Function’s Python code to send POST requests to the Fabric API endpoints with our JSON data. Using Functions with Azure SQL DB – we can connect directly from our Function, using the SQL client in the Function to execute SQL INSERT statements to add records to the database. While we are at it, we could even get GitHub Copilot to write up the unit tests. Visualization in Fabric Power BI Let's start with creating visualizations in Fabric using the web version of Power BI for our report, UtilitiesBillAnalysisDashboard. You could use the PBI Desktop version too. Open the PBI Service and navigate to the workspace where you want to create your report. Click on "New" and select "Dataset" to add a new data source. Choose "SQL Server" from the list of data sources and enter "UtilityBillsServer" as the server name and "UtilityBillsDB" as the DB name to establish the connection. 
Once connected, navigate to the Navigator pane where we can select the table "tblElectricity" and the columns; I’ve shown these in the pictures below. For a clustered column (or bar) chart, let us choose the columns that contain our categorical data (e.g., month, year) and numerical data (e.g., kWh usage, billed amounts). After loading the data into PBI, drag the desired fields into the Values and Axis areas of the clustered column chart visualization. Customize the chart by adjusting the formatting options to enhance readability and insights. We now visualize our data in PBI within Fabric. We may need to do a custom sort of the Month column. Let’s do this in the Data view: select the table and create a new column with the following formula. This will create a custom sort column that we will use as ‘Sum of MonthNumber’ in ascending order. Other visualization possibilities: Other Possibilities Agents with Custom Copilot Studio Next, you could leverage a custom Copilot to provide personalized energy usage recommendations based on historical data. Start by integrating the Copilot with your existing data pipeline in Azure AI Foundry. The Copilot can analyze electricity consumption patterns stored in your Fabric SQL DB and use ML models to identify optimization opportunities. For instance, it could suggest energy-efficient appliances, optimal usage times, or tips to reduce consumption. These recommendations can be visualized in PBI where users can track progress over time. To implement this, you would need to set up an API endpoint for the Copilot to access the data, train the ML models using Python in VS Code (let GitHub Copilot help you here… you will love it), and deploy the models to Azure using CLI / PowerShell / Bicep / Terraform / ARM or the Azure portal. Finally, connect the Copilot to PBI to visualize the personalized recommendations. Additionally, you could explore using Azure AI Agents for automated anomaly detection and alerts. 
This agent could monitor electricity bill data for unusual patterns and send notifications when anomalies are detected. Yet another idea would be to implement predictive maintenance for electrical systems, where an AI agent uses predictive analytics to forecast maintenance needs based on the data collected, helping to reduce downtime and improve system reliability.

Summary

We have built a solution that leverages the seamless integration of pioneering AI technologies with Microsoft's end-to-end platform. Using Azure AI Foundry, we developed a solution that uses Document Intelligence to scan electricity bills, stores the data in Fabric SQL DB, and processes it with Python in Azure Functions in VS Code, assisted by GitHub Copilot. The resulting insights are visualized in Power BI within Fabric. Additionally, we explored potential enhancements using Azure AI Agents and Custom Copilots, showcasing the ease of implementation and the transformative possibilities. Finally, speaking of possibilities – with Gen AI, the only limit is our imagination!

Additional resources

Explore Azure AI Foundry
Start using the Azure AI Foundry SDK
Review the Azure AI Foundry documentation and Call Azure Logic Apps as functions using Azure OpenAI Assistants
Take the Azure AI Learn courses
Learn more about Azure AI Services
Document Intelligence: Azure AI Doc Intel
GitHub Copilot examples: What can GitHub Copilot do – Examples
Explore Microsoft Fabric: Microsoft Fabric Documentation
See what you can connect with Azure Logic Apps: Azure Logic Apps Connectors

About the Author

Pradyumna (Prad) Harish is a Technology leader in the GSI Partner Organization at Microsoft. He has 26 years of experience in Product Engineering, Partner Development, Presales, and Delivery.
Responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, Open Source Software, Enterprise Architecture, IoT, digital strategies, and other innovative areas for business generation and transformation, he has achieved revenue targets through extensive experience managing global functions, global accounts, products, and solution architects across more than 26 countries.

Securely Integrating Azure API Management with Azure OpenAI via Application Gateway
Introduction

As organizations increasingly integrate AI into their applications, securing access to Azure OpenAI services becomes a critical priority. By default, Azure OpenAI can be exposed over the public internet, posing potential security risks. To mitigate these risks, enterprises often restrict OpenAI access using Private Endpoints, ensuring that traffic remains within their Azure Virtual Network (VNET) and preventing direct internet exposure.

However, restricting OpenAI to a private endpoint introduces challenges when external applications, such as those hosted in AWS or on-premises environments, need to securely interact with OpenAI APIs. This is where Azure API Management (APIM) plays a crucial role. By deploying APIM within an internal VNET, it acts as a secure proxy between external applications and the OpenAI service, allowing controlled access while keeping OpenAI private.

To further enhance security and accessibility, Azure Application Gateway (App Gateway) can be placed in front of APIM. This setup enables secure, policy-driven access by managing traffic flow, applying Web Application Firewall (WAF) rules, and enforcing SSL termination if needed.

What This Blog Covers

This blog provides a technical deep dive into setting up a fully secure architecture that integrates Azure OpenAI with APIM, Private Endpoints, and Application Gateway. Specifically, we will walk through:

- Configuring Azure OpenAI with a Private Endpoint to restrict public access and ensure communication remains within a secure network.
- Deploying APIM in an internal VNET, allowing it to securely communicate with OpenAI while being inaccessible from the public internet.
- Setting up Application Gateway to expose APIM securely, allowing controlled external access with enhanced security.
- Configuring VNETs, subnets, and Network Security Groups (NSGs) to enforce network segmentation, traffic control, and security best practices.
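Once the private endpoint and private DNS zone described in this guide are in place, a quick sanity check from a machine inside the VNET is to confirm that the OpenAI account hostname resolves to a private address rather than a public one. Below is a minimal sketch of that check; the hostname `my-aoai.openai.azure.com` is purely illustrative.

```python
import ipaddress

def is_private_address(ip: str) -> bool:
    """True if the resolved address falls in a private (RFC 1918) range,
    meaning traffic to it stays inside the VNET instead of going out to
    the public Azure OpenAI endpoint."""
    return ipaddress.ip_address(ip).is_private

# From a VM inside the VNET, the private DNS zone should resolve the
# account's endpoint to the private endpoint NIC (e.g. 10.0.3.x):
#   import socket
#   ip = socket.gethostbyname("my-aoai.openai.azure.com")  # hypothetical name
#   print(is_private_address(ip))
print(is_private_address("10.0.3.4"))    # private endpoint NIC
print(is_private_address("20.44.1.10"))  # a public Azure IP
```

If the hostname still resolves to a public IP, the DNS zone link or the A record created later in this guide is usually the culprit.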
By the end of this guide, you will have a production-ready, enterprise-grade setup that ensures:

- End-to-end private connectivity for Azure OpenAI through APIM.
- Secure external access via Application Gateway while keeping OpenAI hidden from the internet.
- Granular network control using VNETs, subnets, and NSGs.

This architecture provides a scalable and secure solution for enterprises needing to expose OpenAI securely without compromising privacy, performance, or compliance.

Prerequisites

Before diving into the integration of Azure API Management (APIM) with Azure OpenAI in a secure, private setup, ensure you have the following in place:

1. Azure Subscription & Required Permissions
- An active Azure Subscription with the ability to create resources.
- Contributor or Owner access to deploy Virtual Networks (VNETs), Subnets, Network Security Groups (NSGs), Private Endpoints, APIM, and Application Gateway.

2. Networking Setup Knowledge
- Familiarity with Azure Virtual Network (VNET) concepts, Subnets, and NSGs is helpful, as we will be designing a controlled network environment.

3. Required Azure Services
- Azure Virtual Network (VNET) – to establish a private, secure network.
- Subnets & NSGs – for network segmentation and traffic control.
- Azure OpenAI Service – deployed in a region that supports private endpoints.
- Azure API Management (APIM) – deployed in internal VNET mode to act as a secure API proxy.
- Azure Private Endpoint – to restrict Azure OpenAI access to a private network.
- Azure Application Gateway – to expose APIM securely with load balancing and optional Web Application Firewall (WAF).

4. Networking and DNS Requirements
- Private DNS Zone: required to resolve private endpoints within the VNET.
- Custom DNS Configuration: if using a custom DNS server, ensure proper forwarding rules are in place.
- Firewall/NSG Rules: ensure necessary inbound and outbound rules allow communication between services.

5. Azure CLI or PowerShell (Optional, but Recommended)
- Azure CLI (az commands) or Azure PowerShell for efficient resource deployment.

Once you have these prerequisites in place, we can proceed with designing the secure architecture for integrating Azure OpenAI with APIM using Private Endpoints and Application Gateway.

Architecture Overview

The architecture ensures secure and private connectivity between external users and Azure OpenAI while preventing direct public access to OpenAI's APIs. It uses Azure API Management (APIM) in an internal VNET, an Azure Private Endpoint for OpenAI, and an Application Gateway for controlled public exposure.

Key Components & Flow

User Requests
- External users access the API via a public endpoint exposed by Azure Application Gateway.
- The request passes through App Gateway before reaching APIM, ensuring security and traffic control.

Azure API Management (APIM) – Internal VNET Mode
- APIM is deployed in internal VNET mode, meaning it does not have a public endpoint.
- APIM serves as a proxy between external applications and Azure OpenAI, ensuring request validation, rate limiting, and security enforcement.
- The management plane of APIM still requires a public IP for admin operations, but the data plane (API traffic) remains fully private.

Azure Private Endpoint for OpenAI
- APIM cannot access Azure OpenAI publicly, since OpenAI is secured with a Private Endpoint.
- A Private Endpoint allows APIM to securely connect to Azure OpenAI within the same VNET, preventing internet exposure.
- This ensures that only APIM within the internal network can send requests to OpenAI.

Managed Identity Authentication
- APIM uses a Managed Identity to authenticate securely with Azure OpenAI.
- This eliminates the need for hardcoded API keys and improves security by using Azure Role-Based Access Control (RBAC).

Application Gateway for External Access
- Since APIM is not publicly accessible, an Azure Application Gateway (App Gateway) is placed in front of it.
App Gateway acts as a reverse proxy that securely exposes APIM to the public while enforcing:
- SSL termination for secure HTTPS connections.
- Web Application Firewall (WAF) for protection against threats.
- Load balancing if multiple APIM instances exist.

Network Segmentation & Security
- VNET & Subnets: APIM, the OpenAI Private Endpoint, and App Gateway are deployed in separate subnets within an Azure Virtual Network (VNET).
- NSGs (Network Security Groups): strict inbound and outbound rules ensure that only allowed traffic flows between components.
- Private DNS: required to resolve Private Endpoint addresses inside the VNET.

Security Enhancements
- No direct internet access to Azure OpenAI, ensuring full privacy.
- Controlled API exposure via App Gateway, securing public requests.
- Managed Identity for authentication, eliminating hardcoded credentials.
- Private Endpoint enforcement, blocking unwanted access from external sources.

This architecture ensures that Azure OpenAI remains secure, APIM acts as a controlled gateway, and external users can access APIs safely through App Gateway.
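To make the request flow concrete, here is a minimal sketch of the call an external client (in AWS, on-premises, or elsewhere) would send. The client only ever sees the App Gateway hostname and an APIM subscription key; the gateway forwards to APIM, which reaches OpenAI over the private endpoint. The hostname, deployment name, API version, and use of a subscription key header are assumptions for illustration, since the exact APIM configuration depends on how the API is imported later.

```python
import json
import urllib.request

def build_chat_request(gateway_host: str, deployment: str, api_version: str,
                       subscription_key: str, messages: list) -> urllib.request.Request:
    """Build the HTTPS request an external client sends to the App Gateway.

    The client never learns the real Azure OpenAI endpoint: it addresses
    the gateway, APIM validates the subscription key, and APIM's managed
    identity authenticates to OpenAI over the private endpoint."""
    url = (f"https://{gateway_host}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # APIM subscription key -- never the OpenAI account key itself.
            "Ocp-Apim-Subscription-Key": subscription_key,
        },
        method="POST",
    )

req = build_chat_request("api.contoso.com", "gpt-4", "2024-02-01",
                         "<apim-subscription-key>",
                         [{"role": "user", "content": "Hello"}])
print(req.full_url)
# Sending it would be: urllib.request.urlopen(req)  (requires the deployed setup)
```

Note that only the public-facing contract changes here; the OpenAI request/response payloads pass through APIM unchanged.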
Azure CLI Script for VNet, Subnets, and NSG Configuration

```shell
# Variables
RESOURCE_GROUP="apim-openai-rg"
LOCATION="eastus"
VNET_NAME="apim-vnet"
VNET_ADDRESS_PREFIX="10.0.0.0/16"

# Subnets
APP_GATEWAY_SUBNET="app-gateway-subnet"
APP_GATEWAY_SUBNET_PREFIX="10.0.1.0/24"
APIM_SUBNET="apim-subnet"
APIM_SUBNET_PREFIX="10.0.2.0/24"
OPENAI_PE_SUBNET="openai-pe-subnet"
OPENAI_PE_SUBNET_PREFIX="10.0.3.0/24"

# NSGs
APP_GATEWAY_NSG="app-gateway-nsg"
APIM_NSG="apim-nsg"
OPENAI_PE_NSG="openai-pe-nsg"

# Step 1: Create Resource Group
az group create --name $RESOURCE_GROUP --location $LOCATION

# Step 2: Create Virtual Network
az network vnet create \
  --resource-group $RESOURCE_GROUP \
  --name $VNET_NAME \
  --address-prefix $VNET_ADDRESS_PREFIX \
  --subnet-name $APP_GATEWAY_SUBNET \
  --subnet-prefix $APP_GATEWAY_SUBNET_PREFIX

# Step 3: Create Additional Subnets (APIM & OpenAI Private Endpoint)
az network vnet subnet create \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $APIM_SUBNET \
  --address-prefix $APIM_SUBNET_PREFIX

az network vnet subnet create \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $OPENAI_PE_SUBNET \
  --address-prefix $OPENAI_PE_SUBNET_PREFIX

# Step 4: Create NSGs
az network nsg create --resource-group $RESOURCE_GROUP --name $APP_GATEWAY_NSG
az network nsg create --resource-group $RESOURCE_GROUP --name $APIM_NSG
az network nsg create --resource-group $RESOURCE_GROUP --name $OPENAI_PE_NSG

# Step 5: Add NSG Rules for APIM (Allow 3443 for APIM Internal VNet)
az network nsg rule create \
  --resource-group $RESOURCE_GROUP \
  --nsg-name $APIM_NSG \
  --name AllowAPIMInbound3443 \
  --priority 120 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes ApiManagement \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 3443

# Step 6: Associate NSGs with Subnets
az network vnet subnet update \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $APP_GATEWAY_SUBNET \
  --network-security-group $APP_GATEWAY_NSG

az network vnet subnet update \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $APIM_SUBNET \
  --network-security-group $APIM_NSG

az network vnet subnet update \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $OPENAI_PE_SUBNET \
  --network-security-group $OPENAI_PE_NSG

# Step 7: Configure Service Endpoints for APIM Subnet
az network vnet subnet update \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $APIM_SUBNET \
  --service-endpoints Microsoft.EventHub Microsoft.KeyVault Microsoft.ServiceBus Microsoft.Sql Microsoft.Storage Microsoft.AzureActiveDirectory Microsoft.CognitiveServices Microsoft.Web
```

Creating an Azure OpenAI resource with a private endpoint

```shell
# Create an Azure OpenAI Resource
az cognitiveservices account create \
  --name $AOAI_NAME \
  --resource-group $RESOURCE_GROUP \
  --kind OpenAI \
  --sku S0 \
  --location $LOCATION \
  --yes \
  --custom-domain $AOAI_NAME

# Create a Private Endpoint
az network private-endpoint create \
  --name $PRIVATE_ENDPOINT_NAME \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --subnet $SUBNET_NAME \
  --private-connection-resource-id $(az cognitiveservices account show --name $AOAI_NAME --resource-group $RESOURCE_GROUP --query id -o tsv) \
  --group-id account \
  --connection-name "${PRIVATE_ENDPOINT_NAME}-connection"

# Create a Private DNS Zone
az network private-dns zone create \
  --resource-group $RESOURCE_GROUP \
  --name $PRIVATE_DNS_ZONE_NAME

# Link Private DNS Zone to VNet
az network private-dns link vnet create \
  --resource-group $RESOURCE_GROUP \
  --zone-name $PRIVATE_DNS_ZONE_NAME \
  --name "myDNSLink" \
  --virtual-network $VNET_NAME \
  --registration-enabled false

# Retrieve the Private IP Address from the Private Endpoint
PRIVATE_IP=$(az network private-endpoint show \
  --name $PRIVATE_ENDPOINT_NAME \
  --resource-group $RESOURCE_GROUP \
  --query "customDnsConfigs[0].ipAddresses[0]" -o tsv)

# Create a DNS Record for Azure OpenAI
az network private-dns record-set a add-record \
  --resource-group $RESOURCE_GROUP \
  --zone-name $PRIVATE_DNS_ZONE_NAME \
  --record-set-name $AOAI_NAME \
  --ipv4-address $PRIVATE_IP

# Disable Public Network Access
az cognitiveservices account update \
  --name $AOAI_NAME \
  --resource-group $RESOURCE_GROUP \
  --public-network-access Disabled
```

Provisioning the Azure APIM instance to an internal VNet

Please follow the link to provision: Deploy Azure API Management instance to internal VNet | Microsoft Learn

Create an API for AOAI in APIM

Please follow the link: Import an Azure OpenAI API as REST API - Azure API Management | Microsoft Learn

Configure Azure Application Gateway with Azure APIM

Please follow the link: Use API Management in a virtual network with Azure Application Gateway - Azure API Management | Microsoft Learn

Conclusion

Securing Azure OpenAI with private endpoints, APIM, and Application Gateway ensures a robust, enterprise-grade architecture that balances security, accessibility, and performance. By leveraging private endpoints, Azure OpenAI remains shielded from public exposure, while APIM acts as a controlled gateway for managing external API access. The addition of Application Gateway provides an extra security layer with SSL termination, WAF protection, and traffic management.

With this setup, organizations can:
✔ Ensure end-to-end private connectivity for Azure OpenAI.
✔ Enable secure external access via APIM and Application Gateway.
✔ Enforce strict network segmentation with VNETs, Subnets, NSGs, and Private DNS.
✔ Strengthen security with Managed Identity authentication and controlled API exposure.

By following this guide, you now have a scalable, production-ready solution to securely integrate Azure OpenAI with external applications, whether they reside in AWS, on-premises, or other cloud environments.
Implement these best practices to maintain compliance, minimize security risks, and enhance the reliability of your AI-powered applications.

Getting started with the NetApp Connector for Microsoft M365 Copilot and Azure NetApp Files
Imagine a world where your on-premises and enterprise cloud files seamlessly integrate with Microsoft Copilot, unleashing AI on your Azure NetApp Files enterprise data and making your workday smoother and more efficient. Welcome to the future with the NetApp Connector for Microsoft Copilot!

Demystifying Azure OpenAI Networking for Secure Chatbot Deployment
Embark on a technical exploration of Azure's networking features for building secure chatbots. In this article, we'll dive deep into the practical aspects of Azure's networking capabilities and their crucial role in ensuring the security of your OpenAI deployments. With real-world use cases and step-by-step instructions, you'll gain practical insights into optimizing Azure and OpenAI for your projects.

AI for Operations
Solutions idea

This solution series shows some examples of how Azure OpenAI and its LLM models can be used on Operations and FinOps issues. With a view to the use of models linked to the Enterprise Scale Landing Zone, the solutions shown, which are available on a dedicated GitHub repository, are designed to be deployed within a dedicated subscription, in the examples called 'OpenAI-CoreIntegration'. The examples we are going to list are:

- SQL BPA AI Enhanced
- Azure Update Manager AI Enhanced
- Azure Cost Management AI Enhanced
- Azure AI Anomalies Detection
- Azure OpenAI Smart Doc Creator

Enterprise Scale AI for Operations Landing Zone Design Architecture

SQL BPA AI Enhanced

Architecture

This Logic App is an example of integrating ARC SQL best practices assessment results with OpenAI: it creates an HTML report and a CSV file, sent via email, with OpenAI comments on Severity High and/or Medium results based on the current Microsoft documentation.

Dataflow

Initial Trigger
- Type: Recurrence
- Configuration: Frequency: Weekly; Day: Monday; Time: 9:00 AM; Time Zone: W. Europe Standard Time
- Description: The Logic App is triggered weekly to gather data for SQL Best Practice Assessments.

Step 1: Data Query
- Action: Run_query_and_list_results
- Description: Executes a Log Analytics query to retrieve SQL assessment results from monitored resources.
- Output: A dataset containing issues classified by severity (High/Medium).

Step 2: Variable Initialization
- Actions:
  - Initialize_variable_CSV: initializes an empty array to store CSV results.
  - Open_AI_API_Key: sets up the API key for the Azure OpenAI service.
  - HelpLinkContent: prepares a variable to store useful links.
- Description: Configures necessary variables for subsequent steps.

Step 3: Process Results
- Action: For_eachSQLResult
- Description: Processes the query results with the following sub-steps:
  - Condition: checks if the severity is High or Medium.
  - OpenAI Processing: sends structured prompts to the GPT-4 model for recommendations on identified issues and parses the JSON response to extract specific insights.
  - CSV Composition: creates an array containing detailed results.

Step 4: Report Generation
- Actions:
  - Create_CSV_table: converts processed data into CSV format.
  - Create_HTML_table: generates an HTML table from the data.
  - ComposeMailMessage: prepares an HTML email message containing the results and a link to the report.
- Description: Formats the data for sharing.

Step 5: Saving and Sharing
- Actions:
  - Create_file: saves the HTML report to OneDrive.
  - Send_an_email_(V2): sends an email with the reports attached (HTML and CSV).
  - Post_message_in_a_chat_or_channel: shares the results in a Teams channel.
- Description: Distributes the reports to defined recipients.

Components

- Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
- Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
- Azure Logic Apps Managed Identities allow authentication to any resource that supports Microsoft Entra authentication, including your own applications.
- Azure Bing Web Search enables safe, ad-free, location-aware search results, surfacing relevant information from billions of web documents. Help your users find what they're looking for from the world-wide web by harnessing Bing's ability to comb billions of webpages, images, videos, and news with a single API call.
- Azure ARC SQL Server: SQL Server enabled by Azure Arc extends Azure services to SQL Server instances hosted outside of Azure: in your data center, in edge site locations like retail stores, or any public cloud or hosting provider.
- The SQL Best Practices Assessment feature provides a mechanism to evaluate the configuration of your SQL Server instance.
- Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to monitoring data from your cloud and on-premises environments.
- Azure Kusto Query is a powerful tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.

Potential use cases

SQL BPA AI Enhanced exploits the capabilities of the SQL Best Practice Assessment service based on Azure ARC SQL Server. The collected data can be used for the generation of customised tables. The solution is designed for customers who want to enrich their assessment information with generative artificial intelligence.

Azure Update Manager AI Enhanced

Architecture

This Logic App solution example retrieves data from the Azure Update Manager service and returns an output processed by generative artificial intelligence.

Dataflow

Initial Trigger
- Type: Recurrence
- Frequency: Monthly; Time Zone: W. Europe Standard Time
- Triggers the Logic App at the beginning of every month.

Step 1: Initialize API Key
- Action: Initialize Variable
- Variable Name: Api-Key

Step 2: Fetch Update Status
- Action: HTTP Request
- URI: https://management.azure.com/providers/Microsoft.ResourceGraph/resources
- Query: Retrieves resources related to patch assessments using patchassessmentresources.

Step 3: Parse Update Status
- Action: Parse JSON
- Content: Response body from the HTTP request.
- Schema: Extracts details such as VM Name, Patch Name, Patch Properties, etc.

Step 4: Process Updates
- For Each: Body('Parse_JSON')?['data'] – iterates through each item in the parsed update data.
- Condition: If Patch Name is not null and contains "KB":
  - Action: Format Item – parses individual update items for VM Name, Patch Name, and additional properties.
  - Action: Send to Azure OpenAI – sends structured prompts to the GPT-4 model. Headers: Content-Type: application/json; api-key: @variables('Api-Key'). Body: prompts Azure OpenAI to generate a report for each virtual machine and patch, formatted in Italian.
  - Action: Parse OpenAI Response – extracts and formats the response generated by Azure OpenAI.
  - Action: Append to Summary and CSV – adds the OpenAI-generated response to the Updated Summary array and appends patch details to the CSV array.

Step 5: Finalize Report
- Action: Create Reports (I, II, III) – formats and cleans the Updated Summary variable to remove unwanted characters.
- Action: Compose HTML Email Content – constructs an HTML email containing the report summary generated using OpenAI, a disclaimer about possible formatting anomalies, and an embedded company logo.

Step 6: Generate CSV Table
- Action: Converts the CSV array into CSV format for attachment.

Step 7: Send E-Mail
- Action: Send Email
- Recipient: user@microsoft.com
- Subject: Security Update Assessment
- Body: HTML content with report summary.
- Attachment: Name: SmartUpdate_<timestamp>.csv; Content: CSV table of update details.

Components

- Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
- Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
- Azure Logic Apps Managed Identities allow authentication to any resource that supports Microsoft Entra authentication, including your own applications.
- Azure Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your machines in Azure and on-premises/on other cloud platforms (connected by Azure Arc) from a single pane of management.
You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window.
- Azure Arc Server lets you manage Windows and Linux physical servers and virtual machines hosted outside of Azure, on your corporate network, or with another cloud provider.

Potential use cases

Azure Update Manager AI Enhanced is an example of a solution designed for all those situations where the IT department needs to manage and automate the reporting of information, in a readable format, on the status of updates to its infrastructure, with output managed by generative artificial intelligence.

Azure Cost Management AI Enhanced

Architecture

This Logic App solution retrieves consumption data from the Azure environment and generates a general and detailed cost trend report on a scheduled basis.

Dataflow

Initial Trigger
- Type: Manual HTTP Trigger – the Logic App is triggered manually using an HTTP request.

Step 1: Set Current Date and Old Date
- Action: Set Actual Date – the current date is initialized to @utcNow('yyyy-MM-dd'). Example value: 2024-11-22.
- Action: Set Actual Date -30 – the old date is set to 30 days before the current date. Example value: 2024-10-23.
- Action: Set old date -30 – sets the variable currentdate to 30 days prior to the old date. Example value: 2024-09-23.
- Action: Set old date -60 – sets the variable olddate to 60 days before the current date. Example value: 2024-08-23.

Step 2: Query Cost Data
- Action: Query last 30 days – queries Azure Cost Management for the last 30 days. Example data returned:

```json
{ "properties": { "rows": [ ["Virtual Machines", 5000], ["Databases", 7000], ["Storage", 3000] ] } }
```

- Action: Query -60 -30 days – queries Azure Cost Management for 30 to 60 days ago. Example data returned:

```json
{ "properties": { "rows": [ ["Virtual Machines", 4800], ["Databases", 6800], ["Storage", 3050] ] } }
```

Step 3: Download Detailed Reports
- Action: Download_report_actual_month – generates and retrieves a detailed cost report for the current month.
- Action: Download_report_last_month – generates and retrieves a detailed cost report for the previous month.

Step 4: Process and Store Reports
- Action: Actual_Month_Report – parses the JSON from the current month's report and retrieves blob download links for the detailed report.
- Action: Last_Month_Report – parses the JSON from the last month's report and retrieves blob download links for the detailed report.
- Action: Create_ActualMonthDownload and Create_LastMonthDownload – initializes variables to store download links.
- Action: Get_Actual_Month_Download_Link and Get_Last_Month_Download_Link – iterates through blob data and assigns the download link variables.

Step 5: Generate Questions for OpenAI
- Action: Set_Question – prepares the first question for Azure OpenAI: "Describe the key differences between the previous and current month's costs, and create a bullet-point list detailing these differences in Euros."
- Action: Set_Second_Question – prepares a second question for Azure OpenAI: "Briefly describe in Italian the major cost differences between the two months, rounding the amounts to Euros."

Step 6: Send Questions to Azure OpenAI
- Action: Passo result to OpenAI – sends the first question to OpenAI for generating detailed insights.
- Action: Get Description from OpenAI – sends the second question to OpenAI for a brief summary in Italian.

Step 7: Process OpenAI Responses
- Action: Parse_JSON and Parse_JSON_Second_Question – parses the JSON responses from OpenAI for both questions and retrieves the content of the generated insights.
- Action: For_each_Description – iterates through OpenAI's responses and assigns the description to the variable DescriptionOutput.

Step 8: Compose and Send E-Mail
- Action: Compose_Email – composes an HTML email including key insights from OpenAI and links to download the detailed reports. Example email content:

Azure automated cost control system:
- Increase of €200 in Virtual Machines.
- Reduction of €50 in Storage.
Download details:
- Current month: [Download Report]
- Previous month: [Download Report]

- Action: Send_an_email_(V2) – sends the composed email.

Components

- Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
- Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
- Azure Logic Apps Managed Identities allow authentication to any resource that supports Microsoft Entra authentication, including your own applications.

Potential use cases

Azure Cost Management AI Enhanced is an example of a solution designed for those who need to schedule the generation of reports on FinOps topics, with the possibility to customise the output and send the results via e-mail or perform a customised upload.

Azure AI Anomalies Detection

Architecture

This Logic App solution leverages Azure Monitor's native machine learning capabilities to retrieve anomalous data within application logs. These are then analysed by OpenAI.

Dataflow

Initial Trigger
- Type: Recurrence
- Frequency: Monthly; Time Zone: W. Europe Standard Time
- Triggers the Logic App at the beginning of every month.

Step 1: Initialize API Key
- Action: Initialize Variable
- Variable Name: Api-Key

Step 2: Fetch Update Status
- Action: HTTP Request
- URI: https://management.azure.com/providers/Microsoft.ResourceGraph/resources
- Query: Retrieves resources related to patch assessments using patchassessmentresources.

Step 3: Parse Update Status
- Action: Parse JSON
- Content: Response body from the HTTP request.
- Schema: Extracts details such as VM Name, Patch Name, Patch Properties, etc.

Step 4: Process Updates
- For Each: @body('Parse_JSON')?['data'] – iterates through each item in the parsed update data.
- Condition: If Patch Name is not null and contains "KB":
  - Action: Format Item – parses individual update items for VM Name, Patch Name, and additional properties.
  - Action: Send to Azure OpenAI – sends structured prompts to the GPT-4 model. Headers: Content-Type: application/json; api-key: @variables('Api-Key'). Body: prompts Azure OpenAI to generate a report for each virtual machine and patch, formatted in Italian.
  - Action: Parse OpenAI Response – extracts and formats the response generated by Azure OpenAI.
  - Action: Append to Summary and CSV – adds the OpenAI-generated response to the Updated Summary array and appends patch details to the CSV array.

Step 5: Finalize Report
- Action: Create Reports (I, II, III) – formats and cleans the Updated Summary variable to remove unwanted characters.
- Action: Compose HTML Email Content – constructs an HTML email containing the report summary generated using OpenAI, a disclaimer about possible formatting anomalies, and an embedded company logo.

Step 6: Generate CSV Table
- Action: Converts the CSV array into CSV format for attachment.

Step 7: Send Notifications
- Action: Send Email
- Recipient: user@microsoft.com
- Subject: Security Update Assessment
- Body: HTML content with report summary.
- Attachment: Name: SmartUpdate_<timestamp>.csv; Content: CSV table of update details.

Components

- Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and for generating communication to the customers.
- Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
- Azure Logic Apps Managed Identities allow authentication to any resource that supports Microsoft Entra authentication, including your own applications.
- Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to monitoring data from your cloud and on-premises environments.
- Azure Kusto Query is a powerful tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.

Potential use cases

Azure AI Anomalies Detection is an example of a solution that exploits the machine learning capabilities of Azure Monitor to diagnose anomalies within application logs, which are then analysed by Azure OpenAI. The solution can be customized based on customer requirements.

Azure OpenAI Smart Doc Creator

Architecture

This Function App solution leverages Azure OpenAI LLM generative AI to create a docx file based on the Azure architectural information of a specific workload (Azure metadata based). The function exploits the 'OpenAI multi-agent' concept.

Dataflow

Step 1: Logging and Configuration Setup
- Initialize Logging: advanced logging is set up to provide debug-level insights; the format includes timestamps, log levels, and messages.
- Retrieve OpenAI Endpoint: QUESTION_ENDPOINT is retrieved from environment variables; logging confirms the endpoint retrieval.

Step 2: Authentication
- Managed Identity Authentication: the ManagedIdentityCredential class is used for secure Azure authentication.
- The SubscriptionClient is initialized to access Azure subscriptions.
- Retrieves a token for Azure Cognitive Services (https://cognitiveservices.azure.com/.default).

Step 3: Flattening Dictionaries
- Function: flatten_dict – transforms nested dictionaries into a flat structure, handling nested lists and dictionaries recursively. Used for preparing metadata for storage in CSV.

Step 4: Resource Tag Filtering
- Functions:
  - get_resources_by_tag_in_subscription: filters resources in a subscription based on a tag key and value.
  - get_resource_groups_by_tag_in_subscription: identifies resource groups with matching tags.
  Purpose: retrieve Azure resources and resource groups tagged with specific key-value pairs.

Step 5: Resource Metadata Retrieval
  Functions:
    get_all_resources: aggregates resources and resource groups across all accessible subscriptions.
    get_resources_in_resource_group_in_subscription: retrieves resources from specific resource groups.
    get_latest_api_version: determines the most recent API version for a given resource type.
    get_resource_metadata: retrieves detailed metadata for individual resources using the latest API version.
  Purpose: collect comprehensive resource details for further processing.

Step 6: Documentation Generation
  Function: generate_infra_config
  Processes metadata through OpenAI to generate documentation; OpenAI produces detailed, human-readable descriptions of the Azure resources.
  Multi-stage review process: an initial draft by OpenAI, followed by a feedback loop between ArchitecturalReviewer and DocCreator for refinement; the final content is saved to architecture.txt.

Step 7: Workload Overview
  Function: generate_workload_overview
  Reads the generated CSV file to create a summary of the workload and sends the resource list to OpenAI to generate a high-level overview.

Step 8: Conversion to DOCX
  Function: txt_to_docx
  Creates a Word document (Output.docx) with:
    Section 1: "Workload Overview" (the generated summary).
    Section 2: "Workload Details" (detailed resource metadata).
  Adds structured headings and page breaks.

Step 9: Temporary Files Cleanup
  Function: cleanup_files
  Deletes the temporary files architecture.txt, resources_with_expanded_metadata.csv, and Output.docx, ensuring no residual files remain after execution.

Step 10: CSV Metadata Export
  Function: save_resources_with_expanded_metadata_to_csv
  Aggregates and flattens resource metadata and saves the details to resources_with_expanded_metadata.csv, with unique keys derived from all metadata fields.

Step 11: Architectural Review Process
  Functions:
    ArchitecturalReviewer: reviews and suggests improvements to the documentation.
    DocCreator: incorporates reviewer suggestions into the documentation.
  Purpose: iterative refinement for high-quality documentation.

Step 12: HTTP Trigger Function
  Function: smartdocs
  Accepts HTTP requests with tag_key and tag_value parameters.
  Orchestrates the entire workflow: resource discovery, metadata retrieval, documentation generation, and file cleanup.
  Responds with success or error messages.

Components
Azure OpenAI Service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. In this scenario, the service handles all the natural language understanding and generates the communications sent to customers.
Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running.
Azure Functions managed identities allow the app to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
The Azure libraries for Python (SDK) are open-source libraries designed to simplify provisioning, managing, and using Azure resources from Python application code.

Potential use cases
The Azure OpenAI Smart Doc Creator Function App, like all the proposed solutions, can be modified to suit your needs. It is of practical help whenever you need to obtain the full configuration, in terms of metadata, of the resources and services that make up a workload.
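As a closing illustration, the ArchitecturalReviewer/DocCreator refinement loop described in the dataflow can be sketched generically. The round limit and the convention that the reviewer returns None to approve a draft are assumptions for this sketch, not details from the article; the two callables stand in for the OpenAI-backed agents.

```python
def refine_documentation(draft, reviewer, creator, max_rounds=3):
    """Iteratively refine documentation: the reviewer returns improvement
    suggestions (or None when satisfied) and the creator folds each round
    of feedback back into the content."""
    content = draft
    for _ in range(max_rounds):
        feedback = reviewer(content)
        if feedback is None:  # reviewer approves the current draft
            break
        content = creator(content, feedback)
    return content
```

In the real Function App, reviewer and creator would each wrap a prompt to Azure OpenAI; the loop structure itself is what bounds cost and guarantees termination.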
Contributors
Principal authors:
  Tommaso Sacco | Cloud Solutions Architect
  Simone Verza | Cloud Solution Architect
Extended contributions:
  Saverio Lorenzini | Senior Cloud Solution Architect
  Andrea De Gregorio | Technical Specialist
  Gianluca De Rossi | Technical Specialist
Special thanks:
  Carmelo Ferrara | Director, CSA
  Marco Crippa | Sr. CSA Manager

Building scalable and persistent AI applications with LangChain, Instaclustr, and Azure NetApp Files
Discover the powerful combination of LangChain and LangGraph for building stateful AI applications, and unlock the benefits of using a managed database service like NetApp® Instaclustr® backed by Azure NetApp Files for seamless data persistence and scalability.

Announcing comprehensive guidance for AI adoption and architecture
The pace of AI innovation is incredibly fast, with new models and solutions emerging regularly. To keep up with these technological advancements, organizations are striving to meet the demand for scalable, efficient AI solutions. The rapidity of change places enormous pressure on organizations to scale quickly while also meeting reliability, security, performance, and cost-efficiency needs along the way. According to RAND research, over 80% of early AI adoptions fail because customers miss critical steps in preparing their organizations to consider all aspects of building and running AI workloads.

Microsoft is committed to helping organizations successfully navigate this journey of cloud and AI transformation. Over the past 18 months, Microsoft has published design patterns, baseline reference architectures, application landing zones, and a variety of Azure service guides for Azure OpenAI workloads. We have also developed specific financial best practices, as well as pricing and cost management features, to make it easier to optimize AI investments. This guidance and these features have been pulled together within Azure Essentials, which brings together curated best practices and product experiences from customers and partners, along with reference architectures, skilling, tools, and resources, into a single destination to help you maximize the value of your cloud investments. The Azure Essentials resource kit includes detailed guidance tailored to specific use cases and business scenarios, including achieving secure migration, activating your data for AI innovation, and building and modernizing AI applications. As you prepare to adopt AI at scale, the guidance within the Azure Essentials resource kit helps you become AI-ready.

This week, we are excited to announce industry-leading guidance for AI adoption and architectural design.
This guidance ties together all of the content from the past 18 months into a comprehensive, methodical approach that sets organizations up for AI success while ensuring that AI workloads are well-architected. Through thousands of customer engagements focused on AI adoption, teams of Microsoft cloud solution architects, product engineers, and content developers have developed specific guidance for the Microsoft Cloud Adoption Framework for Azure (CAF) and the Microsoft Azure Well-Architected Framework (WAF). As a result, all of the recommendations and best practices are based on customer-proven experience that future customers can count on.

New: Cloud Adoption Framework (CAF) – AI scenario
The AI scenario within the Cloud Adoption Framework provides prescriptive guidance that prepares organizations to adopt AI at scale. Over the past nine months, more than 100 Microsoft solution architects have contributed their AI adoption knowledge to this guidance. The result of this collaboration is a roadmap of checklists segmented for startups and enterprises. These checklists make it possible to start your adoption at any phase, while also double-checking that you haven't missed anything along the way.

One hallmark of this guidance is the technology strategy decision tree. It provides succinct, consumable logic for deciding which AI technology works best for your specific AI strategy. To see the full tree, click on this link.

Most (if not all) customers want to implement and adopt AI responsibly; the ramifications and risk of not doing so are simply too costly. Thus, the CAF methodologies have also been adapted to Responsible AI principles so organizations can build an AI foundation that supports the design, governance, and ongoing management of responsible AI workloads. The guidance helps users with everything from developing an adoption strategy through managing AI workloads in production.
NEW: Well-Architected Framework (WAF) – AI workloads
The AI workload guidance within the Azure Well-Architected Framework is a new set of best practices that helps AI architects meet functional and non-functional requirements for reliability, security, performance efficiency, operational excellence, and cost optimization. Designed to instill confidence in workload teams to make intelligent design decisions, the new WAF guidance for AI workloads takes a broad view, covering architectural considerations at all levels of the stack, including infrastructure, data layers, and application logic. You'll therefore find guidance from each of the WAF pillars blended into every level.

The WAF AI enhancements we're announcing this week build upon the Azure Well-Architected Framework refresh we launched last year. We've added checklists and tradeoffs to all pillars, which makes the guidance more actionable for workload teams, including solution architects, DevOps engineers, and data scientists. The WAF components are also made more actionable through workload designs, reference architectures, assessments, Azure Advisor recommendations, and Azure service guides. The WAF AI workload guidance covers both traditional machine learning and generative AI architectures, ensuring comprehensive support for your AI projects.

Prepare to scale your AI adoption
We are confident that this comprehensive guidance will support your organization in building and deploying AI solutions responsibly and effectively. Stay tuned for more updates and resources to help you on your AI adoption journey. The CAF and WAF AI adoption and architecture guidance makes it possible to adopt AI at scale while fully aligning with Trustworthy AI principles. This guidance is also embedded within Azure Essentials, which provides detailed step-by-step guidance through the AI adoption journey, giving organizations a clear path to maximize the value of their AI investment.
As you prepare to become AI-ready, these resources will help you get started:
  Access the Cloud Adoption Framework AI scenario documentation for the guidance you need to adopt AI at scale.
  Leverage the Azure Well-Architected Framework AI workloads documentation for guidance on securely designing, building, and managing your AI workloads.
  Discover comprehensive skilling with free, self-paced Azure AI Plans on Learn to further develop your Azure adoption skills and begin your AI adoption journey with confidence.
  Learn more about Azure Innovate, Azure Migrate and Modernize, and Azure Essentials to understand how they can help you accelerate AI adoption and drive innovation in your business.

Ready to take action? Connect with Microsoft Azure sales or reach out to a qualified partner. If you have a Unified contract with Microsoft Support, there are multiple engagement opportunities based on CAF and WAF to help you accelerate your Azure and AI deployments.

AI Studio End-to-End Baseline Reference Implementation
Discover the Future of AI Deployment with Azure AI Studio's Baseline Reference Implementation
Azure AI Studio is reshaping the landscape of cloud AI integration with its commitment to operational excellence and strategic alignment with core business objectives. We are thrilled to introduce Azure AI Studio's end-to-end baseline reference implementation: a streamlined architecture crafted for seamless, scalable, and secure AI cloud deployments.

Embark on a journey to deploy sophisticated AI workloads with confidence, supported by Azure AI Studio's robust baseline architecture. Whether it's hosting interactive AI playgrounds, constructing complex AI workflows with Prompt flow, or ensuring resilient and secure deployments within Azure's managed network environment, this implementation is your blueprint for success. Embrace a new era of AI innovation where security and scalability converge with organizational compliance and governance. Join us in deploying tomorrow's AI solutions, today.