Azure OpenAI Service
The Future of AI: Customizing AI agents with the Semantic Kernel agent framework
The blog post Customizing AI agents with the Semantic Kernel agent framework discusses the capabilities of the Semantic Kernel SDK, an open-source tool developed by Microsoft for creating AI agents and multi-agent systems. It highlights the benefits of using single-purpose agents within a multi-agent system to achieve more complex workflows with improved efficiency. The Semantic Kernel SDK offers features like telemetry, hooks, and filters to ensure secure and responsible AI solutions, making it a versatile tool for both simple and complex AI projects.
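To give a flavor of what that looks like in code, here is a minimal sketch of a single-purpose agent built with the Semantic Kernel Python SDK; the class names and call signatures follow one recent release of the agent framework and are assumptions that may differ in the version you install.

```python
# A minimal single-purpose agent sketch (assumed to match a recent
# semantic-kernel Python release; the agent API has changed across versions).
import asyncio

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion


async def main() -> None:
    # AzureChatCompletion() reads the Azure OpenAI endpoint, key, and
    # deployment name from environment variables.
    summarizer = ChatCompletionAgent(
        service=AzureChatCompletion(),
        name="Summarizer",
        instructions="Summarize the user's text in three short bullet points.",
    )
    # A single, narrowly scoped agent; several of these can be composed
    # into a multi-agent workflow as the post describes.
    response = await summarizer.get_response(
        messages="Semantic Kernel is an open-source SDK for building AI agents."
    )
    print(response.content)


asyncio.run(main())
```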
The Future of AI: Reduce AI Provisioning Effort - Jumpstart your solutions with AI App Templates

In the previous post, we introduced Contoso Chat – an open-source RAG-based retail chat sample for Azure AI Foundry that serves as both an AI App template (for builders) and the basis for a hands-on workshop (for learners). We also briefly talked about the five stages in the developer workflow (provision, setup, ideate, evaluate, deploy) that take developers from the initial prompt to a deployed product. But how can that sample help you build your app? The answer lies in developer tools and AI App templates that jumpstart productivity by giving you a fast start and a solid foundation to build on. In this post, we answer that question with a closer look at Azure AI App templates - what they are, and how we can jumpstart our productivity with a reuse-and-extend approach that builds on open-source samples for core application architectures.
The Future of AI: Harnessing AI for E-commerce - personalized shopping agents

Explore the development of personalized shopping agents that enhance user experience by providing tailored product recommendations based on uploaded images. Leveraging Azure AI Foundry, these agents analyze images for apparel recognition and generate intelligent product recommendations, creating a seamless and intuitive shopping experience for retail customers.
The Future of AI: Power Your Agents with Azure Logic Apps

Building intelligent applications no longer requires complex coding. With advancements in technology, you can now create agents using cloud-based tools to automate workflows, connect to various services, and integrate business processes across hybrid environments without writing any code.
Use generative AI to extract structured data out of emails

One thing we regularly hear from clients is that they receive information that is key to their business, such as order requests, via email in an unstructured format, and sometimes there is structured information within the body of those emails in a variety of table formats. In today’s fast-paced digital world, businesses need a way to automatically extract, structure, and integrate this information into their existing applications. Whether it’s leveraging AI-powered document processing, natural language processing (NLP), or intelligent automation, the right approach can transform email-based orders into structured, actionable data. In this blog, we’ll explore one such scenario where AI can be leveraged to extract information in tabular format that has been provided within an email. The emails contextually belong to a specific domain, but the tables do not have consistent headers or shapes. Sometimes the body of one email can contain multiple tables.

The Problem Statement

Extract tabular information with varying table formats from emails.

The typical approach to this problem involves rule-based processing, where individual tables are extracted and merged based on predefined logic. However, given the variety of email formats from hundreds or even thousands of different senders, maintaining such rule-based logic becomes increasingly complex and difficult to manage. A more optimal solution is leveraging the cognitive capabilities of generative AI, which can dynamically adapt to different table structures, column names, and formatting variations—eliminating the need for constant rule updates while improving accuracy and scalability.

To create this sample code, I used the email below with test data, containing two tables with inconsistent column names. It provides information about some upcoming trainings. Please note the difference between the column headers:

Hi there,

Regarding the upcoming trainings, this is the list:

| Event Date | Description of Event | Length | Grade |
|------------|----------------------|----------|-------|
| 2025-01-21 | Digital environments | 20 hours | 5 |
| 2025-03-01 | AI for Industry A | 10 hours | 3 |

and some further events in below list

| Date | Subject | Duration | Grade |
|------------|------------------------|---------|-------|
| 2025-01-21 | Digital environments 2 | 2 days | 1 |
| 2025-03-01 | AI for Industry B | 2 weeks | 4 |

These sessions are designed to be interactive and informative, so your timely participation is crucial. Please make sure to log in or arrive on time to avoid missing key insights. If you have any questions or need assistance, feel free to reach out. Looking forward to seeing you there!

Thanks,
Azadeh

These are the two tables within the email, and we need to extract them into one consistent table format with all the rows from both tables.

Table 1

| Event Date | Description of Event | Length | Grade |
|------------|----------------------|----------|-------|
| 2025-01-21 | Digital environments | 20 hours | 5 |
| 2025-03-01 | AI for Industry A | 10 hours | 3 |

Table 2

| Date | Subject | Duration | Grade |
|------------|------------------------|---------|-------|
| 2025-01-21 | Digital environments 2 | 2 days | 1 |
| 2025-03-01 | AI for Industry B | 2 weeks | 4 |

To extract the tabular data into one single table in JSON format, I am using Python with the below libraries installed in my environment:

- pandas
- beautifulsoup4
- openai
- lxml

The Code

I use Azure OpenAI Service with a GPT-4o deployment. The code below is just one way of solving this type of problem and can be customized or improved to fit other similar problems. I have provided some guidelines about merging the tables and column-name similarity in the user prompt.
This sample code uses an email message saved in .eml format in a local path, but the email library has other capabilities to help you connect to a mailbox and fetch the emails.

```python
import email
import pandas as pd
from bs4 import BeautifulSoup
import os
from openai import AzureOpenAI

endpoint = os.getenv("ENDPOINT_URL", "https://....myendpointurl....openai.azure.com/")
deployment = os.getenv("DEPLOYMENT_NAME", "gpt-4o")
subscription_key = os.getenv("AZURE_OPENAI_API_KEY", "myapikey")

# Initialize Azure OpenAI Service client with key-based authentication
client = AzureOpenAI(
    azure_endpoint=endpoint,
    api_key=subscription_key,
    api_version="2024-05-01-preview",
)


# Process email content with GPT-4o
def extract_information(email_body, client):
    soup = BeautifulSoup(email_body, "html.parser")
    body = soup.get_text()
    print(body)

    # Prepare the chat prompt
    chat_prompt = [
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You are an AI assistant that is expert in extracting structured data from emails."
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": f"Extract the required information from the following email and format it as JSON and consolidate the tables using the common column names. For example the columns length and duration are the same and the columns Event and Subject are the same:\n\n{body}"
                }
            ]
        }
    ]

    messages = chat_prompt

    # Generate the completion
    completion = client.chat.completions.create(
        model=deployment,
        messages=messages,
        max_tokens=800,
        temperature=0.1,
        top_p=0.95,
        frequency_penalty=0,
        presence_penalty=0,
        stop=None,
        stream=False
    )

    return completion.choices[0].message.content


email_file_name = r'...path to your file....\Test Email with Tables.eml'

with open(email_file_name, "r") as f:
    msg = email.message_from_file(f)

email_body = ""
for part in msg.walk():
    if part.get_content_type() == "text/plain":
        email_body = part.get_payload(decode=True).decode()
    elif part.get_content_type() == "text/html":
        email_body = part.get_payload(decode=True).decode()

extracted_info = extract_information(email_body, client)
print(extracted_info)
```

The output is:

```
[
  { "Event": "Digital environments",   "Date": "2025-01-21", "Length": "20 hours", "Grade": 5 },
  { "Event": "AI for Industry A",      "Date": "2025-03-01", "Length": "10 hours", "Grade": 3 },
  { "Event": "Digital environments 2", "Date": "2025-01-21", "Length": "2 days",   "Grade": 1 },
  { "Event": "AI for Industry B",      "Date": "2025-03-01", "Length": "2 weeks",  "Grade": 4 }
]
```

Key points in the code:

- Read an email and extract the body.
- Use a generative AI model with the right instruction prompt to complete the task.
- The model follows the instructions and creates a combined, consistent table.
- Get the output in the required format, e.g. JSON.

I hope you find this blog post helpful and can apply it to your use case or domain. Or you can simply take away the idea of how to use generative AI to solve a problem, instead of building layers of custom logic.
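As a small follow-on (not part of the original sample), the pandas import above can be put to use by loading the model's JSON output into a DataFrame; this sketch assumes the model returned a plain JSON array like the output shown, possibly wrapped in a markdown fence.

```python
import json

import pandas as pd

# extracted_info is the JSON string returned by extract_information() above.
# Defensively strip a markdown fence in case the model wraps its answer in one.
raw = extracted_info.strip()
raw = raw.removeprefix("```json").removeprefix("```").removesuffix("```").strip()

records = json.loads(raw)                 # list of dicts: Event, Date, Length, Grade
df = pd.DataFrame(records)

df["Date"] = pd.to_datetime(df["Date"])   # typed dates for sorting and filtering
print(df.sort_values("Date"))
```

From here the rows can be validated, deduplicated, or written to a database like any other DataFrame.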
The Future Of AI: Deconstructing Contoso Chat - Learning GenAIOps in practice

How can AI engineers build applied knowledge for GenAIOps practices? By deconstructing working samples! In this multi-part series, we deconstruct Contoso Chat (a RAG-based retail copilot sample) and use it to learn the tools and workflows to streamline our end-to-end developer journey using Azure AI Foundry.
Announcing Model Fine-Tuning Collaborations: Weights & Biases, Scale AI, Gretel and Statsig

As AI continues to transform industries, the ability to fine-tune models and customize them for specific use cases has become more critical than ever. Fine-tuning can enable companies to align models with their unique business goals, ensuring that AI solutions deliver results with greater precision. However, organizations face several hurdles in their model customization journey:

- Lack of end-to-end tooling: Organizations struggle with fine-tuning foundation models due to complex processes and the absence of tracking and evaluation tools for modifications.
- Data scarcity and quality: Limited access to large, high-quality datasets, along with privacy issues and high costs, complicates model training and fine-tuning.
- Shortage of fine-tuning expertise and pre-trained models: Many companies lack specialized knowledge and access to refined models for fine-tuning.
- Insufficient experimentation tools: A lack of tools for ongoing experimentation in production limits optimization of key variables like model diversity and operational efficiency.

To address these challenges, Azure AI Foundry is pleased to announce new collaborations with Weights & Biases, Scale AI, Gretel and Statsig to streamline the process of model fine-tuning and experimentation through advanced tools, synthetic data and specialized expertise.

Weights & Biases integration with Azure OpenAI Service: Making end-to-end fine-tuning accessible with tooling

The integration of Weights & Biases with Azure OpenAI Service offers a comprehensive end-to-end solution for enterprises aiming to fine-tune foundation models such as GPT-4, GPT-4o, and GPT-4o mini. This collaboration provides a seamless connection between Azure OpenAI Service and Weights & Biases Models, which offers powerful capabilities for experiment tracking, visualization, model management, and collaboration. With the integration, users can also utilize Weights & Biases Weave to evaluate, monitor, and iterate in real time on the performance of AI applications powered by their fine-tuned models.

Azure's scalable infrastructure allows organizations to handle the computational demands of fine-tuning, while Weights & Biases offers robust capabilities for fine-tuning experimentation and evaluation of LLM-powered applications. Whether optimizing GPT-4o for complex reasoning tasks or using the lightweight GPT-4o mini for real-time applications, the integration simplifies the customization of models to meet enterprise-specific needs. This collaboration addresses the growing demand for tailored AI models in industries such as retail and finance, where fine-tuning can significantly improve customer service chatbots or complex financial analysis.

The Azure OpenAI Service and Weights & Biases integration is now available in public preview. For further details on the integration, including real-world use cases and a demo, refer to the blog here.

Scale AI and Azure Collaboration: Confidently Implement Agentic GenAI Solutions in Production

Scale AI collaborates with Azure AI Foundry to offer advanced fine-tuning and model customization for enterprise use cases. It enhances the performance of Azure AI Foundry models by providing high-quality data transformation, fine-tuning and customization services, end-to-end solution development, and specialized generative AI expertise.
This collaboration helps improve the performance of AI-driven applications and Azure AI services such as Azure AI Agent in Azure AI Foundry, while reducing production time and driving business impact.

"Scale is excited to partner with Azure to help our customers transform their proprietary data into real business value with end-to-end GenAI Solutions, including model fine-tuning and customization in Azure." - Vijay Karunamurthy, Field CTO, Scale AI

Check out a demo in the BRK116 session showcasing how Scale AI's fine-tuned models can improve agents in Azure AI Foundry and Copilot Studio. In the coming months, Scale AI will offer fine-tuning services for Azure AI Agents in Azure AI Foundry. For more details, please refer to this blog and start transforming your AI initiatives by exploring Scale AI on the Azure Marketplace.

Gretel and Azure OpenAI Service Collaboration: Revolutionizing the data pipeline for custom AI models

Azure AI Foundry is collaborating with Gretel, a pioneer in synthetic data and privacy technology, to remove data bottlenecks and bring advanced AI development capabilities to our customers. Gretel's platform enables Azure users to generate high-quality datasets for ML and AI through multiple approaches - from prompts and seed examples to differential-privacy-preserved synthetic data. This technology helps organizations overcome key challenges in AI development, including data availability, privacy requirements, and high development costs, with support for structured, unstructured, and hybrid text data formats.

Through this collaboration, customers can seamlessly generate datasets tailored to their specific use cases and industry needs using Gretel, then use them directly in Azure OpenAI Service for fine-tuning. This integration greatly reduces both costs and time compared to traditional data labeling methods, while maintaining strong privacy and compliance standards. The collaboration enables new use cases for Azure AI Foundry customers, who can now easily use synthetic data generated by Gretel for training and fine-tuning models. These include cost-effective improvements for Small Language Models (SLMs), improved reasoning abilities of Large Language Models (LLMs), and scalable data generation from limited real-world examples.

This value is already being realized by leading enterprises. "EY is leveraging the privacy-protected synthetic data to fine-tune Azure OpenAI Service models in the financial domain," said John Thompson, Global Client Technology AI Lead at EY. "Using this technology with differential privacy guarantees, we generate highly accurate synthetic datasets—within 1% of real data accuracy—that safeguard sensitive financial information and prevent PII exposure. This approach ensures model safety through privacy attack simulations and robust data quality reporting. With this integration, we can safely fine-tune models for our specific financial use cases while upholding the highest compliance and regulatory standards."

The Gretel integration with Azure OpenAI Service is available now through the Gretel SDK. Explore this blog describing a finance industry case study and check out the details in the technical documentation for fine-tuning Azure OpenAI Service models with synthetic data from Gretel. Visit this page to learn more.
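To make that workflow concrete, below is a minimal sketch of submitting a fine-tuning job to Azure OpenAI Service with the openai Python SDK, using a chat-format JSONL file such as one exported from a synthetic data pipeline. The file name, environment variables, API version, and base model snapshot are illustrative assumptions, not values prescribed by the collaborations above.

```python
import os

from openai import AzureOpenAI

# Assumed environment variables and API version; adjust to your resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

# Upload a chat-format JSONL training file (hypothetical file name),
# for example one exported from a synthetic data generation pipeline.
training_file = client.files.create(
    file=open("synthetic_train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against a base model snapshot (illustrative choice).
job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",
    training_file=training_file.id,
)
print(job.id, job.status)

# Check on the job later; once it succeeds, the resulting model can be
# deployed and called like any other Azure OpenAI deployment.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```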
Statsig and Azure Collaboration: Enabling Experimentation in AI Applications

Statsig is a platform for feature management and experimentation that helps teams manage releases, run powerful experiments, and measure the performance of their products. Statsig and Azure AI Foundry are collaborating to enable customers to easily configure and run experiments (A/B tests) in Azure AI-powered applications, using Statsig SDKs in Python, NodeJS and .NET. With these SDKs, customers can manage the configuration of their AI applications, manage the release of new configurations, run A/B tests to optimize model and application performance, and automatically collect metrics at the model and application level. Please check out this page to learn more about the collaboration and get detailed documentation here.
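As a rough illustration of what such an experiment could look like in Python, the sketch below uses Statsig's server SDK to choose an Azure OpenAI deployment per user; the experiment name, parameter name, and deployment values are hypothetical, and the exact SDK surface may differ in the version you install.

```python
import os

from statsig import StatsigUser, statsig

# Initialize the Statsig server SDK with a server secret (assumed env var).
statsig.initialize(os.environ["STATSIG_SERVER_SECRET"])


def pick_deployment(user_id: str) -> str:
    """Return the model deployment assigned to this user by a hypothetical
    'model_comparison' experiment; fall back to a default outside the test."""
    user = StatsigUser(user_id)
    experiment = statsig.get_experiment(user, "model_comparison")
    # 'deployment_name' is an assumed experiment parameter.
    return experiment.get("deployment_name", "gpt-4o-mini")


deployment = pick_deployment("user-123")
print(f"Routing this user's chat requests to deployment: {deployment}")

statsig.shutdown()
```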
Conclusion

The new collaborations between Azure and Weights & Biases, Scale AI, Gretel and Statsig represent a significant step forward in simplifying the process of AI model customization. These collaborations aim to address the common pain points associated with fine-tuning models, including the lack of end-to-end tooling, data scarcity and privacy concerns, and the lack of expertise and experimentation tooling. Through these collaborations, Azure AI Foundry will empower organizations to fine-tune and customize models more efficiently, ultimately enabling faster, more accurate AI deployments. Whether it's through better model tracking, access to synthetic data, or scalable data preparation services, these collaborations will help businesses unlock the full potential of AI.

Ignite 2024: Streamlining AI Development with an Enhanced User Interface, Accessibility, and Learning Experiences in Azure AI Foundry portal

Announcing Azure AI Foundry, a unified platform that simplifies AI development and management. The platform portal (formerly Azure AI Studio) features a revamped user interface, an enhanced model catalog, a new management center, and improved accessibility and learning experiences, making it easier than ever for developers and IT admins to design, customize, and manage AI apps and agents efficiently.
Accelerate enterprise GenAI application development with tracing in Azure AI Foundry

We are excited to announce the public preview of tracing in Azure AI Foundry, a powerful capability designed to enhance monitoring and debugging for your machine learning models and applications. Tracing allows you to gain deeper insights into the performance and behavior of your models, to help ensure they operate efficiently and effectively.

Enable comprehensive monitoring and analysis of your application's execution

Tracing allows you to trace application processes from input to output, review intermediate results, and measure execution times. Additionally, detailed logs for each function call in your workflow are accessible. You can inspect parameters, metrics, and outputs of each AI model used, for easier debugging and optimization of your application. The Azure AI Foundry SDK supports tracing to various endpoints, including local viewers (Prompty trace viewer and Aspire dashboard), Azure AI Foundry, and Azure Monitor Application Insights. This flexibility helps you integrate tracing with any application, facilitating testing, evaluation, and deployment across different orchestrations and existing GenAI frameworks.

Key Capabilities

Basic debugging

In situations where your application encounters an error, the trace functionality becomes extremely useful. It allows you to delve into the function causing the error, assess the frequency of exceptions, and troubleshoot using the provided exception message and stack trace.

Detailed execution logs

Tracing captures detailed traces of your model's execution, including data preprocessing, feature extraction, model inference, and post-processing steps. These details provide valuable insights into the inner workings of your models, helping you identify bottlenecks and optimize performance. For example, understanding the call flow of an application is crucial for complex AI systems where multiple components and services interact. By enabling tracing, developers can identify bottlenecks, understand dependencies, and optimize the flow for better performance.

Performance metrics

In addition to execution logs, tracing collects key performance metrics, such as latency and token utilization. These metrics allow you to monitor the efficiency of your models and make data-driven decisions to improve their performance. Building monitoring dashboards with the data collected from tracing can provide real-time visibility into the system's health. These dashboards can track key performance indicators (KPIs), provide alerts on anomalies, and help ensure that the AI services are running as expected.

Error tracking

Tracing helps you identify and troubleshoot errors in your models by capturing detailed error logs. Whether it's a data preprocessing issue or a model inference error, tracing provides the information you need to diagnose and fix problems quickly. This is particularly useful for capturing runtime exceptions, such as rate limiting, which are critical for maintaining the reliability of your applications.

Evaluations and user feedback

You can attach evaluation metrics and user feedback to traces via online evaluation capabilities in Azure AI Foundry. Online evaluation allows you to incorporate real-world performance data and user insights into your monitoring process, to assess whether your models meet the desired quality standards. The Azure AI Foundry SDK simplifies the process of downstream evaluation, facilitating continuous improvement and validation of AI models against real-world data. Additionally, capturing user evaluations and interactions can provide insights into how users are engaging with the AI features, to inform user-centric improvements.
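Collecting any of these signals starts with instrumenting the application itself. As a rough sketch of one possible setup (assuming the azure-ai-inference and azure-monitor-opentelemetry packages; your application may use a different SDK or instrumentation), enabling tracing for chat completions and exporting the spans to Application Insights might look like this:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.ai.inference.tracing import AIInferenceInstrumentor
from azure.core.credentials import AzureKeyCredential
from azure.monitor.opentelemetry import configure_azure_monitor

# Export OpenTelemetry spans to Application Insights (assumed env var name).
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)

# Create spans automatically for model inference calls made through the SDK.
AIInferenceInstrumentor().instrument()

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_AI_ENDPOINT"],                    # assumed env var
    credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),   # assumed env var
)

# This call is now traced: latency, token usage, and any errors are captured
# as spans that can be inspected in Azure AI Foundry or Azure Monitor.
response = client.complete(
    messages=[UserMessage(content="Summarize why tracing matters for GenAI apps.")]
)
print(response.choices[0].message.content)
```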
Visualize Traces

Azure AI Foundry provides robust tools for visualizing traces, both for local debugging and production-level monitoring. You can use these tools to gain a better understanding of your model's behavior and performance. The visualization capabilities include:

- Local debugging: Visualize traces during development to identify and resolve issues early, helping ensure that models are optimized before deployment.
- Visualize the data via Azure AI Foundry portal and Azure Monitor: In the post-deployment phase, developers often want to delve deeper into their applications' performance to optimize it further. For instance, you might want to monitor your GenAI application's performance, usage, and costs. In this scenario, the trace data for each request, the aggregated metrics, and user feedback become vital. Tracing seamlessly integrates with Azure Monitor, allowing you to visualize and analyze your model's performance metrics and logs using a customizable dashboard in Azure Monitor Application Insights. This integration provides a holistic view of your model's health and performance, enabling you to make informed decisions.

Getting Started

To start using tracing in Azure AI Foundry and Azure Monitor, follow these simple steps:

- Log traces: Enable tracing via the Azure AI SDK to capture traces for the model inference API.
- Configure logging: Set up the logging configuration to capture the desired level of detail for your model's execution.
- Enable tracing in AI Studio: In your Azure AI project, navigate to Tracing and enable the feature for your models.
- Monitor and analyze: Use Azure Monitor to visualize and analyze the collected logs and metrics, gaining insights into your model's performance.

Find detailed guidance in our documentation:

- Overview of tracing capabilities in Azure AI Foundry
- Learn how to implement and use tracing with the Azure AI Foundry SDK
- Visualize your traces

Build production-ready GenAI apps with Azure AI Foundry

Want to learn about more ways to build and monitor enterprise-ready GenAI applications? Here are other exciting announcements from Microsoft Ignite to support your GenAIOps workflows:

- New ways to evaluate generative AI outputs for quality and safety
- New ways to monitor performance with Azure AI Foundry and Azure Monitor

Whether you're joining in person or online, we can't wait to see you at Microsoft Ignite 2024. We'll share the latest from Azure AI and go deeper into best practices for GenAIOps with these sessions:

- Microsoft Ignite Keynote
- Multi-agentic GenAIOps from prototype to production with dev tools
- Azure AI and the dev toolchain you need to infuse AI in all your apps