Azure AI Foundry SDK
The Future of AI: Customizing AI agents with the Semantic Kernel agent framework
The blog post Customizing AI agents with the Semantic Kernel agent framework discusses the capabilities of the Semantic Kernel SDK, an open-source tool developed by Microsoft for creating AI agents and multi-agent systems. It highlights the benefits of using single-purpose agents within a multi-agent system to achieve more complex workflows with improved efficiency. The Semantic Kernel SDK offers features like telemetry, hooks, and filters to ensure secure and responsible AI solutions, making it a versatile tool for both simple and complex AI projects.

The Future of AI: Reduce AI Provisioning Effort - Jumpstart your solutions with AI App Templates
In the previous post, we introduced Contoso Chat, an open-source RAG-based retail chat sample for Azure AI Foundry that serves as both an AI App template (for builders) and the basis for a hands-on workshop (for learners). We also briefly covered the five stages of the developer workflow (provision, setup, ideate, evaluate, deploy) that take developers from an initial prompt to a deployed product. But how can that sample help you build your own app? The answer lies in developer tools and AI App templates that jumpstart productivity by giving you a fast start and a solid foundation to build on. In this post, we take a closer look at Azure AI App templates: what they are, and how you can jumpstart your productivity with a reuse-and-extend approach that builds on open-source samples for core application architectures.

The Future of AI: Harnessing AI for E-commerce - personalized shopping agents
Explore the development of personalized shopping agents that enhance the user experience by providing tailored product recommendations based on uploaded images. Leveraging Azure AI Foundry, these agents analyze images for apparel recognition and generate intelligent product recommendations, creating a seamless and intuitive shopping experience for retail customers.

The Future of AI: Power Your Agents with Azure Logic Apps
Building intelligent applications no longer requires complex coding. With advancements in technology, you can now create agents using cloud-based tools to automate workflows, connect to various services, and integrate business processes across hybrid environments without writing any code.

The Future Of AI: Deconstructing Contoso Chat - Learning GenAIOps in practice
How can AI engineers build applied knowledge for GenAIOps practices? By deconstructing working samples! In this multi-part series, we deconstruct Contoso Chat (a RAG-based retail copilot sample) and use it to learn the tools and workflows that streamline our end-to-end developer journey using Azure AI Foundry.

New evaluation tools for multimodal apps, benchmarking, CI/CD integration and more
If not designed carefully, GenAI applications can produce outputs that have errors, lack grounding in verifiable data, or are simply irrelevant or incoherent, resulting in poor customer experiences and attrition. Even worse, an application's outputs could perpetuate bias, promote misinformation, or expose organizations to malicious attacks. By conducting proactive risk evaluations throughout the GenAIOps lifecycle, organizations can better understand and mitigate risks to achieve more secure, safe, and trustworthy customer experiences.

Whether you're evaluating and comparing models at the start of an AI project or running a final evaluation of your application to demonstrate production-readiness, every evaluation has these key components:
• The evaluation target: the thing you're trying to assess, whether a base model or an application in development or in production.
• The evaluation data: the inputs and generated outputs that form the basis of evaluation.
• The evaluators, or metrics, that help measure and compare performance in a consistent, interpretable way.

Today, we're excited to announce enhancements across these key components, making evaluations in Azure AI Foundry even more comprehensive and accessible for a broad set of generative AI use cases. Here's a quick summary before we dive into the details:

Simplify model selection with enhanced benchmarks and model evaluations
• We've enhanced the model benchmarking experience in Azure AI Foundry, adding new performance metrics (e.g. latency, estimated cost, and throughput) and generation quality metrics. This allows users to compare base models across diverse criteria, to better understand potential trade-offs.
• Evaluate and compare base models using your own private data. This capability simplifies the model selection process by allowing organizations to compare how different models behave in real-world settings and assess which models align best with their unique requirements.

Drive robust, measurable insights with new and advanced evaluators
• New risk and safety evaluations for image and multimodal content provide an out-of-the-box way to assess the frequency and severity of harmful content in generative AI interactions containing imagery. These evaluations can help inform targeted mitigations and demonstrate production-readiness.
• Evaluations for quality metrics are now generally available for text-based generative AI models and apps. Using either no-code or code-first experiences, users can assess generative AI models and applications for key quality attributes such as groundedness, coherence, recall, and fluency.

Operationalize evaluations as part of your GenAIOps
• A new Python API allows developers to run built-in and custom text-based evaluations remotely in the cloud, streamlining the evaluation process at scale with the convenience of easy CI/CD integration.
• GitHub Actions for GenAI evaluations enable developers to run automated evaluations of their models and applications from within their coding environment, for faster experimentation and iteration.
• In related news, continuous online evaluations of generated outputs are now available, allowing teams to monitor and improve AI applications in production.
• Additionally, as applications transition from development to production, developers will soon be able to document and share evaluation results, along with other key information about their fine-tuned models or applications, through AI reports.
With these expanded capabilities, cross-functional teams are empowered to iterate, launch, and govern their GenAI applications with greater observability and confidence.

New benchmarking experience in Azure AI Foundry

Picture this: You're a developer exploring the Azure AI model catalog, trying to find the right fit for your use case. You use search filters, explore available models, and read the model cards to identify strong contenders, but you're still not sure which model to choose. Why? Selecting the optimal model for an application isn't just about learning as much as you can about each individual model. Organizations need to understand and compare performance from multiple angles, including accuracy, relevance, coherence, cost, and computational efficiency, to understand the trade-offs.

Now, an enhanced benchmarking experience enables developers to view comprehensive, detailed performance data for models in the Azure AI model catalog while also allowing for direct comparison across multiple models. This gives developers a clearer picture of each model's relative performance across critical metrics, so they can identify models that meet business requirements. Azure AI Foundry supports four categories of metrics to facilitate robust comparisons:
• Quality: Assess the accuracy, groundedness, coherence, and relevance of each model's output.
• Cost: Assess the estimated costs associated with deploying and running the models.
• Latency: Assess the response times for each model to understand speed and responsiveness.
• Throughput: Assess the number of tasks each model can process within a specific time frame, to gauge scalability and efficiency.
Learn more in our documentation.

Evaluate and compare models using your own data

Once you have compared various models using benchmarks on public data, you might still be wondering which model will perform best for your specific use case. At this point, it would be more helpful to compare each model using your own test dataset, one that reflects the inputs and outputs typical of your intended use case. We're excited to provide developers with an easier way to do just that. Now, developers can easily evaluate and compare both base models and fine-tuned models from within the Azure AI Foundry portal. This is also helpful when comparing base models to fine-tuned models, to see the impact of your training data. With this update, developers can assess models using their own test data and pre-built quality and safety evaluators, for easier side-by-side model comparisons and data-driven decisions when building GenAI applications. Key components of this update, now available in public preview, include:
• A new entry point in the Azure AI model catalog to guide users through model evaluation.
• Expanded support for Azure OpenAI Service and Models as a Service (MaaS) models, so developers can evaluate these models and user-defined prompts directly within the Azure AI Foundry portal.
• A simplified evaluation setup wizard, so both experienced GenAI developers and those new to GenAI can navigate and evaluate models with ease.
• A new tool for real-time test data generation, helping developers rapidly create sample data for evaluation purposes.
• An enhanced evaluation results page to help developers visualize and quickly grasp the trade-offs between various evaluation metrics.
Learn more in our documentation.

Evaluate for risk and safety in image and multimodal content

Risk and safety evaluations for images and multimodal content are now available in public preview in Azure AI Foundry.
These evaluations can help organizations assess the frequency and severity of harmful content in human and AI-generated outputs to prioritize relevant risk mitigations. For example, these evaluations can help assess content risks in cases where 1) text inputs yield image outputs, 2) a combination of image and text inputs produces text outputs, and 3) images containing text (like memes) generate text and/or image outputs. Azure AI Foundry provides AI-assisted evaluators to streamline these evaluations at scale, where each evaluator functions like a grading assistant, using consistent and predefined grading instructions to assess large datasets of inputs and outputs across specific target metrics. Today, organizations can use these evaluations to assess generated outputs for hateful or unfair, violent, sexual, and self-harm-related content, as well as protected materials that may present infringement risks. These evaluators use a large multimodal language model hosted by Microsoft to not only grade the test datasets but also provide explanations for the evaluation results, so they are interpretable and actionable.

Making evaluations actionable is essential. Evaluation insights can help organizations compare base models and fine-tuned models to see which models are a better fit for their application. Or, they can help inform proactive steps to mitigate risk, such as activating image and multimodal content filters in Azure AI Content Safety to detect and block harmful content in real time. After making changes, users can re-run an evaluation and compare the new scores to their baseline results side-by-side to understand the impact of their work and demonstrate production readiness for stakeholders. Learn more in our documentation.

Evaluate GenAI models and applications for quality

We're excited to announce the general availability of quality evaluators for GenAI in Azure AI Foundry, accessible through the code-first Azure AI Foundry SDK experience and the no-code Azure AI Foundry portal. These evaluators provide a scalable way to assess models and applications against key performance and quality metrics. This update also includes improvements to pre-existing AI-assisted metrics, as well as explanations for evaluation results to help ensure they are interpretable and actionable. Generally available evaluators include:

AI-assisted evaluators (these require an Azure OpenAI deployment to assist the evaluation), which are commonly used for retrieval augmented generation (RAG) and business and creative writing scenarios:
• Groundedness
• Retrieval
• Relevance
• Coherence
• Fluency
• Similarity

Natural language processing (NLP) evaluators, which support assessments of the accuracy, precision, and recall of generative AI:
• F1 score
• ROUGE score
• BLEU score
• GLEU score
• METEOR score

Learn more in our documentation.

Announcing a Python API for remote evaluation

Previously, developers could only run local evaluations on their own machines when using the Azure AI Foundry SDK. Now, we're providing developers with a new, simplified Python API to run remote evaluations in the cloud. This API supports both built-in and custom prompt-based evaluators, allowing for scalable evaluation runs, seamless integration into CI/CD pipelines, and a more streamlined evaluation workflow. Plus, remote evaluation means developers don't need to manage their own infrastructure for orchestrating evaluations. Instead, they can offload the task to Azure. Learn more in our documentation.
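To make the code-first path concrete, here is a minimal sketch of a local evaluation run using built-in quality evaluators from the azure-ai-evaluation package. The endpoint, deployment name, and data file below are placeholders, and exact class or parameter names may vary by SDK version, so treat this as an illustration rather than a definitive recipe; the same built-in evaluators can also be submitted for remote runs in the cloud, as described above.

```python
# Sketch: local quality evaluation with built-in evaluators (azure-ai-evaluation).
# The endpoint, deployment, and file names below are placeholders for illustration.
from azure.ai.evaluation import (
    evaluate,
    GroundednessEvaluator,
    RelevanceEvaluator,
    FluencyEvaluator,
)

# AI-assisted evaluators need an Azure OpenAI deployment to act as the grader.
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "<your-gpt-deployment>",
}

# Each JSONL row holds a query, the retrieved context, and the generated response.
result = evaluate(
    data="eval_data.jsonl",
    evaluators={
        "groundedness": GroundednessEvaluator(model_config),
        "relevance": RelevanceEvaluator(model_config),
        "fluency": FluencyEvaluator(model_config),
    },
    output_path="eval_results.json",
)

print(result["metrics"])  # aggregate scores per evaluator
```

In this sketch the aggregate metrics are printed locally, but the per-row results written to the output file are what make it easy to drill into low-scoring responses and compare runs side by side.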
GitHub Actions for GenAI evaluations are now available

Given trade-offs between business impact, risk, and cost, you need to be able to continuously evaluate your AI applications and run A/B experiments at scale. We are significantly simplifying this process with GitHub Actions that can be integrated seamlessly into existing CI/CD workflows in GitHub. With these actions, you can now run automated evaluations after each commit, using the Azure AI Foundry SDK to assess your applications for metrics such as groundedness, coherence, and fluency. First announced at GitHub Universe in October, these capabilities are now available in public preview.

GitHub Actions for online A/B experimentation are available to try in private preview. These enable developers to seamlessly and automatically run A/B experiments comparing different models, prompts, and/or general UX changes to an AI application after deploying to production as part of a CD workflow. Analysis via out-of-the-box model monitoring metrics and custom metrics is seamless, with results posted back directly to GitHub. To participate in the private preview, please sign up here.

Build production-ready GenAI apps with Azure AI Foundry

Want to learn about more ways to build trustworthy AI applications? Here are other exciting announcements from Microsoft Ignite to support your GenAIOps and governance workflows:
• Explore tracing and debugging capabilities to drive continuous improvement
• Monitor and improve GenAI apps in production
• Document and share evaluation results with business stakeholders

Whether you're joining in person or online, we can't wait to see you at Microsoft Ignite 2024. We'll share the latest from Azure AI and go deeper into best practices for evaluations and trustworthy AI in these sessions:
• Microsoft Ignite Keynote
• Trustworthy AI: Future trends and best practices
• Trustworthy AI: Advanced risk evaluation and mitigation
• Azure AI and the dev toolchain you need to infuse AI in all your apps
• Simulate, evaluate, and improve GenAI outputs with Azure AI Foundry

_________
Please note: This article was edited on Dec 30, 2024 to reflect the availability of risk and safety evaluations for images in public preview in Azure AI Foundry. This feature was previously announced as "coming soon" at Microsoft Ignite.

Ignite 2024: Streamlining AI Development with an Enhanced User Interface, Accessibility, and Learning Experiences in Azure AI Foundry portal
Announcing Azure AI Foundry, a unified platform that simplifies AI development and management. The platform's portal (formerly Azure AI Studio) features a revamped user interface, an enhanced model catalog, a new management center, and improved accessibility and learning experiences, making it easier than ever for developers and IT admins to design, customize, and manage AI apps and agents efficiently.

Continuously monitor your GenAI application with Azure AI Foundry and Azure Monitor
Now, Azure AI Foundry and Azure Monitor seamlessly integrate to enable ongoing, comprehensive monitoring of your GenAI application's performance from various perspectives, including token usage, operational metrics (e.g. latency and request count), and the quality and safety of generated outputs. With online evaluation, now available in public preview, you can continuously assess your application's outputs, regardless of its deployment or orchestration framework, using built-in or custom evaluation metrics. This approach can help organizations identify and address security, quality, and safety issues in both the pre-production and post-production phases of the enterprise GenAIOps lifecycle. Additionally, online evaluations integrate seamlessly with the new tracing capabilities in Azure AI Foundry, now available in public preview, as well as Azure Monitor Application Insights. Tying it all together, Azure Monitor enables you to create custom monitoring dashboards, visualize evaluation results over time, and set up alerts for advanced monitoring and incident response. Let's dive into how all these monitoring capabilities fit together to help you be successful when building enterprise-ready GenAI applications.

Observability and the enterprise GenAIOps lifecycle

The generative AI operations (GenAIOps) lifecycle is a dynamic development process that spans all the way from ideation to operationalization. It involves choosing the right base model(s) for your application, testing and making changes to the flow, and deploying your application to production. Throughout this process, you can evaluate your application's performance iteratively and continuously. This practice can help you identify and mitigate issues early and optimize performance as you go, helping ensure your application performs as expected. You can use the built-in evaluation capabilities in Azure AI Foundry, which now include remote evaluation and continuous online evaluation, to support end-to-end observability into your app's performance throughout the GenAIOps lifecycle. Online evaluation can be used in many different application development scenarios, including:
• Automated testing of application variants.
• Integration into DevOps CI/CD pipelines.
• Regularly assessing an application's responses for key quality metrics (e.g. groundedness, coherence, recall).
• Quickly responding to risky or inappropriate outputs that may arise during real-world use (e.g. content that is violent, hateful, or sexual).
• Production application monitoring and observability with Azure Monitor Application Insights.

Now, let's explore how you can use tracing for your application to begin your observability journey.

Gain deeper insight into your GenAI application's processes with tracing

Tracing enables comprehensive monitoring and deeper analysis of your GenAI application's execution. This functionality allows you to trace the process from input to output, review intermediate results, and measure execution times. Additionally, detailed logs for each function call in your workflow are accessible. You can inspect the parameters, metrics, and outputs of each AI model used, which facilitates debugging and optimization of your application while providing deeper insights into the functioning and outputs of the AI models. The Azure AI Foundry SDK supports tracing to various endpoints, including local viewers, Azure AI Foundry, and Azure Monitor Application Insights. Learn more about new tracing capabilities in Azure AI Foundry.
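As a rough illustration of what tracing to Application Insights can look like from Python, the sketch below wires up the azure-monitor-opentelemetry exporter and wraps an application step in an OpenTelemetry span. The connection string, span and attribute names, and the chat_with_model helper are placeholders rather than Foundry SDK APIs; the SDK provides its own instrumentation hooks, so consult the documentation for the exact setup.

```python
# Sketch: exporting OpenTelemetry traces to Azure Monitor Application Insights.
# The connection string and chat_with_model() helper are placeholders.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Route all OpenTelemetry telemetry emitted by this process to Application Insights.
configure_azure_monitor(
    connection_string="<your-application-insights-connection-string>",
)

tracer = trace.get_tracer(__name__)

def chat_with_model(question: str) -> str:
    # Placeholder for your actual model or agent call.
    return f"echo: {question}"

def answer_question(question: str) -> str:
    # Each span records timing plus any attributes you attach,
    # so intermediate steps show up in the trace view.
    with tracer.start_as_current_span("answer_question") as span:
        span.set_attribute("app.user_question", question)
        response = chat_with_model(question)
        span.set_attribute("app.response_length", len(response))
        return response

print(answer_question("What tents do you recommend for winter camping?"))
```

Once spans like this land in Application Insights, the same trace data can feed the online evaluations described in the next section.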
Continuously measure the quality and safety of generated outputs with online evaluation

With online evaluation, now available in public preview, you can continuously evaluate your collected trace data for troubleshooting, monitoring, and debugging purposes. Online evaluation with Azure AI Foundry offers the following capabilities:
• Integration between Azure AI services and Azure Monitor Application Insights.
• Monitoring of any deployed application, agnostic of deployment method or orchestration framework.
• Support for trace data logged via the Azure AI Foundry SDK or a logging API of your choice.
• Support for built-in and custom evaluation metrics via the Azure AI Foundry SDK.
• Use during all stages of the GenAIOps lifecycle to monitor your application.

To get started with online evaluation, please review the documentation and code samples.

Monitor your app in production with Azure AI Foundry and Azure Monitor

Azure Monitor Application Insights excels at application performance monitoring (APM) for live web applications, providing many experiences to help enhance the performance, reliability, and quality of your applications. Once you've started collecting data for your GenAI application, you can access an out-of-the-box dashboard view to help you get started with monitoring key metrics for your application directly from your Azure AI project. Insights are surfaced via an Azure Monitor workbook that is linked to your Azure AI project, helping you quickly observe trends for key metrics such as token consumption, user feedback, and evaluations. You can customize this workbook and add tiles for additional metrics or insights based on your business needs. You can also share it with your team so they can get the latest insights as well.

Build enterprise-ready GenAI apps with Azure AI Foundry

Ready to learn more? Here are other exciting announcements from Microsoft Ignite to support your GenAIOps workflows:
• New tracing and debugging capabilities to drive continuous improvement
• New ways to evaluate models and applications in pre-production
• New ways to document and share evaluation results with business stakeholders

Whether you're joining in person or online, we can't wait to see you at Microsoft Ignite 2024. We'll share the latest from Azure AI and go deeper into best practices for GenAIOps with these breakout sessions:
• Multi-agentic GenAIOps from prototype to production with dev tools
• Trustworthy AI: Advanced risk evaluation and mitigation
• Azure AI and the dev toolchain you need to infuse AI in all your apps