AI Platform Blog

New evaluation tools for multimodal apps, benchmarking, CI/CD integration and more

mesameki, Microsoft
Nov 19, 2024

Evaluations are an essential component of the GenAIOps lifecycle, helping organizations build confidence in GenAI models and applications throughout development and in production.

If not designed carefully, GenAI applications can produce outputs that have errors, lack grounding in verifiable data, or are simply irrelevant or incoherent, resulting in poor customer experiences and attrition. Even worse, an application’s outputs could perpetuate bias, promote misinformation, or expose organizations to malicious attacks. By conducting proactive risk evaluations throughout the GenAIOps lifecycle, organizations can better understand and mitigate risks to achieve more secure, safe, and trustworthy customer experiences.

Whether you’re evaluating and comparing models at the start of an AI project or running a final evaluation of your application to demonstrate production-readiness, every evaluation has these key components:

  • the evaluation target: the thing you’re trying to assess, whether a base model or an application in development or in production,
  • the evaluation data, composed of the inputs and generated outputs that form the basis of evaluation (a minimal example follows this list),
  • and evaluators, or metrics, that help measure and compare performance in a consistent, interpretable way.
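To make the second component concrete, here is a minimal sketch of assembling an evaluation dataset as JSON Lines. The field names query, context, response, and ground_truth mirror the column names commonly used by Azure AI Foundry’s built-in evaluators, but treat the exact schema as an assumption and map your own columns as needed.

```python
import json

# Hypothetical test set: each record pairs an input with the app's generated
# output, plus optional context and ground truth for grounding and accuracy metrics.
test_cases = [
    {
        "query": "What is included in my Northwind Health Plus plan?",
        "context": "Northwind Health Plus covers emergency services, mental health, and vision.",
        "response": "Northwind Health Plus covers emergency services, mental health, and vision.",
        "ground_truth": "Northwind Health Plus covers emergency, mental health, and vision services.",
    },
    {
        "query": "Does the plan cover dental?",
        "context": "Dental coverage is not included in Northwind Health Plus.",
        "response": "Yes, dental is fully covered.",  # deliberately ungrounded example
        "ground_truth": "No, dental coverage is not included.",
    },
]

# Most evaluation tooling (including the Azure AI evaluation SDK) accepts JSON Lines.
with open("evaluation_data.jsonl", "w", encoding="utf-8") as f:
    for case in test_cases:
        f.write(json.dumps(case) + "\n")
```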

Today, we’re excited to announce enhancements across these key components, making evaluations in Azure AI Foundry even more comprehensive and accessible for a broad set of generative AI use cases. Here’s a quick summary before we dive into details:

  • Simplify model selection with enhanced benchmarks and model evaluations
  • Drive robust, measurable insights with new and advanced evaluators
  • Operationalize evaluations as part of your GenAIOps
      • A new Python API allows developers to run built-in and custom text-based evaluations remotely in the cloud, streamlining the evaluation process at scale with the convenience of easy CI/CD integration.
      • GitHub Actions for GenAI evaluations let developers run automated evaluations of their models and applications directly in their CI/CD workflows, for faster experimentation and iteration within their coding environment.

In related news, continuous online evaluations of generated outputs are now available, allowing teams to monitor and improve AI applications in production. Additionally, as applications transition from development to production, developers will soon have the capability to document and share evaluation results along with other key information about their fine-tuned models or applications through AI reports.

With these expanded capabilities, cross-functional teams are empowered to iterate, launch, and govern their GenAI applications with greater observability and confidence.

New benchmarking experience in Azure AI Foundry

Picture this: You’re a developer exploring the Azure AI model catalog, trying to find the right fit for your use case. You use search filters, explore available models, and read the model cards to identify strong contenders, but you’re still not sure which model to choose. Why? Selecting the optimal model for an application isn't just about learning as much as you can about each individual model. Organizations need to understand and compare performance from multiple angles—accuracy, relevance, coherence, cost, and computational efficiency—to understand the trade-offs.

Now, an enhanced benchmarking experience enables developers to view comprehensive, detailed performance data for models in the Azure AI model catalog while also allowing for direct comparison across multiple models. This provides developers with a clearer picture of each model’s relative performance across critical performance metrics to identify models that meet business requirements.

Azure AI Foundry supports four categories of metrics to facilitate robust comparisons:

  1. Quality: Assess the accuracy, groundedness, coherence, and relevance of each model’s output.
  2. Cost: Assess estimated costs associated with deploying and running the models.
  3. Latency: Assess the response times for each model to understand speed and responsiveness.
  4. Throughput: Assess the number of tasks each model can process within a specific time frame, to gauge scalability and efficiency.

Learn more in our documentation.

Image: Explore model benchmarks within the Azure AI Foundry portal.
Image: Visually compare model performance across multiple metrics within the Azure AI Foundry portal.

Evaluate and compare models using your own data

Once you have compared various models using benchmarks on public data, you might still be wondering which model will perform best for your specific use case. At this point, it would be more helpful to compare each model using your own test dataset that reflects the inputs and outputs typical of your intended use case. We’re excited to provide developers with an easier way to do just that.

Now, developers can evaluate and compare both base models and fine-tuned models directly within the Azure AI Foundry portal, which also makes it simple to see the impact of your training data by comparing a fine-tuned model against its base. With this update, developers can assess models using their own test data and pre-built quality and safety evaluators, for easier side-by-side model comparisons and data-driven decisions when building GenAI applications.

Key components of this update, now available in public preview, include:

  • A new entry point in the Azure AI model catalog to guide users through model evaluation.
  • Expanded support for Azure OpenAI Service and Models as a Service (MaaS) models, so developers can evaluate these models and user-defined prompts directly within the Azure AI Foundry portal.
  • Simplified evaluation setup wizard, so both experienced GenAI developers and those new to GenAI can navigate and evaluate models with ease.
  • New tool for real-time test data generation, helping developers rapidly create sample data for evaluation purposes.
  • Enhanced evaluation results page to help developers visualize and quickly grasp the tradeoffs between various evaluation metrics.

Learn more in our documentation.

Image: Generate sample data or use your own data to evaluate a model within the Azure AI Foundry portal.
Image: Compare model evaluation results side-by-side within the Azure AI Foundry portal.

Evaluate for risk and safety in image and multimodal content 

Risk and safety evaluations for images and multimodal content are now available in public preview in Azure AI Foundry. These evaluations can help organizations assess the frequency and severity of harmful content in human and AI-generated outputs to prioritize relevant risk mitigations. For example, these evaluations can help assess content risks in cases where 1) text inputs yield image outputs, 2) a combination of image and text inputs produces text outputs, and 3) images containing text (like memes) generate text and/or image outputs.

Azure AI Foundry provides AI-assisted evaluators to streamline these evaluations at scale, where each evaluator functions like a grading assistant, using consistent and predefined grading instructions to assess large datasets of inputs and outputs across specific target metrics. Today, organizations can use these evaluations to assess generated outputs for hateful or unfair, violent, sexual, and self-harm-related content, as well as protected materials that may present infringement risks. These evaluators use a large multimodal language model hosted by Microsoft to not only grade the test datasets but also provide explanations for the evaluation results so they are interpretable and actionable. 
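As a rough illustration of how these AI-assisted evaluators are invoked, here is a minimal sketch using the azure-ai-evaluation Python package. The project values are placeholders, and the note on multimodal input reflects the preview documentation rather than a guaranteed API shape.

```python
# Minimal sketch: running an AI-assisted safety evaluator against a single
# query/response pair. Project values below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ViolenceEvaluator

azure_ai_project = {
    "subscription_id": "<subscription-id>",      # placeholder
    "resource_group_name": "<resource-group>",   # placeholder
    "project_name": "<azure-ai-project-name>",   # placeholder
}

violence_eval = ViolenceEvaluator(
    credential=DefaultAzureCredential(),
    azure_ai_project=azure_ai_project,
)

# Text check: the evaluator returns a severity label, a numeric score,
# and a natural-language explanation of its reasoning.
result = violence_eval(
    query="Describe the scene in this news photo.",
    response="The image shows a peaceful street demonstration.",
)
print(result)

# Per the preview documentation, image and multimodal content is evaluated by
# passing a conversation payload whose messages can include image URLs; treat
# that input shape as an assumption and check the docs for the exact schema.
```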

Making evaluations actionable is essential. Evaluation insights can help organizations compare base models and fine-tuned models to see which models are a better fit for their application. Or, they can help inform proactive steps to mitigate risk, such as activating image and multimodal content filters in Azure AI Content Safety to detect and block harmful content in real-time. After making changes, users can re-run an evaluation and compare the new scores to their baseline results side-by-side to understand the impact of their work and demonstrate production readiness for stakeholders.

Learn more in our documentation.

Evaluate GenAI models and applications for quality

We’re excited to announce the general availability of quality evaluators for GenAI in Azure AI Foundry, accessible through the code-first Azure AI Foundry SDK experience and no-code Azure AI Foundry portal. These evaluators provide a scalable way to assess models and applications against key performance and quality metrics. This update also includes improvements to pre-existing AI-assisted metrics as well as explanations for evaluation results to help ensure they are interpretable and actionable. Generally available evaluators include:

AI-assisted evaluators (these require an Azure OpenAI deployment to assist the evaluation), which are commonly used for retrieval augmented generation (RAG) and business and creative writing scenarios:
•    Groundedness
•    Retrieval
•    Relevance
•    Coherence
•    Fluency
•    Similarity

Natural Language Processing (NLP) evaluators, which measure the accuracy, precision, and recall of generated text against ground-truth data (a minimal usage sketch follows this list):
•    F1 score
•    ROUGE score
•    BLEU score
•    GLEU score
•    METEOR score
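
Here is a minimal sketch of a local quality evaluation run with the azure-ai-evaluation package, combining one AI-assisted evaluator and one NLP evaluator. The endpoint, key, deployment name, and data file are placeholders, and the JSONL columns are assumed to match the evaluators’ default input names.

```python
# Minimal sketch of a local quality evaluation with the azure-ai-evaluation package,
# combining an AI-assisted evaluator (needs an Azure OpenAI deployment as the grader)
# and an NLP evaluator (no model required). Values below are placeholders.
from azure.ai.evaluation import evaluate, GroundednessEvaluator, F1ScoreEvaluator

model_config = {
    "azure_endpoint": "https://<your-aoai-resource>.openai.azure.com",  # placeholder
    "api_key": "<your-api-key>",                                        # placeholder
    "azure_deployment": "<your-gpt-deployment>",                        # placeholder
}

results = evaluate(
    data="evaluation_data.jsonl",  # JSONL rows with query/context/response/ground_truth
    evaluators={
        "groundedness": GroundednessEvaluator(model_config),  # AI-assisted
        "f1_score": F1ScoreEvaluator(),                       # NLP, no grader model needed
    },
    output_path="./evaluation_results.json",
)

print(results["metrics"])  # aggregate scores across all rows
```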

Learn more in our documentation.

Announcing a Python API for remote evaluation

Previously, developers could only run local evaluations on their own machines when using the Azure AI Foundry SDK. Now, we're providing developers with a new, simplified Python API to run remote evaluations in the cloud. This API supports both built-in and custom prompt-based evaluators, allowing for scalable evaluation runs, seamless integration into CI/CD pipelines, and a more streamlined evaluation workflow. Plus, remote evaluation means developers don’t need to manage their own infrastructure for orchestrating evaluations. Instead, they can offload the task to Azure.
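For illustration, here is a rough sketch of submitting a cloud evaluation run through the Azure AI Projects client. The class names (Evaluation, Dataset, EvaluatorConfiguration), the upload_file helper, and the use of an evaluator’s id property follow the preview documentation and should be treated as assumptions; the connection string and file path are placeholders.

```python
# Rough sketch of submitting a remote (cloud) evaluation run. The azure-ai-projects
# classes and evaluator ID lookup below follow the preview documentation and may
# change; treat them as assumptions rather than a stable API surface.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import Evaluation, Dataset, EvaluatorConfiguration
from azure.ai.evaluation import F1ScoreEvaluator

project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<region>.api.azureml.ms;<subscription-id>;<resource-group>;<project-name>",  # placeholder
)

# Upload the test dataset so the evaluation can run against it in the cloud.
data_id, _ = project_client.upload_file("./evaluation_data.jsonl")

evaluation = Evaluation(
    display_name="Remote evaluation",
    data=Dataset(id=data_id),
    evaluators={
        # Built-in evaluators are referenced by their registry IDs.
        "f1_score": EvaluatorConfiguration(id=F1ScoreEvaluator.id),
    },
)

# Submit the run; Azure orchestrates the evaluation, so no local infrastructure
# is needed. Progress and results can be tracked in the Azure AI Foundry portal.
evaluation_response = project_client.evaluations.create(evaluation=evaluation)
print(evaluation_response.id)
```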

Learn more in our documentation.

GitHub Actions for GenAI evaluations are now available

Given trade-offs between business impact, risk and cost, you need to be able to continuously evaluate your AI applications and run A/B experiments at scale. We are significantly simplifying this process with GitHub Actions that can be integrated seamlessly into existing CI/CD workflows in GitHub. With these actions, you can now run automated evaluations after each commit, using the Azure AI Foundry SDK to assess your applications for metrics such as groundedness, coherence, and fluency. First announced at GitHub Universe in October, these capabilities are now available in public preview.

GitHub Actions for online A/B experimentation are available to try in private preview. These enable developers to seamlessly and automatically run A/B experiments comparing different models, prompts, and/or general UX changes to an AI application after deploying to production as part of a CD workflow. Analysis via out-of-the-box model monitoring metrics and custom metrics is straightforward, with results posted back directly to GitHub. To participate in the private preview, please sign up here.

Build production-ready GenAI apps with Azure AI Foundry

Want to learn about more ways to build trustworthy AI applications? Here are other exciting announcements from Microsoft Ignite to support your GenAIOps and governance workflows:

Whether you’re joining in person or online, we can’t wait to see you at Microsoft Ignite 2024. We’ll share the latest from Azure AI and go deeper into best practices for evaluations and trustworthy AI in these sessions:

 

_________

Please note: This article was edited on Dec 30, 2024 to reflect the availability of risk and safety evaluations for images in public preview in Azure AI Foundry. This feature was previously announced as "coming soon" at Microsoft Ignite.

 
