
AI - AI Platform Blog

AI reports: Improve AI governance and GenAIOps with consistent documentation

AlexSutton
Microsoft
Nov 19, 2024

AI governance is a critical capability as organizations scale GenAI adoption against a backdrop of increasing regulation. To manage and govern AI risks effectively, organizations must enable consistent information-sharing between the teams developing and using AI and the governance, risk, and compliance (GRC) roles responsible for ensuring AI deployments are trustworthy, safe, and compliant.

AI reports are designed to help organizations improve cross-functional observability, collaboration, and governance when developing, deploying, and operating generative AI applications and fine-tuned or custom models. These reports support AI governance best practices by helping developers document the purpose of their AI model or application, its features, potential risks or harms, and applied mitigations, so that cross-functional teams can track and assess production-readiness throughout the AI development lifecycle and then monitor it in production. Starting in December, AI reports will be available in private preview in a US and EU Azure region for Azure AI Foundry customers.

To request access to the private preview of AI reports, please complete the Interest Form.

Furthermore, we are excited to announce new collaborations with Credo AI and Saidot to support customers’ end-to-end AI governance. By integrating the best of Azure AI with innovative and industry-leading AI governance solutions, we hope to provide our customers with choice and help empower greater cross-functional collaboration to align AI solutions with their own principles and regulatory requirements. 

Building on learnings at Microsoft

Microsoft’s approach to governing generative AI applications builds on our Responsible AI Standard and the National Institute of Standards and Technology’s AI Risk Management Framework. This approach requires teams to map, measure, and manage risks for generative AI applications throughout the development cycle. A core asset of the first—and iterative—map phase is the Responsible AI Impact Assessment. These assessments help identify potential risks and their associated harms, as well as mitigations to address them. As development of an AI system progresses, additional iterations can help development teams document their progress in risk mitigation and allow experts to review the evaluations and mitigations and make further recommendations or requirements before products are launched. Post-deployment, these assessments become a source of truth for ongoing governance and audits, and help guide how to monitor the application in production.

You can learn more about Microsoft’s approach to AI governance in our Responsible AI Transparency Report and find a Responsible AI Impact Assessment Guide and example template on our website.

How AI reports support AI impact assessments and GenAIOps

AI reports can help organizations govern their GenAI models and applications by making it easier for developers to provide the information needed for cross-functional teams to assess production-readiness throughout the GenAIOps lifecycle.

Developers will be able to assemble key project details, such as the intended business use case, potential risks and harms, model card, model endpoint configuration, content safety filter settings, and evaluation results into a unified AI report from within their development environment. Teams can then publish these reports to a central dashboard in the Azure AI Foundry portal, where business leaders can track, review, update, and assess reports from across their organization. Users can also export AI reports in PDF and industry-standard SPDX 3.0 AI BOM formats, for integration into existing GRC workflows. These reports can then be used by the development team, their business leaders, and AI, data, and other risk professionals to determine if an AI model or application is fit for purpose and ready for production as part of their AI impact assessment processes.
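Because exported reports are machine-readable, downstream GRC tooling can check them for completeness before they enter a review workflow. The sketch below is a hypothetical illustration of such a check: the field names are assumptions made for this example, not the actual schema of the PDF or SPDX 3.0 AI BOM exports.

```python
import json

# Hypothetical required fields for an exported AI report. These names are
# illustrative assumptions, not the schema of Azure AI Foundry's export.
REQUIRED_FIELDS = {
    "intended_use",        # the intended business use case
    "known_risks",         # potential risks and harms
    "mitigations",         # applied mitigations
    "evaluation_results",  # quality and safety evaluation scores
}

def missing_report_fields(report: dict) -> set:
    """Return the required fields absent from an exported AI report."""
    return REQUIRED_FIELDS - report.keys()

# Example export (abbreviated): evaluation results are not yet attached.
report = json.loads("""{
    "intended_use": "Customer-support summarization",
    "known_risks": ["hallucination"],
    "mitigations": ["content safety filters"]
}""")
print(sorted(missing_report_fields(report)))  # → ['evaluation_results']
```

A check like this lets a GRC pipeline reject incomplete reports automatically instead of relying on a manual reviewer to spot the gap.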

Being versioned assets, AI reports can also help organizations build a consistent bridge across experimentation, evaluation, and GenAIOps by documenting what metrics were evaluated, what will be monitored in production, and the thresholds that will be used to flag an issue for incident response. For even greater control, organizations can choose to implement a release gate or policy as part of their GenAIOps that validates whether an AI report has been reviewed and approved for production.
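A release gate of the kind described above can be as simple as a check in the deployment pipeline. The sketch below is a minimal, hypothetical illustration; the `AIReport` shape and its status values are assumptions for this example, not an Azure AI Foundry API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIReport:
    """Hypothetical versioned AI report asset (illustrative fields only)."""
    version: str
    status: str                        # "draft", "in_review", or "approved"
    approved_by: Optional[str] = None  # reviewer who signed off

def release_gate(report: AIReport) -> bool:
    """Block deployment unless the report is reviewed and approved."""
    return report.status == "approved" and report.approved_by is not None

# An approved report passes the gate; one still in review does not.
print(release_gate(AIReport("1.2", "approved", approved_by="grc-team")))  # → True
print(release_gate(AIReport("1.3", "in_review")))                         # → False
```

Wiring such a check into CI/CD makes the governance policy enforceable rather than advisory: a deployment simply cannot proceed without an approved report version.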

Key benefits of these capabilities include:

  • Observability: Provide cross-functional teams with a shared view of AI models and applications in development, in review, and in production, including how these projects perform in key quality and safety evaluations.
  • Collaboration: Enable structured information-sharing between GRC, development, and operations teams using a consistent, extensible AI report template, accelerating feedback loops and minimizing non-coding time for developers.
  • Governance: Facilitate responsible AI development across the GenAIOps lifecycle, reinforcing consistent standards, practices, and accountability as projects evolve or expand over time.

[Image: Preview of the AI reports dashboard in Azure AI Foundry]

Build production-ready GenAI apps with Azure AI Foundry

If you are interested in testing AI reports and providing feedback to the product team, please request access to the private preview by completing the Interest Form.

Want to learn more about building trustworthy GenAI applications with Azure AI? Here is more guidance, along with announcements from Microsoft Ignite, to support your GenAIOps and governance workflows:

Whether you’re joining in person or online, we can’t wait to see you at Microsoft Ignite 2024. We’ll share the latest from Azure AI and go deeper into capabilities that support trustworthy AI with these sessions:

We’ll also be available for questions in the Connection Hub on Level 3, where you can find “ask the expert” stations for Azure AI and Trustworthy AI.

Updated Nov 18, 2024
Version 1.0