The Future Of AI: Deconstructing Contoso Chat - Learning GenAIOps in practice

nitya
Microsoft
Feb 04, 2025

The Future of AI blog series is an evolving collection of posts from the AI Futures team in collaboration with subject matter experts across Microsoft. In this series, we explore tools and technologies that will drive the next generation of AI. Explore more at: https://aka.ms/the-future-of-ai

GenAIOps: From Principles to Practice 

Earlier in the Future of AI series, we explored the paradigm shift to generative AI operations (GenAIOps) that will drive the next generation of innovative AI solutions. That article stressed the value of platforms like Azure AI Foundry, which support a seamless end-to-end developer experience tailored to every stage of the generative AI application lifecycle, as shown.

But how do we go from seeing this big picture to gaining an applied understanding of how it works for our own application scenarios and requirements? The answer lies in open-source AI templates: functional samples for popular use cases that can be deployed with a single command. We can deconstruct a sample to see how its components are connected, then reconstruct it to get a customizable sandbox for our own scenarios.

In this blog post, we kickstart a multi-part journey into deconstructing Contoso Chat - a custom retail copilot sample that uses a retrieval augmented generation (RAG) pattern to ground AI responses in your data. The sample is a signature template for Azure AI Foundry and the basis for an advanced-level workshop on the Microsoft AI Tour.

Introducing Contoso Chat 

Contoso Chat is a retail copilot designed for use with Contoso Outdoors, a fictitious enterprise retail website that sells hiking and camping gear to adventure-seekers. With this integration, customers on the retailer's website can use natural language to ask questions about its products, as shown below.

 
The custom copilot returns responses that are grounded in the retailer’s own product and customer data and that meet desired quality and safety criteria for responsible AI use. Referencing the GenAIOps figure, we can see this involves model customization using a Retrieval Augmented Generation (RAG) design pattern, which we implement in Contoso Chat using the application architecture below.

The custom copilot is hosted in Azure Container Apps (ACA), exposing an API endpoint that can be accessed from authenticated third-party applications. Incoming requests are forwarded to the chat application, which orchestrates interactions with the Azure OpenAI Service (for chat and embedding models), Azure AI Search (for product retrieval), and Azure Cosmos DB (for customer data) to generate the final response returned to the user.
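The orchestration described above can be sketched in a few lines of Python. This is an illustrative sketch only: it uses in-memory stand-ins for Azure AI Search, Azure Cosmos DB, and the Azure OpenAI embedding and chat models, and every function, product, and customer name here is hypothetical rather than taken from the Contoso Chat codebase.

```python
from math import sqrt

# Hypothetical stand-ins for Azure AI Search (product index) and
# Azure Cosmos DB (customer data). A real deployment calls those services.
PRODUCT_INDEX = [
    {"name": "TrailMaster X4 Tent", "embedding": [0.9, 0.1], "text": "4-person hiking tent"},
    {"name": "Summit Pro Stove",    "embedding": [0.1, 0.9], "text": "compact camping stove"},
]
CUSTOMERS = {"c-100": {"name": "Ada", "orders": ["TrailMaster X4 Tent"]}}

def embed(query: str) -> list:
    # Stand-in for the embedding model; a real app calls Azure OpenAI embeddings.
    return [1.0, 0.0] if "tent" in query.lower() else [0.0, 1.0]

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query: str, k: int = 1):
    # Stand-in for vector retrieval over the product index in Azure AI Search.
    qv = embed(query)
    ranked = sorted(PRODUCT_INDEX, key=lambda p: cosine(qv, p["embedding"]), reverse=True)
    return ranked[:k]

def chat(customer_id: str, question: str) -> str:
    # Orchestration: gather grounding data, then assemble the model prompt.
    products = retrieve(question)
    customer = CUSTOMERS[customer_id]
    context = "; ".join(f"{p['name']}: {p['text']}" for p in products)
    prompt = (f"Customer {customer['name']} (past orders: {customer['orders']}). "
              f"Grounding data: {context}. Question: {question}")
    # A real app sends `prompt` to the chat model; here we return it directly.
    return prompt

print(chat("c-100", "Do you have a tent for four people?"))
```

The essential RAG move is visible in `chat`: retrieval results and customer data are injected into the prompt so the model answers from the retailer's data rather than from its training data alone.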

Deconstructing Contoso Chat 

So how do we go about deconstructing this application from the given sample? The answer lies in Azure AI app templates: pre-built, task-specific solutions with open-source codebases (like Contoso Chat) that you can deploy to Azure with a single command, giving you a functioning application as a sandbox for further exploration. You can then explore and customize the codebase to understand the tools and workflows involved, and redeploy your changes with the same command for rapid iteration.

Let's take a look at what this involves in practice. The figure shows the end-to-end developer workflow involved in building the Contoso Chat solution in a series of stages. 

  • Provision – Azure resources and the application with the Azure Developer CLI.
  • Setup – get a prebuilt development environment with GitHub Codespaces.
  • Ideate – get a “playground” in your IDE with Prompty and the Azure AI Foundry SDK.
  • Evaluate – use custom evaluators to assess the prototype for quality & safety.
  • Deploy – your application to Azure Container Apps for a hosted API endpoint.
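In the Ideate stage, Prompty assets pair the prompt template with its model configuration in a single file that you can run directly in your IDE. The following is a minimal sketch of what such an asset can look like; the deployment name, description, and prompt text are illustrative placeholders, not the actual Contoso Chat asset.

```yaml
---
name: contoso_chat_sketch
description: Illustrative retail copilot prompt (hypothetical, for explanation only)
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-4o-mini   # hypothetical deployment name
inputs:
  question:
    type: string
---
system:
You are a helpful assistant for an outdoor-gear retailer.
Answer only from the provided product context.

user:
{{question}}
```

Because the template and configuration live together, the same asset can be iterated on in the IDE playground and later invoked from application code.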

Each stage highlights the key task accomplished along with recommended developer tools to streamline execution. Deconstructing the sample now involves three steps to get started:

  1. Setup Dev Environment: Fork the repository and launch GitHub Codespaces to get a prebuilt dev environment with all required tools and dependencies pre-installed.
  2. Provision Infrastructure: Authenticate with Azure from your dev environment, then use a single command (azd up) to provision the resources and deploy the application to Azure. 
  3. Iterate-Evaluate-(Re)Deploy: Use this "application sandbox" to understand the tools and process for ideation and evaluation, then customize it with your data and requirements to get an applied understanding of the workflow in action.
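In practice, steps 2 and 3 reduce to a couple of commands in the Codespaces terminal. These are the standard Azure Developer CLI commands referenced above; the exact interactive prompts (subscription, region, environment name) vary by account.

```shell
# Authenticate your dev environment with Azure
azd auth login

# Provision all Azure resources and deploy the application in one step
azd up

# After customizing the codebase, re-run the same command to redeploy
azd up
```

Tearing the sandbox down when you are done (`azd down`) removes the provisioned resources so you are not billed for idle infrastructure.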

What's Next

Look for future posts in this series where we will dive deeper into each stage and put the spotlight on the developer tools and Azure AI Foundry features that streamline execution there. By the end of the series, you should have an actionable understanding of GenAIOps application lifecycles, and a live sandbox you can experiment with to build your RAG-based generative AI applications code-first on Azure AI Foundry. Bookmark this page and revisit it later to view links to the next posts in the series.

Related Resources

Want to get a head-start on exploration? Here are four resources that can help!

  1. Contoso Chat repository - Browse the README for a self-guided quickstart.
  2. Microsoft AI Tour repository - Register for instructor-led workshops at tour stops.
  3. AI app templates gallery - Discover other AI solution templates to deconstruct. 
  4. Azure AI Foundry - Discover models and services tailored to your use case! 

