# Create your own QA RAG Chatbot with LangChain.js + Azure OpenAI Service
*Demo: Mpesa for Business Setup QA RAG Application*

In this tutorial we are going to build a question-answering RAG chat web app. We use Node.js, HTML, CSS, and JavaScript, and we incorporate LangChain.js + Azure OpenAI + a MongoDB vector store (MongoDB Search Index).

Note: Documents and illustrations shared here are for demo purposes only, and Microsoft and its products are not affiliated with Mpesa. The content demonstrated here should be used for educational purposes only. Additionally, all views shared here are solely mine.

## What you will need

- An active Azure subscription; get Azure for Students for free, or get started with Azure for 12 months free.
- VS Code
- Basic knowledge of JavaScript (not a must)
- Access to Azure OpenAI; click here if you don't have access.
- A MongoDB account (you can also use the Azure Cosmos DB vector store)

## Setting Up the Project

To build this project, fork this repository and clone it. GitHub repository link: https://github.com/tiprock-network/azure-qa-rag-mpesa . Follow the steps highlighted in the README.md to set up the project under *Setting Up the Node.js Application*.

## Create the Resources that you Need

You will need Azure CLI or Azure Developer CLI installed on your computer. Follow the steps indicated in the README.md to create the Azure resources under *Azure Resources Set Up with Azure CLI*.

You might want to log in to Azure CLI with a device code instead. In place of a plain `az login`, you can run:

```
az login --use-device-code
```

Or, if you prefer Azure Developer CLI, execute this command instead:

```
azd auth login --use-device-code
```

Remember to update the `.env` file with the values you used to name the Azure OpenAI instance and the Azure models, as well as the API keys you obtained while creating your resources.

## Setting Up MongoDB

After accessing your MongoDB account, get the URI link to your database and add it to the `.env` file, along with your database name and the vector store collection name you specified while creating your indexes for a vector search.

## Running the Project

To run this Node.js project, start it with the following command:

```
npm run dev
```

## The Vector Store

The vector store used in this project is MongoDB, which holds the word embeddings. From the embeddings model instance we created on Azure AI Foundry, we are able to create embeddings that can be stored in a vector store. The following code shows our embeddings model instance:

```javascript
// Create a new embedding model instance
const azOpenEmbedding = new AzureOpenAIEmbeddings({
    azureADTokenProvider,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiEmbeddingsDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_EMBEDDING_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
    azureOpenAIBasePath: "https://eastus2.api.cognitive.microsoft.com/openai/deployments"
});
```

The code in `uploadDoc.js` offers a simple way to create embeddings and store them in MongoDB. In this approach, the text from the documents is loaded using the `PDFLoader` from the LangChain community package.
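The repo's `returnSplittedContent` helper performs this loading-and-splitting step. As a rough sketch of what such a helper can look like (the file path and chunk sizes are illustrative assumptions, not the repo's actual values, and depending on your LangChain.js version the splitter may live in `langchain/text_splitter` instead):

```javascript
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Load a PDF and split it into overlapping chunks ready for embedding.
const returnSplittedContent = async () => {
    const loader = new PDFLoader("./docs/mpesa_for_business_setup.pdf"); // assumed path
    const rawDocs = await loader.load();

    const splitter = new RecursiveCharacterTextSplitter({
        chunkSize: 1000,   // characters per chunk (assumed value)
        chunkOverlap: 200, // overlap preserves context across chunk boundaries
    });
    return splitter.splitDocuments(rawDocs);
};
```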
The following code demonstrates how the embeddings are stored in the vector store:

```javascript
// Call the function and handle the result with await
const storeToCosmosVectorStore = async () => {
    try {
        const documents = await returnSplittedContent()

        // Create store instance
        const store = await MongoDBAtlasVectorSearch.fromDocuments(
            documents,
            azOpenEmbedding,
            {
                collection: vectorCollection,
                indexName: "myrag_index",
                textKey: "text",
                embeddingKey: "embedding",
            }
        )

        if (!store) {
            console.log('Something wrong happened while creating store or getting store!')
            return false
        }

        console.log('Done creating/getting and uploading to store.')
        return true
    } catch (e) {
        console.log(`This error occurred: ${e}`)
        return false
    }
}
```

In this setup, question answering (QA) is achieved by integrating Azure OpenAI's GPT-4o with MongoDB Vector Search through LangChain.js. The system processes user queries via an LLM (Large Language Model), which retrieves relevant information from a vectorized database, ensuring contextual and accurate responses. Azure OpenAI embeddings convert text into dense vector representations, enabling semantic search within MongoDB. The LangChain `RunnableSequence` structures the retrieval and response generation workflow, while the `StringOutputParser` ensures proper text formatting. The most relevant code snippets are the `AzureChatOpenAI` instantiation, the MongoDB connection setup, and the API endpoint that handles QA queries using vector search and embeddings. The snippets below explain the major parts of the code.

## Azure AI Chat Completion Model

This is the model used in this implementation of RAG for chat completion. Below is a code snippet for it:

```javascript
const llm = new AzureChatOpenAI({
    azTokenProvider,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
})
```
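To see how the retrieval step plugs into such a chain, here is a minimal sketch of a retrieval-augmented `RunnableSequence` built from the `store` and `llm` instances above. The prompt wording, `k` value, and input shape are illustrative assumptions, not the repo's actual code:

```javascript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

// Turn the vector store into a retriever that returns the top 4 chunks.
const retriever = store.asRetriever({ k: 4 });

const qaPrompt = ChatPromptTemplate.fromMessages([
    ["system", "Answer the question using only the context below.\n\nContext:\n{context}"],
    ["human", "{question}"],
]);

const qaChain = RunnableSequence.from([
    {
        // Fetch relevant chunks and flatten them into one context string.
        context: async ({ question }) => {
            const docs = await retriever.invoke(question);
            return docs.map((d) => d.pageContent).join("\n\n");
        },
        question: ({ question }) => question,
    },
    qaPrompt,
    llm,
    new StringOutputParser(),
]);

// Usage:
// const answer = await qaChain.invoke({ question: "How do I set up Mpesa for Business?" });
```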
## Using a Runnable Sequence to Give out Chat Output

This shows how a runnable sequence can be used to produce a response in the output format determined by the output parser added to the chain:

```javascript
// Stream response (Readable comes from node:stream; outPutParser is the chain's output parser)
app.post(`${process.env.BASE_URL}/az-openai/runnable-sequence/stream/chat`, async (req, res) => {
    // Check for a human message
    const { chatMsg } = req.body
    if (!chatMsg) return res.status(201).json({
        message: 'Hey, you didn\'t send anything.'
    })

    // Put the code in an error handler
    try {
        // Create a prompt template
        const prompt = ChatPromptTemplate.fromMessages([
            ["system", `You are a French-to-English translator that detects if a message isn't in French. If it's not, you respond, "This is not French." Otherwise, you translate it to English.`],
            ["human", `${chatMsg}`]
        ])

        // Runnable chain
        const chain = RunnableSequence.from([prompt, llm, outPutParser])

        // Chain result
        let result_stream = await chain.stream({})

        // Set response headers
        res.setHeader('Content-Type', 'application/json')
        res.setHeader('Transfer-Encoding', 'chunked')

        // Create a readable stream
        const readable = Readable.from(result_stream)

        res.status(201).write(`{"message": "Successful translation.", "response": "`)

        readable.on('data', (chunk) => {
            // Convert chunk to string and write it
            res.write(`${chunk}`)
        })

        readable.on('end', () => {
            // Close the JSON response properly
            res.write('" }')
            res.end()
        })

        readable.on('error', (err) => {
            console.error("Stream error:", err)
            res.status(500).json({ message: "Translation failed.", error: err.message })
        })
    } catch (e) {
        // Deliver a 500 error response
        return res.status(500).json({
            message: 'Failed to send request.',
            error: e
        })
    }
})
```

To run the front end, go to your `BASE_URL` with the port given. This enables you to run the chatbot above and achieve similar results. The chatbot is basically HTML + CSS + JS, where JavaScript is mainly used with the Fetch API to get a response.
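The front-end code itself isn't shown in the post; a minimal sketch of how such a page could consume the chunked response with the Fetch API might look like this (the endpoint path matches the snippet above, while the element ID and rendering logic are illustrative assumptions):

```javascript
// Browser-side: send a message and render the streamed reply as it arrives.
async function sendChatMessage(chatMsg) {
    const res = await fetch('/az-openai/runnable-sequence/stream/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ chatMsg }),
    });

    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    const output = document.getElementById('chat-output'); // assumed element ID
    let text = '';

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        text += decoder.decode(value, { stream: true });
        output.textContent = text; // re-render as each chunk arrives
    }
}
```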
Thanks for reading. I hope you play around with the code and learn some new things.

## Additional Reads

- Introduction to LangChain.js
- Create an FAQ Bot on Azure
- Build a basic chat app in Python using Azure AI Foundry SDK

# Make your own private ChatGPT

## Introduction

Creating your own private ChatGPT allows you to leverage AI capabilities while ensuring data privacy and security. This guide walks you through building a secure, customized chatbot using tools like Azure OpenAI, Cosmos DB, and Azure App Service.

## Why Build a Private ChatGPT?

With the rise of AI-driven applications, organizations and individuals often face challenges related to data privacy, customization, and integration. Building a private ChatGPT addresses these concerns by:

- Maintaining data privacy: keep sensitive information within your infrastructure.
- Customizing responses: tailor the chatbot's behavior and language to suit your requirements.
- Ensuring security: leverage enterprise-grade security protocols.
- Avoiding data sharing: prevent your data from being used to train external models.

If organizations do not take these measures, their data may end up in future model training and sensitive information can leak to the public. For example, ChatGPT collects personal data, as described in its privacy policy.

## Prerequisites

Before you begin, ensure you have:

- Access to Azure OpenAI Service.
- A development environment set up with Python.
- Basic knowledge of FastAPI and MongoDB.
- An Azure account with the necessary permissions.

If you do not have an Azure subscription, try Azure for Students for free.

## Step 1: Set Up Azure OpenAI

1. Log in to the Azure Portal and create an Azure OpenAI resource.
2. Deploy a model, such as GPT-4o (multimodal), and note down the endpoint and API key. Note that there is also an option for keyless authentication.
3. Configure permissions to control access.

## Step 2: Use a ChatGPT-like App Sample

You can select any of the following repositories as a base template for your app; in this guide I will be using the third option, AOAIchat, which I developed:

- GitHub - mckaywrigley/chatbot-ui: AI chat for any model.
- Azure-Samples/azure-search-openai-demo: A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
- sourabhkv/AOAIchat: Azure OpenAI chat

The architecture diagram represents a typical flow for a private ChatGPT application with the following components:

- App UX (user interface): the front-end application (mobile, web, or desktop) where users interact with the chatbot. It sends the user's input (prompt) and displays the AI's responses.
- App Service: acts as the backend application, handling user requests and coordinating with other services. It receives user inputs and prepares them for processing by the Azure OpenAI service, streams AI responses back to the App UX, and reads from and writes to Cosmos DB to manage chat history.
- Azure OpenAI Service: the core AI service, which processes the user input and generates responses using models like GPT-4o. The App Service sends the user input (along with context) to this service and receives the AI-generated responses.
- Cosmos DB: a NoSQL database used to store and manage chat history. It writes user messages and AI-generated responses for future reference or analysis, and reads chat history to provide context for AI responses, enabling more intelligent and contextual conversations.

Data flow:

1. User inputs are sent from the App UX to the App Service.
2. The App Service forwards the input (with additional context, if needed) to Azure OpenAI.
3. Azure OpenAI generates a response, which is streamed back to the App UX via the App Service.
4. The App Service writes user inputs and AI responses to Cosmos DB for persistence.
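The exact chat-history schema lives in the AOAIchat repository; purely as a hypothetical illustration of the documents written in step 4 above, one stored exchange might look like this (every field name here is an assumption, not the app's real schema):

```javascript
// Hypothetical shape of one chat-history document in Cosmos DB (Mongo API).
// All field names are illustrative assumptions, not AOAIchat's actual schema.
const chatHistoryDoc = {
    sessionId: "3f2a9c1e",          // groups messages into one conversation
    userId: "user@contoso.com",     // who sent the prompt
    role: "assistant",              // "user" for prompts, "assistant" for replies
    content: "Here is a summary of your document...",
    model: "gpt-4o-mini",
    timestamp: new Date().toISOString(),
};
```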
This architecture ensures scalability, secure data handling, and the ability to provide contextual responses by integrating database and AI services.

### What can you do with my template?

AOAIchat supports personal and enterprise chat, enabled by RAG. People can enable RAG mode if they want to search within their own database; otherwise it behaves like a normal ChatGPT. It supports multimodality (image and text input), depending on the model deployed in Azure AI Foundry.

## Step 3: Deploy to Azure

1. Deploy a Cosmos DB account in the nearest region.
2. Deploy an Azure OpenAI model (gpt-4o or gpt-4o-mini recommended).
3. Deploy an Azure App Service; try using a container. I recommend the B1 plan in your nearest region. Select the Docker registry image `sourabhkv/aoaichatdb:0.1` with the startup command `uvicorn app:app --host 0.0.0.0 --port 80`.
4. After the App Service starts, set all environment variables.

The application requires the following environment variables for proper configuration:

| Environment Variable | Description |
|---|---|
| AZURE_OPENAI_ENDPOINT | The endpoint for the Azure OpenAI API. |
| AZURE_OPENAI_API_KEY | API key for accessing Azure OpenAI. |
| DEPLOYMENT_NAME | Azure OpenAI deployment name. |
| API_VERSION | API version for Azure OpenAI. |
| MAX_TOKENS | Maximum tokens for API responses. |
| MONGO_DETAILS | MongoDB connection string. |

```
AZURE_OPENAI_ENDPOINT=<your_azure_openai_endpoint>
AZURE_OPENAI_API_KEY=<your_azure_openai_api_key>
DEPLOYMENT_NAME=<your_deployment_name>
API_VERSION=<your_api_version>
MAX_TOKENS=<max_tokens>
MONGO_DETAILS=<your_mongo_connection_string>
```

Optional: implement authentication to secure access. Within the App Service, select Authentication and choose an identity provider. I went with Entra-based authentication with a single tenant; there are also multi-tenant and personal-account options. Restart the App Service, and within two minutes your private ChatGPT is ready.

## Pricing

Pricing depends on the plan and region in which you deploy resources; check the Azure pricing calculator for an estimate. My estimate, with all resources deployed in Sweden Central:

- Cosmos DB: Cosmos DB for MongoDB (RU), serverless configuration with a single write master, 2 GB transactional storage, 2 backup plans (free) ~ $0.75
- Azure OpenAI Service: S0 plan, gpt-4o-mini global deployment, 20,000 input tokens, 10,000 output tokens ~ $9.00
- App Service plan: Linux, tier B1, instance count 1 ~ $13.14

Total monthly cost: ~$22.89. Prices may vary in the future; this is the configuration I entered in the Azure calculator.

## Governance

Azure OpenAI provides content filters to block input that violates responsible AI practices. Categories include:

- Hate and fairness
- Sexual
- Violence
- Self-harm
- User prompt attacks (direct and indirect)

The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Azure OpenAI Service applies default safety settings to all models, set to medium; content filters can be adjusted to different levels depending on the use case. The app supports RAG, and I have provided a detailed solution for it in my GitHub repository.

## Practical Implementation

GE Aerospace, in partnership with Microsoft and Accenture, has launched a company-wide generative AI platform, leveraging Microsoft Azure and Azure OpenAI Service. This solution aims to transform asset tracking and compliance in aviation, enabling quick access to maintenance records and reducing manual processing time from days to minutes.
It supports informed decision-making by providing insights into aircraft leasing, compliance gaps, and asset health. For enterprises implementing private ChatGPT solutions, this illustrates the potential of generative AI for streamlining document-intensive processes while ensuring data security and compliance through cloud-based infrastructure like Azure.

- GE Aerospace Launches Company-wide Generative AI Platform for Employees | GE Aerospace News
- Build your own private ChatGPT style app with enterprise-ready architecture - By Microsoft Mechanics

## How to Make a Private ChatGPT for Free

It can be free if the whole setup runs locally on your own hardware:

- Cosmos DB <-> MongoDB
- Azure OpenAI <-> Ollama / LM Studio (refer to this)
- App Service <-> your local machine

NOTE: I have used gpt-4o and gpt-4o-mini; these values are hardcoded in the web page, so if you are using other models you may have to change them in index.html.

Alternatively, register for GitHub Models to access the API for free. Note that GitHub Models have rate limits that differ by model.

## Useful Links

- sourabhkv/AOAIchat: Azure OpenAI chat
- What is RAG?
- Get started with Azure OpenAI API
- Chat with Azure OpenAI models using your own data

# Tiny But Mighty: Unleashing the Power of Small Language Models 🚀
While Large Language Models (LLMs) like GPT-4 dominate headlines with their extensive capabilities, they often come at the cost of high computational requirements and complexity. For developers and organizations looking to implement AI solutions on edge devices or with limited resources, Small Language Models (SLMs) are emerging as a practical alternative.

SLMs are not just "smaller" versions of their larger counterparts; they are designed to be faster, more efficient, and adaptable for specific tasks. With fewer parameters and lower computational needs, SLMs open the door to deploying AI on mobile devices, IoT systems, and edge environments without compromising performance.

## What You Stand to Learn 🧠

- Introduction to Microsoft's AI ecosystem: discover Microsoft's end-to-end AI development tools, from Azure AI Services to ONNX Runtime, enabling efficient and secure deployment of AI models across cloud and edge environments.
- The advantages of SLMs over LLMs: SLMs are game-changers for edge AI applications, providing faster training and inference times, reduced energy costs, and scalability across diverse devices.
- Hands-on with Phi-3 and ONNX Runtime: experience live demonstrations of SLMs in action with tools like Phi-3 and ONNX Runtime, showcasing how to fine-tune and deploy models on mobile devices, IoT, and hybrid cloud environments.
- Responsible AI practices: understand how to safeguard your AI applications with Microsoft's Responsible AI toolkit, ensuring ethical and trustworthy deployments.

## Watch the Full Session 👨‍💻

📅 Date: December 12, 2024
⏰ Time: 4 PM GMT | 5 PM CEST | 8 AM PT | 11 AM ET | 7 PM EAT

A session packed with live demos, practical examples, and Q&A opportunities. Register NOW | Events | Microsoft Reactor

## Agenda 🔍

- Introduction (5 min): a brief overview of the session and its focus on SLMs and LLMs.
- Microsoft AI tooling (5 min): explore the latest tools like Azure AI Services, Azure Machine Learning, and Responsible AI tooling.
- How to choose the right model (10 min): key considerations such as performance, customizability, and ethical implications.
- Comparing SLMs vs LLMs (10 min): the strengths, weaknesses, and best use cases for both Small and Large Language Models.
- Deploying models at the edge (10 min): insights into optimizing AI for mobile, IoT, and edge devices.
- Q&A: addressing participant questions about AI development and deployment.

# Building Intelligent Applications with Local RAG in .NET and Phi-3: A Hands-On Guide
Let's learn how to do Retrieval-Augmented Generation (RAG) using local resources in .NET! In this post, we show you how to combine the Phi-3 language model, local embeddings, and Semantic Kernel to create a RAG scenario.

# Optimizing Retrieval for RAG Apps: Vector Search and Hybrid Techniques
In this blog we dive into optimizing the search strategy with hybrid search techniques. Common practices for implementing the retrieval step in retrieval-augmented generation (RAG) applications are listed below; a small sketch of how hybrid fusion works follows the list.

- Keyword search
- Vector search
- Hybrid search (keyword + vector)
- Hybrid search + semantic ranker
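Hybrid search typically merges the keyword and vector result lists with Reciprocal Rank Fusion (RRF), which is also the fusion method Azure AI Search uses for hybrid queries. As a minimal, illustrative sketch of the idea (not any library's actual implementation; the document IDs and `k = 60` constant are just conventional examples):

```javascript
// Reciprocal Rank Fusion: merge several ranked lists of document IDs.
// Each appearance contributes 1 / (k + rank + 1), so documents ranked
// highly by either retriever float to the top of the fused list.
function reciprocalRankFusion(rankings, k = 60) {
    const scores = new Map();
    for (const ranking of rankings) {
        ranking.forEach((docId, rank) => {
            scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank + 1));
        });
    }
    return [...scores.entries()]
        .sort((a, b) => b[1] - a[1])
        .map(([docId]) => docId);
}

// Usage: fuse a keyword (BM25) ranking with a vector-similarity ranking.
const fused = reciprocalRankFusion([
    ["doc3", "doc1", "doc7"], // keyword results, best first
    ["doc1", "doc9", "doc3"], // vector results, best first
]);
// fused begins with "doc1" and "doc3", the documents both lists agree on.
```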
# Why Should Businesses Adopt RAG and Migrate from LLMs?

In this blog we discuss the importance of migrating your product or startup project from plain LLMs to RAG. Adopting RAG empowers businesses to leverage external knowledge, enhance accuracy, and create more robust AI applications. It's a strategic move toward building intelligent systems that bridge the gap between generative capabilities and authoritative information. The topics covered are:

- A brief history of AI
- What Large Language Models (LLMs) are
- Limitations of LLMs
- How we can incorporate domain knowledge
- What Retrieval-Augmented Generation (RAG) is
- What robust retrieval for RAG apps looks like

Once we are done with these concepts, I hope to convince you to adopt RAG in your project.