# Create your own QA RAG Chatbot with LangChain.js + Azure OpenAI Service
**Demo: Mpesa for Business Setup QA RAG Application**

In this tutorial we are going to build a question-answering RAG chat web app. We use Node.js with an HTML, CSS, and JavaScript front end, and we incorporate LangChain.js + Azure OpenAI + a MongoDB vector store (MongoDB Search Index). Get a quick look below.

Note: Documents and illustrations shared here are for demo purposes only; Microsoft and its products are not affiliated with Mpesa. The content demonstrated here should be used for educational purposes only. Additionally, all views shared here are solely mine.

**What you will need:**

- An active Azure subscription (get Azure for Students for free, or get started with Azure free for 12 months)
- VS Code
- Basic knowledge of JavaScript (not a must)
- Access to Azure OpenAI (click here if you don't have access)
- A MongoDB account (you can also use the Azure Cosmos DB vector store)

**Setting Up the Project**

To build this project, fork this repository and clone it. GitHub repository link: https://github.com/tiprock-network/azure-qa-rag-mpesa. Follow the steps highlighted in the README.md to set up the project under "Setting Up the Node.js Application".

**Create the Resources that you Need**

You will need Azure CLI or Azure Developer CLI installed on your computer. Follow the steps indicated in the README.md to create the Azure resources under "Azure Resources Set Up with Azure CLI". If you want Azure CLI to log you in with a device code instead of the default browser flow, run:

```shell
az login --use-device-code
```

Or, if you prefer the Azure Developer CLI, execute this command instead:

```shell
azd auth login --use-device-code
```

Remember to update the .env file with the names you used for the Azure OpenAI instance and model deployments, as well as the API keys you obtained while creating your resources.
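The variable names below are the ones the project's code reads later in this post via `process.env.*`; every value shown is an illustrative placeholder, not a real resource name, version, or key, so substitute your own:

```env
# Azure OpenAI instance and deployments (all values are placeholders)
AZURE_OPENAI_API_INSTANCE_NAME=my-azure-openai-instance
AZURE_OPENAI_API_DEPLOYMENT_NAME=gpt-4o
AZURE_OPENAI_API_DEPLOYMENT_EMBEDDING_NAME=text-embedding-ada-002
AZURE_OPENAI_API_VERSION=2024-06-01
# Path prefix for the API routes exposed by the Node.js server (assumed value)
BASE_URL=/api/v1
```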
**Setting Up MongoDB**

After signing in to your MongoDB account, get the URI connection string to your database and add it to the .env file, along with the database name and the vector store collection name you specified while creating your index for vector search.

**Running the Project**

To run this Node.js project, start it with the following command:

```shell
npm run dev
```

**The Vector Store**

The vector store used in this project is MongoDB, where the word embeddings are stored. From the embeddings model instance we created in Azure AI Foundry, we are able to create embeddings that can be stored in a vector store. The following code shows our embeddings model instance:

```javascript
// create new embedding model instance
const azOpenEmbedding = new AzureOpenAIEmbeddings({
    azureADTokenProvider,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiEmbeddingsDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_EMBEDDING_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
    azureOpenAIBasePath: "https://eastus2.api.cognitive.microsoft.com/openai/deployments"
});
```

The code in uploadDoc.js offers a simple way to create the embeddings and store them in MongoDB. In this approach, the text is loaded from the documents using the PDFLoader from the LangChain community package and split into chunks. The following code demonstrates how the embeddings are stored in the vector store.
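The storage snippet relies on a `returnSplittedContent()` helper whose body the post doesn't show (it lives in uploadDoc.js in the repository). As a rough sketch only, with the file name, import paths, and chunk sizes being assumptions rather than the repository's actual values, such a helper might look like this:

```javascript
// Hypothetical sketch of the loading/splitting helper; the real
// implementation is in uploadDoc.js in the linked repository.
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const returnSplittedContent = async () => {
    // Load the source PDF (file path is an assumed placeholder)
    const loader = new PDFLoader("./docs/mpesa_for_business_setup.pdf");
    const docs = await loader.load();

    // Split into overlapping chunks so each embedding stays within
    // the embedding model's input limits (sizes are assumed values)
    const splitter = new RecursiveCharacterTextSplitter({
        chunkSize: 1000,
        chunkOverlap: 100,
    });
    return await splitter.splitDocuments(docs);
};
```

Each returned document chunk is what gets embedded and written to MongoDB in the snippet that follows.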
```javascript
// Call the function and handle the result with await
const storeToCosmosVectorStore = async () => {
    try {
        const documents = await returnSplittedContent()

        // create store instance
        const store = await MongoDBAtlasVectorSearch.fromDocuments(
            documents,
            azOpenEmbedding,
            {
                collection: vectorCollection,
                indexName: "myrag_index",
                textKey: "text",
                embeddingKey: "embedding",
            }
        )

        if (!store) {
            console.log('Something went wrong while creating or getting the store!')
            return false
        }

        console.log('Done creating/getting and uploading to store.')
        return true
    } catch (e) {
        console.log(`This error occurred: ${e}`)
        return false
    }
}
```

In this setup, Question Answering (QA) is achieved by integrating Azure OpenAI's GPT-4o with MongoDB Vector Search through LangChain.js. The system processes user queries via an LLM (Large Language Model) and retrieves relevant information from the vectorized database, ensuring contextual and accurate responses. Azure OpenAI embeddings convert text into dense vector representations, enabling semantic search within MongoDB. The LangChain RunnableSequence structures the retrieval and response generation workflow, while the StringOutputParser ensures proper text formatting. The most relevant pieces of code are the AzureChatOpenAI instantiation, the MongoDB connection setup, and the API endpoint that handles QA queries using vector search and embeddings. The snippets below explain the major parts of the code.

**Azure AI Chat Completion Model**

This is the model used in this implementation of RAG as the chat completion model. Below is a code snippet for it.
```javascript
const llm = new AzureChatOpenAI({
    azTokenProvider,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
})
```

**Using a Runnable Sequence to Give Out Chat Output**

This shows how a runnable sequence can be used to produce a response in the particular output format defined by the output parser added to the chain.

```javascript
// Stream response
app.post(`${process.env.BASE_URL}/az-openai/runnable-sequence/stream/chat`, async (req, res) => {
    // check for human message; reply 400 (a client error, not the
    // original 201) when the body is empty
    const { chatMsg } = req.body
    if (!chatMsg) return res.status(400).json({
        message: 'Hey, you didn\'t send anything.'
    })

    // put the code in an error handler
    try {
        // create a prompt template
        const prompt = ChatPromptTemplate.fromMessages([
            ["system", `You are a French-to-English translator that detects if a message isn't in French. If it's not, you respond, "This is not French." Otherwise, you translate it to English.`],
            ["human", `${chatMsg}`]
        ])

        // runnable chain
        const chain = RunnableSequence.from([prompt, llm, outPutParser])

        // chain result; the chain expects an input object, and since the
        // prompt has no template variables an empty object is passed
        let result_stream = await chain.stream({})

        // set response headers
        res.setHeader('Content-Type', 'application/json')
        res.setHeader('Transfer-Encoding', 'chunked')

        // create readable stream
        const readable = Readable.from(result_stream)

        res.status(201).write(`{"message": "Successful translation.", "response": "`);
        readable.on('data', (chunk) => {
            // Convert chunk to string and write it
            res.write(`${chunk}`);
        });
        readable.on('end', () => {
            // Close the JSON response properly
            res.write('" }');
            res.end();
        });
        readable.on('error', (err) => {
            console.error("Stream error:", err);
            res.status(500).json({ message: "Translation failed.", error: err.message });
        });
    } catch (e) {
        // deliver a 500 error response
        return res.status(500).json({
            message: 'Failed to send request.',
            error: e
        })
    }
})
```

To run the front end of the code, go to your BASE_URL with the
port given. This lets you run the chatbot shown above and achieve similar results. The chatbot itself is plain HTML, CSS, and JS, with JavaScript mainly using the Fetch API to get responses from the server.

Thanks for reading. I hope you play around with the code and learn some new things.

**Additional Reads**

- Introduction to LangChain.js
- Create an FAQ Bot on Azure
- Build a basic chat app in Python using Azure AI Foundry SDK

# Partner Case Study | Esri
With Esri's cloud-forward geospatial platform, organizations can view, edit, manage, and analyze data to make more informed decisions. The agricultural industry depends on data to feed and clothe populations from roughly 38% of the global land surface. In fact, according to the United Nations Environment Programme, by the year 2050 there will be a need to feed 9 to 10 billion people as the population continues to experience steady growth. That, coupled with environmental factors like climate change, presents a challenge for growers. The data the world relies on should be organized and housed in a way that is easily accessible even by the smallest growers. Esri, a global market leader in geographic information system (GIS) software and location intelligence (LI), transforms decision-making with spatial insights and mapping through its ArcGIS technology. Esri supports its customers in the agricultural industry to ensure every farmer has the data they need, and has developed a solution powered by Microsoft Azure to meet this growing need for accessible data.

Continue reading here. Explore all case studies or submit your own. Subscribe to the case studies tag to follow all new case study posts.

# How can I avoid overconsumption in Azure cloud, especially AKS?
Overconsumption in Azure Kubernetes Service can lead to unnecessary costs and resource wastage if it is not managed properly. Here are some key ways to avoid it:

1. Right-size your cluster:
   - Optimize node pools: use appropriate VM sizes for worker nodes based on workload requirements; do not provision more than what is required.
   - Use the autoscaling feature to scale nodes up and down.
   - Set requests and limits: define CPU and memory requests/limits for pods to prevent over-allocation.
2. Optimize workload scaling.
3. Enable monitoring to see where costs can be optimized.
4. Use managed AKS instead of shared AKS.
5. Use Azure Files or managed disks efficiently to optimize storage.
6. Clean up unused resources: delete idle workloads and unused namespaces.

# Securing VNet-Integrated Azure Functions with Blob Triggers: Private Endpoints and No Public Access
Azure Blob Trigger in Azure Functions enables automatic function invocation based on changes in Blob Storage, streamlining serverless integration with cloud storage. To ensure reliability, it handles failures by using poison blob queues and configurable retry mechanisms.

# VM Login Issues
Issue: The virtual machine has been created, but login is not possible. What could be the reason?

Root cause: Multiple stale IP addresses were attached to the virtual machine, possibly due to repeated recreation using the same name.

Solution: Recreate the virtual machine with a unique name in another subnet. This resolves the issue.

# Blueprint Opportunity for Designing and Implementing a Microsoft Azure AI Solution
Greetings! Microsoft is updating the certification for Designing and Implementing a Microsoft Azure AI Solution, and we need your input through our exam blueprinting survey. The blueprint determines how many questions each skill in the exam will be assigned. Please complete the online survey by February 24th, 2025. Please also feel free to forward the survey to any colleagues you consider subject matter experts for this certification. If you have any questions, feel free to contact John Sowles at josowles@microsoft.com or Rohan Mahadevan at rmahadevan@microsoft.com.

Designing and Implementing a Microsoft Azure AI Solution blueprint survey link: https://microsoftlearning.co1.qualtrics.com/jfe/form/SV_9tUvZ4THa0dhDU2

Thank you!

# What are the best practices for data governance in Azure across hybrid and multi-cloud environments?
Best practices for data governance in Azure start with a unified strategy that leverages Microsoft Purview for data cataloging, classification, and lineage tracking. Purview enables organizations to gain end-to-end visibility across their data estate, even in hybrid and multi-cloud environments.

Azure Policy plays a crucial role in enforcing governance by defining compliance rules that automatically apply across subscriptions and services. Combining it with Azure Blueprints can help ensure that governance frameworks are consistently deployed at scale. For enhanced security, integrating Microsoft Defender for Cloud allows continuous monitoring and risk assessment of data assets. Additionally, organizations should implement role-based access control (RBAC) and encryption mechanisms to safeguard sensitive information.

One challenge in multi-cloud governance is achieving real-time data classification and policy enforcement. Has anyone successfully extended Purview's capabilities to non-Azure environments, such as AWS or GCP?
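As a concrete illustration of the Azure Policy point above, a minimal custom policy definition might look like the sketch below. The display name is made up, and the rule shown (denying storage accounts that allow non-HTTPS traffic) is just one common example; adapt the alias and effect to your own compliance requirements:

```json
{
  "properties": {
    "displayName": "Require HTTPS-only storage accounts (example)",
    "mode": "All",
    "policyRule": {
      "if": {
        "allOf": [
          { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
          { "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly", "equals": "false" }
        ]
      },
      "then": { "effect": "deny" }
    }
  }
}
```

Assigned at a management group or subscription scope, a definition like this is evaluated automatically for every new or updated resource, which is what lets policy-as-code enforcement scale across many subscriptions.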