Create your own QA RAG Chatbot with LangChain.js + Azure OpenAI Service
Demo: Mpesa for Business Setup QA RAG Application

In this tutorial we are going to build a Question-Answering RAG chat web app. We use Node.js together with HTML, CSS, and JavaScript, and we incorporate LangChain.js + Azure OpenAI + a MongoDB vector store (MongoDB Search Index). Get a quick look below.

Note: Documents and illustrations shared here are for demo purposes only; Microsoft and its products are not affiliated with Mpesa. The content demonstrated here should be used for educational purposes only. Additionally, all views shared here are solely mine.

What you will need:
- An active Azure subscription; get Azure for Students for free or get started with Azure free for 12 months.
- VS Code
- Basic knowledge of JavaScript (not a must)
- Access to Azure OpenAI; click here if you don't have access.
- A MongoDB account (you can also use the Azure Cosmos DB vector store)

Setting Up the Project
To build this project, fork this repository and clone it. GitHub repository link: https://github.com/tiprock-network/azure-qa-rag-mpesa . Follow the steps highlighted in the README.md to set up the project under "Setting Up the Node.js Application".

Create Resources that you Need
You will need Azure CLI or Azure Developer CLI installed on your computer. Follow the steps indicated in the README.md to create Azure resources under "Azure Resources Set Up with Azure CLI". You might want to log in to Azure CLI differently, using a device code. Instead of a plain az login, you can run:

az login --use-device-code

Or, if you prefer the Azure Developer CLI, execute this command instead:

azd auth login --use-device-code

Remember to update the .env file with the values you used to name your Azure OpenAI instance and Azure models, as well as the API keys you obtained while creating your resources.
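To reduce naming confusion, here is a sketch of what the .env might contain. The variable names below are taken from the code snippets later in this post; all the placeholder values (and the BASE_URL value) are assumptions, so treat the repo's README as the source of truth:

```
# Values are placeholders; replace with your own resource names and keys.
AZURE_OPENAI_API_INSTANCE_NAME=<your-azure-openai-instance-name>
AZURE_OPENAI_API_DEPLOYMENT_NAME=<your-chat-model-deployment-name>
AZURE_OPENAI_API_DEPLOYMENT_EMBEDDING_NAME=<your-embedding-deployment-name>
AZURE_OPENAI_API_VERSION=<your-api-version>
BASE_URL=<your-api-base-path>
```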
Setting Up MongoDB
After accessing your MongoDB account, get the URI link to your database and add it to the .env file, along with your database name and the vector store collection name you specified while creating your indexes for a vector search.

Running the Project
To run this Node.js project, start it with the following command:

npm run dev

The Vector Store
The vector store used in this project is MongoDB, where the word embeddings are stored. From the embeddings model instance we created on Azure AI Foundry, we are able to create embeddings that can be stored in a vector store. The following code shows our embeddings model instance:

```javascript
// create a new embedding model instance
const azOpenEmbedding = new AzureOpenAIEmbeddings({
  azureADTokenProvider,
  azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
  azureOpenAIApiEmbeddingsDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_EMBEDDING_NAME,
  azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
  azureOpenAIBasePath: "https://eastus2.api.cognitive.microsoft.com/openai/deployments"
});
```

The code in uploadDoc.js offers a simple way to create embeddings and store them in MongoDB. In this approach, the text from the documents is loaded using the PDFLoader from the LangChain community package. The following code demonstrates how the embeddings are stored in the vector store.
```javascript
// Call the function and handle the result with await
const storeToCosmosVectorStore = async () => {
  try {
    const documents = await returnSplittedContent()

    // create store instance
    const store = await MongoDBAtlasVectorSearch.fromDocuments(
      documents,
      azOpenEmbedding,
      {
        collection: vectorCollection,
        indexName: "myrag_index",
        textKey: "text",
        embeddingKey: "embedding",
      }
    )

    if (!store) {
      console.log('Something wrong happened while creating store or getting store!')
      return false
    }

    console.log('Done creating/getting and uploading to store.')
    return true
  } catch (e) {
    console.log(`This error occurred: ${e}`)
    return false
  }
}
```

In this setup, Question Answering (QA) is achieved by integrating Azure OpenAI's GPT-4o with MongoDB Vector Search through LangChain.js. The system processes user queries via an LLM (Large Language Model), which retrieves relevant information from a vectorized database, ensuring contextual and accurate responses. Azure OpenAI Embeddings convert text into dense vector representations, enabling semantic search within MongoDB. The LangChain RunnableSequence structures the retrieval and response generation workflow, while the StringOutputParser ensures proper text formatting. The most relevant code snippets to include are: AzureChatOpenAI instantiation, MongoDB connection setup, and the API endpoint handling QA queries using vector search and embeddings. There are some code snippets below to explain major parts of the code.

Azure AI Chat Completion Model
This is the model used in this implementation of RAG, where we use it as the model for chat completion. Below is a code snippet for it.
```javascript
const llm = new AzureChatOpenAI({
  azTokenProvider,
  azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
  azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
  azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
})
```

Using a Runnable Sequence to give out Chat Output
This shows how a runnable sequence can be used to produce a response in the output format given by the output parser added to the chain.

```javascript
// Stream response
app.post(`${process.env.BASE_URL}/az-openai/runnable-sequence/stream/chat`, async (req, res) => {
  // check for a human message
  const { chatMsg } = req.body
  if (!chatMsg) return res.status(400).json({
    message: 'Hey, you didn\'t send anything.'
  })

  // put the code in an error handler
  try {
    // create a prompt template
    const prompt = ChatPromptTemplate.fromMessages([
      ["system", `You are a French-to-English translator that detects if a message isn't in French. If it's not, you respond, "This is not French." Otherwise, you translate it to English.`],
      ["human", `${chatMsg}`]
    ])

    // runnable chain
    const chain = RunnableSequence.from([prompt, llm, outPutParser])

    // stream the chain result (the prompt already contains the user message, so the input object is empty)
    let result_stream = await chain.stream({})

    // set response headers
    res.setHeader('Content-Type', 'application/json')
    res.setHeader('Transfer-Encoding', 'chunked')

    // create readable stream
    const readable = Readable.from(result_stream)
    res.status(201).write(`{"message": "Successful translation.", "response": "`);
    readable.on('data', (chunk) => {
      // convert chunk to string and write it
      res.write(`${chunk}`);
    });
    readable.on('end', () => {
      // close the JSON response properly
      res.write('" }');
      res.end();
    });
    readable.on('error', (err) => {
      console.error("Stream error:", err);
      res.status(500).json({ message: "Translation failed.", error: err.message });
    });
  } catch (e) {
    // deliver a 500 error response
    return res.status(500).json({
      message: 'Failed to send request.',
      error: e
    })
  }
})
```

To run the front end of the code, go to your BASE_URL with the
port given. This enables you to run the chatbot above and achieve similar results. The chatbot is basically HTML + CSS + JS, where JavaScript is mainly used with the Fetch API to get a response. Thanks for reading. I hope you play around with the code and learn some new things.

Additional Reads
Introduction to LangChain.js
Create an FAQ Bot on Azure
Build a basic chat app in Python using Azure AI Foundry SDK

Use AI for Free with GitHub Models and TypeScript! 💸💸💸
Learn how to use AI for free with GitHub Models! Test models like GPT-4o without paying for APIs or setting up infrastructure. This step-by-step guide shows how to integrate GitHub Models with TypeScript in the Microblog AI Remix project. Start exploring AI for free today!

Unlocking the Power of Azure Container Apps in 1 Minute Video
Azure Container Apps provides a seamless way to build, deploy, and scale cloud-native applications without the complexity of managing infrastructure. Whether you're developing microservices, APIs, or AI-powered applications, this fully managed service enables you to focus on writing code while Azure handles scalability, networking, and deployments. In this blog post, we explore five essential aspects of Azure Container Apps, each highlighted in a one-minute video. From intelligent applications and secure networking to effortless deployments and rollbacks, these insights will help you maximize the capabilities of serverless containers on Azure.

Azure Container Apps - in 1 Minute
Azure Container Apps is a fully managed platform designed for cloud-native applications, providing effortless deployment and scaling. It eliminates infrastructure complexity, letting developers focus on writing code while Azure automatically handles scaling based on demand. Whether running APIs, event-driven applications, or microservices, Azure Container Apps ensures high performance and flexibility with minimal operational overhead.

Watch the video on YouTube

Intelligent Apps with Azure Container Apps – in 1 Minute
Azure Container Apps, Azure OpenAI, and Azure AI Search make it possible to build intelligent applications with Retrieval-Augmented Generation (RAG). Your app can call Azure OpenAI in real-time to generate and interpret data, while Azure AI Search retrieves relevant information, enhancing responses with up-to-date context. For advanced scenarios, AI models can execute live code via Azure Container Apps, and GPU-powered instances support fine-tuning and inferencing at scale. This seamless integration enables AI-driven applications to deliver dynamic, context-aware functionality with ease.
Watch the video on YouTube

Networking for Azure Container Apps: VNETs, Security Simplified – in 1 Minute
Azure Container Apps provides built-in networking features, including support for Virtual Networks (VNETs) to control service-to-service communication. Secure internal traffic while exposing public endpoints with custom domain names and free certificates. Fine-tuned ingress and egress controls ensure that only the right traffic gets through, maintaining a balance between security and accessibility. Service discovery is automatic, making inter-app communication seamless within your Azure Container Apps environment.

Watch the video on YouTube

Azure Continuous Deployment and Observability with Azure Container Apps - in 1 Minute
Azure Container Apps simplifies continuous deployment with built-in integrations for GitHub Actions and Azure DevOps pipelines. Every code change triggers a revision, ensuring smooth rollouts with zero downtime. Observability is fully integrated via Azure Monitor, Log Streaming, and the Container Console, allowing you to track performance, debug live issues, and maintain real-time visibility into your app's health, all without interrupting operations.

Watch the video on YouTube

Effortless Rollbacks and Deployments with Azure Container Apps – in 1 Minute
With Azure Container Apps, every deployment creates a new revision, allowing multiple versions to run simultaneously. This enables safe, real-time testing of updates without disrupting production. Rolling back is instant: just select a previous revision and restore your app effortlessly. This powerful revision control system ensures that deployments remain flexible, reliable, and low-risk.

Watch the video on YouTube

Watch the Full Playlist
For a complete overview of Azure Container Apps capabilities, watch the full JavaScript on Azure Container Apps YouTube playlist.

Create Your Own AI-Powered Video Content
Inspired by these short-form technical videos?
You can create your own AI-generated videos using Azure AI to automate scriptwriting and voiceovers. Whether you're a content creator or a business looking to showcase technical concepts, Azure AI makes it easy to generate professional-looking explainer content. Learn how to create engaging short videos with Azure AI by following our open-source AI Video Playbook.

Conclusion
Azure Container Apps is designed to simplify modern application development by providing a fully managed, serverless container environment. Whether you need to scale microservices, integrate AI capabilities, enhance security with VNETs, or streamline CI/CD workflows, Azure Container Apps offers a comprehensive solution. By leveraging its built-in features such as automatic scaling, revision-based rollbacks, and deep observability, developers can deploy and manage applications with confidence. These one-minute videos provide a quick technical overview of how Azure Container Apps empowers you to build scalable, resilient applications with ease.

FREE Content
Check out our other FREE content to learn more about Azure services and Generative AI:
Generative AI for Beginners - A JavaScript Adventure!
Learn more about Azure AI Agent Service
LlamaIndex on Azure
JavaScript on Azure Container Apps
JavaScript at Microsoft

Supercharge Your TypeScript Workflow: ESLint, Prettier, and Build Tools
Introduction
TypeScript has become the go-to language for modern JavaScript developers, offering static typing, better tooling, and improved maintainability. But writing clean and efficient TypeScript isn't just about knowing the syntax; it's about using the right tools to enhance your workflow. In this blog, we'll explore essential TypeScript tools like ESLint, Prettier, tsconfig settings, and VS Code extensions to help you write better code, catch errors early, and boost productivity. By the end, you'll have a fully optimized TypeScript development environment with links to quality resources to deepen your knowledge.

Why Tooling Matters in TypeScript Development
Unlike JavaScript, TypeScript enforces static typing and requires compilation. This means proper tooling can:
✅ Catch syntax and type errors early.
✅ Ensure consistent formatting across your project.
✅ Improve code maintainability and collaboration.
✅ Enhance debugging and refactoring efficiency.
Now, let's dive into the must-have TypeScript tools for an optimal workflow!

Setting Up ESLint for TypeScript 🛠
ESLint is a linter that helps catch errors, bad practices, and inconsistencies in your TypeScript code.

Installing ESLint in a TypeScript Project
First, install the required packages for ESLint, TypeScript, and our tooling:

npm install --save-dev eslint @eslint/js typescript typescript-eslint

Configuring ESLint
Create an eslint.config.mjs file in your project root and populate it with the following:

```javascript
// @ts-check
import eslint from '@eslint/js';
import tseslint from 'typescript-eslint';

export default tseslint.config(
  eslint.configs.recommended,
  tseslint.configs.recommended,
);
```

This setup:
✅ Uses the TypeScript parser to understand TypeScript syntax.
✅ Enables the recommended TypeScript rules to enforce best practices.
✅ Disables some strict rules for flexibility (can be adjusted later).

Running ESLint
To check your code for errors, run:

pnpm eslint .

💡 Pro Tip: Add "lint": "pnpm eslint ."
in package.json scripts to run ESLint easily with pnpm lint. ESLint will lint all TypeScript-compatible files within the current folder and output the results to your terminal.

Optimizing tsconfig.json for Better Type Safety
TypeScript's compiler settings (tsconfig.json) control how TypeScript checks and compiles your code.

Recommended tsconfig.json Setup
💡 Pro Tip: Use tsc --noEmit to check for type errors without compiling the code.

Debugging TypeScript in VS Code 🐞
VS Code provides built-in debugging for TypeScript.

Setting Up Debugging
Go to Run & Debug (Ctrl + Shift + D) → Click "Create a launch.json file". Select "Node.js". Modify .vscode/launch.json. Run the debugger by pressing F5.
💡 Pro Tip: Set breakpoints in .ts files and VS Code will map them correctly to .js files using source maps.

Best VS Code Extensions for TypeScript
Boost your productivity with these must-have extensions:
✅ Prettier ESLint TypeScript Formatter – Formats TypeScript code through Prettier, then through ESLint.
✅ Path Intellisense – Auto-suggests import paths.
✅ Error Lens – Highlights TypeScript errors inline.

MS Learn Resources to Deepen Your Knowledge 📚
Here are some official Microsoft Learn resources to help you master TypeScript tooling:
Using ESLint and Prettier in Visual Studio Code
Linting JavaScript/Typescript in Visual Studio
Getting Started with ESLint

Why Every JavaScript Developer Should Try TypeScript
Introduction
"Why did the JavaScript developer break up with TypeScript?" "Because they couldn't handle the commitment!"

As a student entrepreneur, you're constantly juggling coursework, projects, and maybe even a startup idea. You don't have time to debug mysterious JavaScript errors at 2 AM. That's where TypeScript comes in, helping you write cleaner, more reliable code so you can focus on building, not debugging. In this post, I'll show you why TypeScript is a must-have skill for any student developer and how it can set your projects up for success.

Overview of TypeScript
JavaScript, the world's most-used programming language, powers cross-platform applications but wasn't designed for large-scale projects. It lacks some features needed for managing extensive codebases, which makes it challenging for IDEs to provide deep tooling support. TypeScript overcomes these limitations while preserving JavaScript's versatility, ensuring code runs seamlessly across platforms, browsers, and hosts.

What is TypeScript?
TypeScript is an open-source, strongly typed superset of JavaScript that compiles down to regular JavaScript. Created by Microsoft, it introduces static typing, interfaces, and modern JavaScript features, making it a favorite for both small projects and enterprise applications.

Why Should Student Entrepreneurs Care About TypeScript?
TypeScript Saves You Time: You know that feeling when your JavaScript app breaks for no reason just before a hackathon deadline? TypeScript catches errors before your code even runs, so you don't waste hours debugging.
TypeScript Makes Your Code More Professional: If you're building a startup, investors and potential employers will look at your code. TypeScript makes your projects scalable, readable, and industry-ready.
TypeScript Helps You Learn Faster: As a student, you're still learning. TypeScript's autocomplete and type hints guide you, reducing the number of Google searches you need.
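To make the "catches errors before your code even runs" point concrete, here is a small illustrative sketch; the function and values are invented for this post:

```typescript
// Type annotations document intent and let the compiler check every caller.
function addTax(price: number, tax: number): number {
  return price + tax;
}

console.log(addTax(100, 8)); // 108

// In plain JavaScript, addTax("100", 8) silently returns the string "1008".
// TypeScript rejects the call at compile time instead:
// addTax("100", 8);
// error TS2345: Argument of type 'string' is not assignable to parameter of type 'number'.
```

The bug never reaches a user, because the program refuses to compile until the caller is fixed.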
For a beginner-friendly introduction to TypeScript, check out this MS Learn module: 🔗 Introduction to TypeScript

Setting Up TypeScript in 5 Minutes
Prerequisites: knowledge of JavaScript, Node.js, and a code editor such as Visual Studio Code.

Install TypeScript
TypeScript is available as a package in the npm registry as typescript. To install the latest version of TypeScript, enter the following in the Command Prompt window:

npm install -g typescript

Enter tsc to confirm that TypeScript is installed. If it was successfully installed, this command shows a list of compiler commands and options.

Create a new TypeScript file
Create a new folder on your desktop called "demo", right-click the folder icon, and select Open with VS Code. When VS Code opens, click the add-file icon and create a new file, "index.ts". Let's write a simple function to add two numbers.

Compile a TypeScript file
TypeScript is a strict superset of ECMAScript 2015 (ECMAScript 6 or ES6). All JavaScript code is also TypeScript code, and a TypeScript program can seamlessly consume JavaScript. You can convert a JavaScript file to a TypeScript file just by renaming the extension from .js to .ts. However, not all TypeScript code is JavaScript code. TypeScript adds new syntax to JavaScript, which makes the JavaScript easier to read and implements some features, such as static typing. You transform TypeScript code into JavaScript code by using the TypeScript compiler. You run the TypeScript compiler at the command prompt by using the tsc command. When you run tsc with no parameters, it compiles all the .ts files in the current folder and generates a .js file for each one. To compile our code, open the terminal in VS Code and type:

tsc index.ts

Notice that a new JavaScript file has been added. You might need to refresh the Explorer pane to view the file. At the terminal command prompt, enter node index.js. This command runs the JavaScript and displays the result in the console log. And that's it!
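The "simple function to add two numbers" in the walkthrough above was shown as a screenshot that did not survive as text; a minimal stand-in for index.ts might look like this (the exact code in the original may differ):

```typescript
// index.ts: compile with `tsc index.ts`, then run with `node index.js`
function add(a: number, b: number): number {
  return a + b;
}

console.log(add(2, 3)); // 5
```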
🎉 Core TypeScript Features Every Developer Should Know
Static Typing for Safer Code – TypeScript's static typing prevents many runtime errors by catching type mismatches at compile time, making code more reliable and blocking unintended assignments.
Interfaces for Better Object Structures – Interfaces help define the structure of objects, ensuring consistency and maintainability across a codebase.
Enums for Readable Constants – Enums define named constants, making code more readable and reducing the risk of using incorrect values.
Generics for Reusable Code – Generics allow you to create flexible, type-safe functions and components that work with multiple data types.
Type Assertions for Flexibility – Type assertions let you explicitly specify a value's type when TypeScript cannot infer it correctly, enhancing type safety in dynamic scenarios.

Conclusion: TypeScript is Your Superpower 🚀
TypeScript is more than just a superset of JavaScript; it's a game-changer for developers, especially those working on large-scale projects or building career-defining applications. By introducing static typing, interfaces, enums, generics, and type assertions, TypeScript helps eliminate common JavaScript pitfalls while maintaining flexibility. These features not only enhance code quality and maintainability but also improve collaboration among teams, ensuring that projects scale smoothly. Whether you're a student entrepreneur, a freelancer, or a professional developer, adopting TypeScript early will give you a competitive edge in the industry. Embracing TypeScript means writing safer, cleaner, and more efficient code without sacrificing JavaScript's versatility. With its powerful developer tools and seamless integration with modern frameworks, TypeScript ensures that your code remains robust and adaptable to changing requirements.
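The five core features listed earlier fit in one short sketch; every name in it (Product, Status, firstItem) is invented for illustration:

```typescript
// Interface: describes an object's shape.
interface Product {
  name: string;
  price: number;
}

// Enum: named constants instead of magic strings.
enum Status {
  Active = "ACTIVE",
  Discontinued = "DISCONTINUED",
}

// Generic: one type-safe function for any element type.
function firstItem<T>(items: T[]): T | undefined {
  return items[0];
}

const laptop: Product = { name: "Laptop", price: 999 };

// Type assertion: tell the compiler a type it cannot infer on its own.
const data: unknown = JSON.parse('{"name":"Mouse","price":25}');
const mouse = data as Product;

console.log(firstItem([laptop, mouse])?.name); // "Laptop"
console.log(Status.Active); // "ACTIVE"
```

Each piece is checked at compile time: assigning a string to price, passing a Status where a Product is expected, or misusing the generic's return type would all be rejected before the code runs.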
As the demand for TypeScript continues to grow, learning and using it in your projects will open new opportunities and set you apart in the ever-evolving world of web development.

Read More and Do More with TypeScript
Declare variables in TypeScript
TypeScript repository on GitHub
TypeScript tutorial in Visual Studio Code
Build JavaScript applications using TypeScript

PDF viewer does not work with JavaScript
Hi
All of our corporate documents are managed in a document solution that generates PDF files from the documents, with a specific header in the document stating when they were requested, etc. This header uses JavaScript built into the PDF, which works just fine in Adobe Acrobat Reader, Internet Explorer (as it uses Acrobat Reader), etc. But sadly, in Edge it only loads the top of the header on each page and displays an error saying: "TypeError: this.info.toSource is not a function". The rest of each page is missing (see screenshot below). This means there are thousands of documents that cannot be viewed with the built-in PDF viewer in Edge; we have to manually download each file and then open it in Adobe Reader.

Add speech input & output to your app with the free browser APIs
One of the amazing benefits of modern machine learning is that computers can reliably turn text into speech, or transcribe speech into text, across multiple languages and accents. We can then use those capabilities to make our web apps more accessible for anyone who has a situational, temporary, or chronic issue that makes typing difficult. That describes so many people - for example, a parent holding a squirmy toddler in their hands, an athlete with a broken arm, or an individual with Parkinson's disease.

There are two approaches we can use to add speech capabilities to our apps:
1. Use the built-in browser APIs: the SpeechRecognition API and SpeechSynthesis API.
2. Use a cloud-based service, like the Azure Speech API.

Which one to use? The great thing about the browser APIs is that they're free and available in most modern browsers and operating systems. The drawback of the APIs is that they're often not as powerful and flexible as cloud-based services, and the speech output often sounds more robotic. There are also a few niche browser/OS combos where the built-in APIs don't work. That's why we decided to add both options to our most popular RAG chat solution, to give developers the option to decide for themselves. However, in this post, I'm going to show you how to add speech capabilities using the free built-in browser APIs, since free APIs are often easier to get started with and it's important to do what we can to improve the accessibility of our apps. The GIF below shows the end result, a chat app with both speech input and output buttons:

Speech input with SpeechRecognition API
To make it easier to add a speech input button to any app, I'm wrapping the functionality inside a custom HTML element, SpeechInputButton.
First I construct the speech input button element with an instance of the SpeechRecognition API, making sure to use the browser's preferred language if any are set:

```javascript
class SpeechInputButton extends HTMLElement {
  constructor() {
    super();
    this.isRecording = false;
    const SpeechRecognition =
      window.SpeechRecognition || window.webkitSpeechRecognition;
    if (!SpeechRecognition) {
      this.dispatchEvent(
        new CustomEvent("speecherror", {
          detail: { error: "SpeechRecognition not supported" },
        })
      );
      return;
    }
    this.speechRecognition = new SpeechRecognition();
    this.speechRecognition.lang = navigator.language || navigator.userLanguage;
    this.speechRecognition.interimResults = false;
    this.speechRecognition.continuous = true;
    this.speechRecognition.maxAlternatives = 1;
  }
```

Then I define the connectedCallback() method that will be called whenever this custom element has been added to the DOM. When that happens, I define the inner HTML to render a button and attach event listeners for both mouse and keyboard events. Since we want this to be fully accessible, keyboard support is important.

```javascript
  connectedCallback() {
    this.innerHTML = `
      <button class="btn btn-outline-secondary" type="button" title="Start recording (Shift + Space)">
        <i class="bi bi-mic"></i>
      </button>`;
    this.recordButton = this.querySelector('button');
    this.recordButton.addEventListener('click', () => this.toggleRecording());
    document.addEventListener('keydown', this.handleKeydown.bind(this));
  }

  handleKeydown(event) {
    if (event.key === 'Escape') {
      this.abortRecording();
    } else if (event.key === ' ' && event.shiftKey) { // Shift + Space
      event.preventDefault();
      this.toggleRecording();
    }
  }

  toggleRecording() {
    if (this.isRecording) {
      this.stopRecording();
    } else {
      this.startRecording();
    }
  }
```

The majority of the code is in the startRecording function. It sets up a listener for the "result" event from the SpeechRecognition instance, which contains the transcribed text.
It also sets up a listener for the "end" event, which is triggered either automatically after a few seconds of silence (in some browsers) or when the user ends the recording by clicking the button. Finally, it sets up a listener for any "error" events. Once all listeners are ready, it calls start() on the SpeechRecognition instance and styles the button to be in an active state.

```javascript
  startRecording() {
    if (this.speechRecognition == null) {
      this.dispatchEvent(
        new CustomEvent("speech-input-error", {
          detail: { error: "SpeechRecognition not supported" },
        })
      );
    }

    this.speechRecognition.onresult = (event) => {
      let input = "";
      for (const result of event.results) {
        input += result[0].transcript;
      }
      this.dispatchEvent(
        new CustomEvent("speech-input-result", {
          detail: { transcript: input },
        })
      );
    };

    this.speechRecognition.onend = () => {
      this.isRecording = false;
      this.renderButtonOff();
      this.dispatchEvent(new Event("speech-input-end"));
    };

    this.speechRecognition.onerror = (event) => {
      if (this.speechRecognition) {
        this.speechRecognition.stop();
        if (event.error == "no-speech") {
          this.dispatchEvent(
            new CustomEvent("speech-input-error", {
              detail: { error: "No speech was detected. Please check your system audio settings and try again." },
            }));
        } else if (event.error == "language-not-supported") {
          this.dispatchEvent(
            new CustomEvent("speech-input-error", {
              detail: { error: "The selected language is not supported. Please try a different language." },
            }));
        } else if (event.error != "aborted") {
          this.dispatchEvent(
            new CustomEvent("speech-input-error", {
              detail: { error: "An error occurred while recording. Please try again: " + event.error },
            }));
        }
      }
    };

    this.speechRecognition.start();
    this.isRecording = true;
    this.renderButtonOn();
  }
```

If the user stops the recording using the keyboard shortcut or button click, we call stop() on the SpeechRecognition instance. At that point, anything the user had said will be transcribed and become available via the "result" event.
```javascript
  stopRecording() {
    if (this.speechRecognition) {
      this.speechRecognition.stop();
    }
  }
```

Alternatively, if the user presses the Escape keyboard shortcut, we instead call abort() on the SpeechRecognition instance, which stops the recording and does not send any previously untranscribed speech over.

```javascript
  abortRecording() {
    if (this.speechRecognition) {
      this.speechRecognition.abort();
    }
  }
```

Once the custom HTML element is fully defined, we register it with the desired tag name, speech-input-button:

```javascript
customElements.define("speech-input-button", SpeechInputButton);
```

To use the custom speech-input-button element in a chat application, we add it to the HTML for the chat form:

```html
<speech-input-button></speech-input-button>
<input id="message" name="message" type="text" rows="1"></input>
```

Then we attach an event listener for the custom events dispatched by the element, and we update the input text field with the transcribed text:

```javascript
const speechInputButton = document.querySelector("speech-input-button");
speechInputButton.addEventListener("speech-input-result", (event) => {
  messageInput.value += " " + event.detail.transcript.trim();
  messageInput.focus();
});
```

You can see the full custom HTML element code in speech-input.js and the usage in index.html. There's also a fun pulsing animation for the button's active state in styles.css.

Speech output with SpeechSynthesis API
Once again, to make it easier to add a speech output button to any app, I'm wrapping the functionality inside a custom HTML element, SpeechOutputButton. When defining the custom element, we specify an observed attribute named "text", to store whatever text should be turned into speech when the button is clicked.

```javascript
class SpeechOutputButton extends HTMLElement {
  static observedAttributes = ["text"];
```

In the constructor, we check to make sure the SpeechSynthesis API is supported, and remember the browser's preferred language for later use.
constructor() { super(); this.isPlaying = false; const SpeechSynthesis = window.speechSynthesis || window.webkitSpeechSynthesis; if (!SpeechSynthesis) { this.dispatchEvent( new CustomEvent("speech-output-error", { detail: { error: "SpeechSynthesis not supported" } })); return; } this.synth = SpeechSynthesis; this.lngCode = navigator.language || navigator.userLanguage; } When the custom element is added to the DOM, I define the inner HTML to render a button and attach mouse and keyboard event listeners: connectedCallback() { this.innerHTML = ` <button class="btn btn-outline-secondary" type="button"> <i class="bi bi-volume-up"></i> </button>`; this.speechButton = this.querySelector("button"); this.speechButton.addEventListener("click", () => this.toggleSpeechOutput() ); document.addEventListener('keydown', this.handleKeydown.bind(this)); } The majority of the code is in the toggleSpeechOutput function. If the speech is not yet playing, it creates a new SpeechSynthesisUtterance instance, passes it the "text" attribute, and sets the language and audio properties. It attempts to use a voice that's optimal for the desired language, but falls back to "en-US" if none is found. It attaches event listeners for the start and end events, which will change the button's style to look either active or unactive. Finally, it tells the SpeechSynthesis API to speak the utterance. toggleSpeechOutput() { if (!this.isConnected) { return; } const text = this.getAttribute("text"); if (this.synth != null) { if (this.isPlaying || text === "") { this.stopSpeech(); return; } // Create a new utterance and play it. 
const utterance = new SpeechSynthesisUtterance(text); utterance.lang = this.lngCode; utterance.volume = 1; utterance.rate = 1; utterance.pitch = 1; let voice = this.synth .getVoices() .filter((voice) => voice.lang === this.lngCode)[0]; if (!voice) { voice = this.synth .getVoices() .filter((voice) => voice.lang === "en-US")[0]; } utterance.voice = voice; if (!utterance) { return; } utterance.onstart = () => { this.isPlaying = true; this.renderButtonOn(); }; utterance.onend = () => { this.isPlaying = false; this.renderButtonOff(); }; this.synth.speak(utterance); } } When the user no longer wants to hear the speech output, indicated either via another press of the button or by pressing the Escape key, we call cancel() from the SpeechSynthesis API. stopSpeech() { if (this.synth) { this.synth.cancel(); this.isPlaying = false; this.renderButtonOff(); } } Once the custom HTML element is fully defined, we register it with the desired tag name, speech-output-button: customElements.define("speech-output-button", SpeechOutputButton); To use this custom speech-output-button element in a chat application, we construct it dynamically each time that we've received a full response from an LLM, and call setAttribute to pass in the text to be spoken: const speechOutput = document.createElement("speech-output-button"); speechOutput.setAttribute("text", answer); messageDiv.appendChild(speechOutput); You can see the full custom HTML element code in speech-output.js and the usage in index.html. This button also uses the same pulsing animation for the active state, defined in styles.css. Acknowledgments I want to give a huge shout-out to John Aziz for his amazing work adding speech input and output to the azure-search-openai-demo, as that was the basis for the code I shared in this blog post.
This Month in Azure Static Web Apps | 10/2024
We’re back with another edition of the Azure Static Web Apps Community! 🎉 October was a month full of incredible contributions from the Technical Community! 🚀 If you’d like to learn more about Azure Static Web Apps, we have: 🔹 Tutorials 🔹 Videos 🔹 Sample Code 🔹 Official Documentation 🔹 And much more! Want to be featured here next month but don’t know how? Keep reading and find out how to participate at the end of this article! 😉 🤝Special Thanks A big thank you to everyone who contributed amazing content to the community! You are the reason this community is so special! ❤️ Let’s dive into this month’s highlights! 🌟Community Content Highlights – October 2024 Below are the key contributions created by the community this month. Video: Azure Data API Builder Community Standup - Static Web Apps Date: October 2, 2024 Author: Microsoft Azure Developers Link: Azure Data API Builder Community Standup - Static Web Apps The Azure Data API Builder Community Standup showcased how Azure Static Web Apps simplifies the development and deployment of static applications integrated with databases on Azure. The session explored connecting front-end apps to databases like Cosmos DB, Azure SQL, MySQL, and PostgreSQL using REST or GraphQL endpoints provided by the Data API Builder. The integration with Azure Static Web Apps offers a managed experience for the Data API Builder, eliminating container management and ensuring a simple and efficient setup. Highlights included automatic database connection initialization via the swa db init command and configuration files to define schemas and access permissions. The Database Connections feature, currently in preview, was showcased as an ideal solution for use cases requiring quick API creation. This service is perfect for building proof-of-concept projects swiftly and scalably, with continuous deployment using GitHub or Azure DevOps repositories.
Additionally, Azure Static Web Apps were highlighted for hosting front-end resources like React and Blazor, combining data APIs and user interfaces in an optimized developer environment. The session also included a practical example of creating a CRUD application connected to Cosmos DB, demonstrating how Azure Static Web Apps streamline the rapid and secure implementation of modern projects. Explore more about Azure Static Web Apps capabilities and best practices in the full content. Article: Hugo Deployed to Azure Static Web Apps Date: October 14, 2024 Author: CyberWatchDoug Link: Hugo Deployed to Azure Static Web Apps The article "Hugo Deployed to Azure Static Web Apps" details the process of deploying Hugo-built websites on Azure Static Web Apps (SWA), emphasizing the simplicity and flexibility provided by integration with GitHub Actions. The publication provides a step-by-step guide for setting up a static application in Azure, including GitHub authentication, repository selection, and configuring specific presets for Hugo. Additionally, the article addresses common questions about Hugo's version and explains how to customize the GitHub Actions workflow file to define environment variables like PLATFORM_NAME and HUGO_VERSION, ensuring proper build execution. Azure SWA's integration is highlighted as an efficient solution for managing automated deployments, while tools like Oryx are suggested for additional build process control. The article also explores the potential for infrastructure customization to meet specific needs. With clear and practical guidelines, the article serves as an excellent introduction to using Azure Static Web Apps for developers interested in deploying Hugo sites quickly and efficiently. 
Article: Implementing CI/CD for Azure Static Web Apps with GitHub Actions Date: October 22, 2024 Author: Syncfusion Link: Implementing CI/CD for Azure Static Web Apps with GitHub Actions The article Implementing CI/CD for Azure Static Web Apps with GitHub Actions offers a comprehensive guide for setting up continuous integration and continuous deployment (CI/CD) pipelines with Azure Static Web Apps. It highlights how the native integration with GitHub simplifies automatic deployments, allowing changes to be published as soon as code is pushed to the repository. The benefits presented include integrated support for popular frameworks like React, Angular, and Vue.js, along with features such as custom domains, automatic SSL certificates, and global content delivery. The article details the setup steps, from creating the resource in the Azure portal to generating automated workflows in GitHub Actions. Best practices are explored, such as using Azure Key Vault for credential security and caching to optimize build and deployment times. Monitoring deployments is addressed with native Azure tools and integrations with Slack or Microsoft Teams for real-time notifications. The article emphasizes the cost-effectiveness of Azure Static Web Apps, especially for small projects or startups, thanks to its free tier that includes essential features. Check out the full content to understand how to apply these practices to your workflow and take advantage of this managed solution. Article: Deploying your portal with Azure Static Web Apps Date: October 24, 2024 Author: Qlik Talend Help home Link: Deploying your portal with Azure Static Web Apps This article provides a step-by-step guide to implementing a portal using Azure Static Web Apps by connecting a GitHub repository to the service for automated deployment. The process includes creating a Static Web App in Azure, configuring repositories and branches in GitHub, and using GitHub Actions for build and deployment automation. 
The integration with GitHub simplifies the development workflow and supports build tools like Hugo to generate static sites. Additionally, it mentions the automatically generated URL in Azure to access the portal after publication. Check out the full material to understand how Azure Static Web Apps facilitates creating and publishing static applications. Video: A Beginner’s Guide to Azure Static Web Apps Free Hosting for Blazor, React, Angular, Vue, & more! Date: October 21, 2024 Author: CliffTech Link: A Beginner’s Guide to Azure Static Web Apps Free Hosting for Blazor, React, Angular, Vue, & more! This video demonstrates a step-by-step process for hosting a React application using Azure Static Web Apps, highlighting the benefits of this platform for front-end developers. It explores the differences between Azure Static Web Apps and Azure App Service, explaining that the former is ideal for static applications and provides features like automated CI/CD pipelines and GitHub integration for continuous deployment. The tutorial covers creating a Resource Group in the Azure portal, configuring the Azure Static Web Apps service, and selecting source code directly from GitHub. The automated pipelines functionality is highlighted, ensuring that any update to the main branch code is automatically published to the production environment. Additionally, the video explains how to customize the deployment, adjust output folders in the project's build, and add custom domains to personalize the application's URL. The platform is praised for its simplicity and agility, recommended for personal projects, hobbies, or even production, depending on the application's demands. Watch the video for detailed instructions and learn how this solution simplifies deploying modern applications with frameworks like React, Angular, Vue, and Next.js. 
Article: Configure File in Azure Static Web Apps Date: October 31, 2024 Author: TechCommunity Link: Configure File in Azure Static Web Apps The article Configure File in Azure Static Web Apps explains how to customize settings in the Azure Static Web Apps service through the staticwebapp.config.json file. It covers different configuration scenarios depending on the type of application: no framework, pre-compiled frameworks (like MkDocs), and frameworks built during the deployment process (like React). Practical examples, such as customizing the Access-Control-Allow-Origin header, are provided, detailing where to place the configuration file and how to adjust CI/CD workflows, whether using GitHub Actions or Azure DevOps. The article also highlights best practices for integrating environment variables and handling dynamic build directories, ensuring that configurations are correctly applied. This is an essential guide for developers looking to customize their applications on Azure Static Web Apps and optimize the deployment process with modern frameworks. Explore the full article to learn more. Video: User Group App - Day 2: Deploy to Static Web Apps Date: October 30, 2024 Author: The Dev Talk Show Link: User Group App - Day 2: Deploy to Static Web Apps This video provides a step-by-step guide to deploying an application on Azure Static Web Apps in an automated way. During the demonstration, the presenters explore different approaches to configure and manage the service, highlighting tools like Azure CLI and Azure Developer CLI to simplify the resource creation and deployment process. They also discuss best automation practices, such as generating reusable scripts and integrating with CI/CD pipelines via GitHub Actions. The concept of "automate everything" is emphasized as an essential strategy to ensure consistency and efficiency in projects. 
Furthermore, challenges and necessary configurations for linking GitHub repositories to the service are addressed, making the deployment of new versions faster and more integrated. Watch the full video to learn how to structure and automate the deployment of applications using Azure Static Web Apps. Documentation: Static React Web App + Functions with C# API and SQL Database on Azure Date: October 10, 2024 Author: Microsoft Learn Link: Static React Web App + Functions with C# API and SQL Database on Azure This guide outlines how to create and deploy a static application using Azure Static Web Apps, with a React-based front-end and an Azure Functions back-end using a C# API and Azure SQL Database. The architecture highlights integration with complementary services like Azure Monitor for monitoring and Azure Key Vault for credential security. The guide includes a template for quick customization and configuration using the Azure Developer CLI (azd), making provisioning and deployment straightforward with commands like azd up. Security features such as managed identities and advanced options like integration with Azure API Management (APIM) for backend protection are also covered. Additionally, the guide explores how to set up CI/CD pipelines, perform active monitoring, and debug locally, showcasing the flexibility and potential of Azure Static Web Apps as a practical and scalable solution for modern applications. Article: Simple Steps to Deploy Angular Application on Azure Service Date: October 16, 2024 Author: Codewave Link: Simple Steps to Deploy Angular Application on Azure Service This article provides a detailed guide for deploying Angular applications using Azure Static Web Apps, from prerequisites to launching the application in production. It highlights how this Azure service simplifies the deployment process, offering GitHub integration, pipeline automation, and scalable infrastructure. 
Initially, the article covers the basics of starting the project, such as creating a GitHub repository and setting up the Angular application using Angular CLI and Node.js. From there, it explores creating a Static Web App resource in the Azure portal, where integration with the GitHub repository is directly configured. This integration automates the entire build and deployment process, ensuring agility and precision. Key highlights include the simplicity of Azure's Angular presets, optimizing configuration steps like defining the application directory and output folder for final build files. The article also emphasizes that Azure Static Web Apps provides benefits like global infrastructure to minimize latency, advanced security measures to protect application data, and high reliability in content delivery. Finally, the deployment process is described as efficient and straightforward, with the application being published within minutes. The Azure-generated URL ensures global accessibility and optimized performance for users. The article not only presents the technical steps for using Azure Static Web Apps but also highlights its ability to improve the developer experience and provide scalable solutions for Angular applications. Explore the full content to understand each step and make the most of this powerful Azure tool. Article: End-to-End Full-Stack Web Application with Azure AD B2C Authentication: A Complete Guide Date: October 21, 2024 Author: TechCommunity Link: End-to-End Full-Stack Web Application with Azure AD B2C Authentication: A Complete Guide This article guides the creation of a full-stack application using Azure Static Web Apps to host a React-developed front-end integrated with Azure AD B2C for authentication and authorization. The service is highlighted for its automated deployment via GitHub Actions, enabling CI/CD pipeline configuration to manage front-end builds and publishing directly on the platform. 
The article explores setting up Azure Static Web Apps-specific environment variables, such as redirect URLs and authentication scopes, to ensure efficient backend integration. It also covers how Azure Static Web Apps connects with complementary services like Azure Web Apps for the backend and Azure SQL Database, forming a modern, scalable architecture. The documentation emphasizes using tools like MSAL to handle login flows on the front end and highlights the simplicity of Azure Static Web Apps in supporting modern and secure applications. For more details on implementation and configuration, check out the full article. Article: Case Study: E-Commerce App Deployment Using Azure AKS Date: October 24, 2024 Author: Shubham Gupta Link: Case Study: E-Commerce App Deployment Using Azure AKS This case study explores using Azure Static Web Apps to host the front-end of a microservices-based e-commerce application, highlighting its integration with Azure Kubernetes Service (AKS). The article demonstrates how the service facilitates connecting backend APIs hosted on AKS using custom domains configured via Ingress and Nginx. The ReactJS front-end is deployed on Azure Static Web Apps, leveraging its simplicity in configuration and built-in API support. API calls using fetch() to consume backend services are showcased, emphasizing how the service enables a seamless interaction between front-end and backend components. Additionally, the article discusses best practices for testing and validating the integration between the front-end and microservices, ensuring performance and accessibility. This case study reinforces Azure Static Web Apps as an efficient choice for modern applications utilizing microservices architecture. 
Article: Getting Started with Azure Blob Storage: A Step-by-Step Guide to Static Web Hosting Date: October 22, 2024 Author: ADEX Link: Getting Started with Azure Blob Storage: A Step-by-Step Guide to Static Web Hosting This article explores using Azure Blob Storage to host static websites, offering an alternative to Azure Static Web Apps for specific scenarios. It provides a step-by-step configuration guide, from creating a storage account to enabling the static website functionality. The content also compares the advantages and limitations of each service, emphasizing that while Azure Blob Storage is efficient for simple static sites, Azure Static Web Apps offers more robust features such as native integration with GitHub and Azure DevOps, support for serverless APIs with Azure Functions, and optimized configurations for modern development. The article serves as a guide to understanding when to use Azure Blob Storage versus Azure Static Web Apps, considering the type of application, scalability needs, and available features. Explore the full article to discover which solution best fits your projects. Article: Canonical URL Troubleshooting - Managing Canonical URLs in Static Web Apps for SEO Optimization Date: October 12, 2024 Author: Mark Hazleton Link: Canonical URL Troubleshooting - Managing Canonical URLs in Static Web Apps for SEO Optimization This article addresses the complexities of managing canonical URLs in Azure Static Web Apps to optimize the SEO of static sites. It explores common issues, such as URL variations (/projectmechanics, /projectmechanics/, /projectmechanics/index.html), which can lead to penalties for duplicate content in search engines. The author details solutions, including using canonical tags in page headers and configuring redirects in the staticwebapp.config.json file. While these approaches mitigate some challenges, they don’t fully resolve the presented issues. 
The most effective solution involved integrating Azure Static Web Apps with Cloudflare Page Rules, leveraging Cloudflare's redirection capabilities to configure permanent (301) redirects and consolidate canonical URLs. This combination ensured efficient URL management, eliminating conflicts and enhancing user and search engine experiences. This article is a must-read for developers seeking to strengthen SEO in static projects and learn how to integrate complementary solutions like Cloudflare with Azure Static Web Apps. Check out the full article for a detailed guide and useful configuration links. Article: 1.4b Deploy application with Azure App Service Part 2 Date: October 9, 2024 Author: Cloud Native Link: 1.4b Deploy application with Azure App Service Part 2 This article details the process of deploying applications using Azure App Service, covering both backend and frontend components, with a focus on ensuring seamless communication between them. While the main highlight is using the Maven Azure Web App Plugin for Java applications, the content is also relevant for developers interested in integration with Azure Static Web Apps. Highlights include: Preparation and Configuration: How to prepare applications for deployment by creating packages (WAR and ZIP) and properly configuring the pom.xml for Maven. Backend Deployment: Using Maven to create and/or update the App Service automatically. Frontend Deployment: Configuring and deploying a ReactJS application, emphasizing using Azure CLI commands to manage services, set up startup files, and restart the app to apply changes. Verification and Testing: Guidelines to ensure deployed services work as expected and to debug issues like browser caching. Resource Cleanup: Instructions on removing resources to avoid unnecessary costs after testing. 
The article offers valuable insights into using Azure Static Web Apps for integrated application front-ends, mentioning the importance of features like authentication and serverless API support for modern applications. Developers can explore the synergy between Azure App Service and Azure Static Web Apps to maximize project efficiency. For more details, read the full article and explore the links to additional documentation. Conclusion October was an inspiring month, full of incredible contributions from the technical community about Azure Static Web Apps! 💙 We’d like to thank all the authors and content creators who dedicated their time to sharing their knowledge, helping to strengthen this amazing community. Every article, video, and project enriches learning and promotes the adoption of this powerful technology. If you want to learn more, check out the official documentation, explore the tutorials, and join the technical community transforming the development of static applications. 🚀 How to Participate or See Your Content Featured? Create something amazing (article, video, or project) about Azure Static Web Apps. Share it on social media with the hashtag #AzureStaticWebApps. Publish it in the official repository on GitHub and participate in the monthly discussions. If you enjoyed this article, share it with your network so more people can benefit from this content! Use the share buttons or copy the link directly. Your participation helps promote knowledge and strengthens our technical community. Let’s build a more connected and collaborative ecosystem together! 💻✨ See you in the next edition, and keep exploring the potential of Azure Static Web Apps! 👋
Getting Started with Azure Cosmos DB SDK for TypeScript/JavaScript (4.2.0)
In this blog, we will walk through how to get started with the Azure Cosmos DB SDK for TypeScript. Using the SDK, we'll cover how to set up a Cosmos DB client, interact with containers and items, and perform basic CRUD operations such as creating, updating, querying, and deleting items. By the end of this tutorial, you'll have a solid understanding of how to integrate Azure Cosmos DB into your TypeScript applications. What is an SDK? An SDK (Software Development Kit) is a collection of software development tools, libraries, and documentation that helps developers integrate and interact with a service or platform. In this case, the Azure Cosmos DB SDK for JavaScript/TypeScript provides a set of tools to interact with the Cosmos DB service, making it easier to perform operations like database and container management, data insertion, querying, and more. What is the Azure Cosmos DB Client Library for JavaScript/TypeScript? The Azure Cosmos DB Client Library for JavaScript/TypeScript is a package that allows developers to interact with Azure Cosmos DB through an easy-to-use API. It supports operations for creating databases, containers, and documents, as well as querying and updating documents. For our example, we will be using the SQL API, which is the most widely used API in Cosmos DB, and will show how to use the SDK for basic CRUD operations. To get started, make sure to install the SDK by running: npm i @azure/cosmos Prerequisites Before we can start interacting with Cosmos DB, we need to make sure we have the following prerequisites in place: 1. Azure Subscription You need an active Azure subscription. If you don’t have one, you can sign up for a free Azure account, or use Azure for Students to get $100 in Azure credit. 2. Azure Cosmos DB Account To interact with Cosmos DB, you need an Azure Cosmos DB account. Create one from the Azure Portal and keep the Endpoint URL and Primary Key handy.
If you don't know how to do so, check out this blog: Getting started with Azure Cosmos Database (A Deep Dive). Overview of Cosmos Client Concepts Before diving into code, let's briefly go over the essential concepts you will interact with in Azure Cosmos DB. 1. Database A Database is a container for data in Cosmos DB. You can think of it as a high-level entity under which collections (containers) are stored. Use client.databases for creating new databases and reading/querying all databases, and client.database("<db id>") for working with a specific database. 2. Container A Container (formerly known as a collection) is a logical unit for storing items (documents). In Cosmos DB, each container is independent, and the items within a container are stored as JSON-like documents. Use database.containers for creating new containers and reading/querying all containers, and database.container(id) for reading, replacing, or deleting a specific, existing container by id, e.g. database.container(id).read(). 3. Partition Key A Partition Key is used to distribute data across multiple physical partitions. When you insert data into a container, you must define a partition key. This helps Cosmos DB scale and optimize read and write operations. 4. Item An Item in Cosmos DB is a single piece of data that resides in a container. It is typically stored as a JSON document, and each item must have a unique ID and be associated with a partition key. Use container.item(id, partitionKey) to perform operations on a specific item. read method const { resource, statusCode } = await usersContainer.item(id, id).read<TUser>(); delete method const { statusCode } = await usersContainer.item(id, id).delete() 5. Items The items property on a container is used for operations that create new items or read/query many items at once.
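To make the item-versus-items split concrete, here is a toy in-memory stand-in. This is NOT the real @azure/cosmos SDK: the method names mirror it, but everything here is synchronous and hypothetical, whereas the real SDK methods are async, must be awaited, and talk to the Cosmos DB service.

```javascript
// Toy in-memory stand-in for a Cosmos DB container, illustrating only the
// API shape: item(id, pk) targets ONE existing item, items targets MANY.
class ToyContainer {
  constructor() {
    this.store = new Map();
  }

  // container.item(id, partitionKey): operations on a single existing item
  item(id, _partitionKey) {
    const store = this.store;
    return {
      read() {
        const resource = store.get(id);
        return { resource, statusCode: resource ? 200 : 404 };
      },
      delete() {
        return { statusCode: store.delete(id) ? 204 : 404 };
      },
    };
  }

  // container.items: operations across many items (upsert, read all, ...)
  get items() {
    const store = this.store;
    return {
      upsert(doc) {
        store.set(doc.id, doc);
        return { resource: doc };
      },
      readAll() {
        return { fetchAll: () => ({ resources: [...store.values()] }) };
      },
    };
  }
}

// Usage mirroring the real (async) examples that follow:
const usersContainer = new ToyContainer();
usersContainer.items.upsert({ id: "1", fullname: "Ada" });
const { resource } = usersContainer.item("1", "1").read();
console.log(resource.fullname); // Ada
```

With the shape in mind, the real SDK calls below read the same way, just with await in front of them.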
query method const { resources } = await usersContainer.items.query(querySpec).fetchAll(); read all items method const { resources } = await usersContainer.items.readAll<TUser[]>().fetchAll(); upsert (update or insert if item doesn't exist) const { resource } = await usersContainer.items.upsert<Partial<TUser>>(user); Environment Variables Setup Create a .env file in the root directory of your project with the following contents: NODE_ENV=DEVELOPMENT AZURE_COSMOS_DB_ENDPOINT=https://<your-cosmos-db-account>.documents.azure.com:443 AZURE_COSMOS_DB_KEY=<your-primary-key> AZURE_COSMOS_DB_DATABASE_NAME=<your-database-name> Let's set up a simple node app Initialize the Project Create a new directory for your project and navigate into it: mkdir simple-node-app cd simple-node-app Initialize a new Node.js project with default settings: npm init -y Install Dependencies Install the necessary runtime dependencies, along with TypeScript tooling: npm install @azure/cosmos zod uuid dotenv npm install --save-dev typescript rimraf tsx @types/node Configure TypeScript Create a tsconfig.json file in the root of your project with the following content: { "compilerOptions": { /* Base Options: */ "target": "es2022", "esModuleInterop": true, "skipLibCheck": true, "moduleResolution": "nodenext", "resolveJsonModule": true, /* Strictness */ "strict": true, "allowUnreachableCode": false, "noUnusedLocals": true, // "noUnusedParameters": true, "strictBindCallApply": true, /* If transpiling with TypeScript: */ "module": "NodeNext", "outDir": "dist", "rootDir": "./", "lib": [ "ES2022" ], }, "include": [ "./*.ts" ], "exclude": [ "node_modules", "dist" ] } Loading Environment Variables Create env.ts to securely load and type-check our environment variables using zod. Add the code below: //./env.ts import dotenv from 'dotenv'; import { z } from 'zod'; // Load environment variables from .env file dotenv.config(); // Define the environment schema const EnvSchema = z.object({ // Node Server Configuration NODE_ENV:
z.enum(['PRODUCTION', 'DEVELOPMENT']).default('DEVELOPMENT'), // CosmosDB Configuration AZURE_COSMOS_DB_ENDPOINT: z.string({ required_error: "AZURE_COSMOS_DB_ENDPOINT is required", invalid_type_error: "AZURE_COSMOS_DB_ENDPOINT must be a string", }), AZURE_COSMOS_DB_KEY: z.string({ required_error: "AZURE_COSMOS_DB_KEY is required", invalid_type_error: "AZURE_COSMOS_DB_KEY must be a string", }), AZURE_COSMOS_DB_DATABASE_NAME: z.string({ required_error: "AZURE_COSMOS_DB_DATABASE_NAME is required", invalid_type_error: "AZURE_COSMOS_DB_DATABASE_NAME must be a string", }), }); // Parse and validate the environment variables export const env = EnvSchema.parse(process.env); // Configuration object consolidating all settings const config = { nodeEnv: env.NODE_ENV, cosmos: { endpoint: env.AZURE_COSMOS_DB_ENDPOINT, key: env.AZURE_COSMOS_DB_KEY, database: env.AZURE_COSMOS_DB_DATABASE_NAME, containers: { users: 'usersContainer', }, }, }; export default config; Securely Setting Up Cosmos Client Instance To interact with Cosmos DB, we need to securely set up a CosmosClient instance. Here's how to initialize the client using environment variables for security. Create cosmosClient.ts and add the code below: // ./cosmosClient.ts import { PartitionKeyDefinitionVersion, PartitionKeyKind, Database, CosmosClient, Container, CosmosDbDiagnosticLevel, ErrorResponse, RestError, AbortError, TimeoutError } from '@azure/cosmos'; import config from './env'; let client: CosmosClient; let database: Database; let usersContainer: Container; async function initializeCosmosDB(): Promise<void> { try { // Create a new CosmosClient instance client = new CosmosClient({ endpoint: config.cosmos.endpoint, key: config.cosmos.key, diagnosticLevel: config.nodeEnv === 'PRODUCTION' ?
CosmosDbDiagnosticLevel.info : CosmosDbDiagnosticLevel.debug }); // Create or get the database const { database: db } = await client.databases.createIfNotExists({ id: config.cosmos.database }); database = db; console.log(`Database '${config.cosmos.database}' initialized.`); // Initialize containers usersContainer = await createUsersContainer(); console.log('Cosmos DB initialized successfully.'); } catch (error: any) { return handleCosmosError(error); } } // Create the users container async function createUsersContainer(): Promise<Container> { const containerDefinition = { id: config.cosmos.containers.users, partitionKey: { paths: ['/id'], version: PartitionKeyDefinitionVersion.V2, kind: PartitionKeyKind.Hash, }, }; try { const { container } = await database.containers.createIfNotExists(containerDefinition); console.log(`'${container.id}' is ready.`); // const { container, diagnostics } = await database.containers.createIfNotExists(containerDefinition); // diagnostics.clientConfig contains aggregate diagnostic details for the client configuration // diagnostics.diagnosticNode is intended for debugging non-production environments only return container; } catch (error: any) { return handleCosmosError(error); } } // Getter functions for containers function getUsersContainer(): Container { if (!usersContainer) { throw new Error('Users container is not initialized.'); } return usersContainer; } const handleCosmosError = (error: any) => { if (error instanceof RestError) { throw new Error(`error: ${error.name}, message: ${error.message}`); } else if (error instanceof ErrorResponse) { throw new Error(`ErrorResponse code: ${error.code}, message: ${error.message}`); } else if (error instanceof AbortError) { throw new Error(error.message); } else if (error instanceof TimeoutError) { throw new Error(`TimeoutError code: ${error.code}, message: ${error.message}`); } else if (error.code === 409) { //if you try to create an item using an id that's already in use in
your Cosmos DB database, a 409 error is returned throw new Error('Conflict occurred while creating an item using an existing ID.'); } else { console.log(JSON.stringify(error)); throw new Error('An error occurred while processing your request.'); } }; export { initializeCosmosDB, getUsersContainer, handleCosmosError }; This code is for initializing and interacting with Azure Cosmos DB using the Azure Cosmos SDK in a Node.js environment. Here's a brief and straightforward explanation of what each part does: Imports: The code imports several classes and enums from @azure/cosmos that are needed to interact with Cosmos DB, like CosmosClient, Database, Container, and various error types. Variables: client, database, and usersContainer are declared to hold references to the Cosmos DB client, database, and a specific container for user data. initializeCosmosDB() function: Purpose: Initializes the Cosmos DB client, database, and container. Steps: Creates a new CosmosClient with credentials from the config (like endpoint, key, and diagnosticLevel). Attempts to create or retrieve a database (using createIfNotExists). Logs success and proceeds to initialize the usersContainer by calling createUsersContainer(). createUsersContainer() function: Purpose: Creates a container for storing user data in Cosmos DB with a partition key. Steps: Defines a partition key for the container (using /id as the partition key path). Attempts to create the container (or retrieves it if it already exists) with the given definition. Returns the container instance. getUsersContainer() function: Purpose: Returns the usersContainer object if it exists. Throws an error if the container is not initialized. handleCosmosError() function: Purpose: Handles errors thrown by Cosmos DB operations. Error Handling: It checks the type of error (e.g., RestError, ErrorResponse, AbortError, TimeoutError) and throws a formatted error message.
Specifically handles conflict errors (HTTP 409) when attempting to create an item with an existing ID.

Key exported functions: initializeCosmosDB: Initializes the Cosmos DB client and container. getUsersContainer: Returns the initialized users container. handleCosmosError: Custom error handler for Cosmos DB operations.

Create User Schema

This code defines data validation schemas using Zod, a TypeScript-first schema declaration and validation library. Create user.schema.ts and add the code below.

// ./user.schema.ts
import { z } from 'zod';

const coerceDate = z.preprocess((arg) => {
  if (typeof arg === 'string' || arg instanceof Date) {
    return new Date(arg);
  } else {
    return arg;
  }
}, z.date());

export const userSchema = z.object({
  id: z.string().uuid(),
  fullname: z.string(),
  email: z.string().email(),
  address: z.string(),
  createdAt: coerceDate.default(() => new Date()),
});

const responseSchema = z.object({
  statusCode: z.number(),
  message: z.string(),
});

export type TResponse = z.infer<typeof responseSchema>;
export type TUser = z.infer<typeof userSchema>;

Here's a concise breakdown of the code:

1. coerceDate schema: Purpose: Coerces a value into a Date object. How it works: z.preprocess() preprocesses the input before applying the base schema (z.date()). If the input is a string or an instance of Date, it is converted into a Date object; otherwise, the original input is returned unmodified. Use: coerceDate is used in userSchema to ensure that the createdAt field is always a valid Date object.

2. userSchema: Purpose: Defines the structure and validation rules for a user object. Fields: id: A required string that must be a valid UUID (z.string().uuid()). fullname: A required string. email: A required string that must be a valid email format (z.string().email()). address: A required string.
createdAt: A Date field that defaults to the current date/time if not provided (default(() => new Date())) and uses coerceDate for preprocessing to ensure the value is a valid Date object.

3. responseSchema: Purpose: Defines the structure of a response object. Fields: statusCode: A required number (z.number()). message: A required string.

4. Type inference: TResponse and TUser are TypeScript types automatically inferred from responseSchema and userSchema, respectively. z.infer<typeof schema> generates TypeScript types based on the Zod schema, so TResponse is inferred as { statusCode: number, message: string } and TUser as { id: string, fullname: string, email: string, address: string, createdAt: Date }.

Let's Implement Create, Read, Update and Delete

Create user.service.ts and add the code below.

// ./user.service.ts
import { SqlQuerySpec } from '@azure/cosmos';
import { getUsersContainer, handleCosmosError } from './cosmosClient';
import { TResponse, TUser } from './user.schema';

// Save user service
export const saveUserService = async (user: TUser): Promise<Partial<TUser>> => {
  try {
    const usersContainer = getUsersContainer();
    const res = await usersContainer.items.create<TUser>(user);
    if (!res.resource) {
      throw new Error('Failed to save user.');
    }
    return res.resource;
  } catch (error: any) {
    return handleCosmosError(error);
  }
};

// Update user service
export const updateUserService = async (user: Partial<TUser>): Promise<Partial<TUser>> => {
  try {
    const usersContainer = getUsersContainer();
    const { resource } = await usersContainer.items.upsert<Partial<TUser>>(user);
    if (!resource) {
      throw new Error('Failed to update user.');
    }
    return resource;
  } catch (error: any) {
    return handleCosmosError(error);
  }
};

// Fetch users service
export const fetchUsersService = async (): Promise<TUser[] | null> => {
  try {
    const usersContainer = getUsersContainer();
    const querySpec: SqlQuerySpec = {
      query: 'SELECT * FROM c ORDER BY c._ts DESC',
    };
    const { resources } = await usersContainer.items.query<TUser>(querySpec).fetchAll();
    return resources;
  } catch (error: any) {
    return handleCosmosError(error);
  }
};

// Fetch user by email service
export const fetchUserByEmailService = async (email: string): Promise<TUser | null> => {
  try {
    const usersContainer = getUsersContainer();
    const querySpec: SqlQuerySpec = {
      query: 'SELECT * FROM c WHERE c.email = @email',
      parameters: [
        { name: '@email', value: email },
      ],
    };
    const { resources } = await usersContainer.items.query<TUser>(querySpec).fetchAll();
    return resources.length > 0 ? resources[0] : null;
  } catch (error: any) {
    return handleCosmosError(error);
  }
};

// Fetch user by ID service
export const fetchUserByIdService = async (id: string): Promise<TUser | null> => {
  try {
    const usersContainer = getUsersContainer();
    const { resource } = await usersContainer.item(id, id).read<TUser>();
    if (!resource) {
      return null;
    }
    return resource;
  } catch (error: any) {
    return handleCosmosError(error);
  }
};

// Delete user by ID service
export const deleteUserByIdService = async (id: string): Promise<TResponse> => {
  try {
    const usersContainer = getUsersContainer();
    const userIsAvailable = await fetchUserByIdService(id);
    if (!userIsAvailable) {
      throw new Error('User not found');
    }
    const { statusCode } = await usersContainer.item(id, id).delete();
    if (statusCode !== 204) {
      throw new Error('Failed to delete user.');
    }
    return {
      statusCode,
      message: 'User deleted successfully',
    };
  } catch (error: any) {
    return handleCosmosError(error);
  }
};

This code provides a set of service functions that interact with the Cosmos DB container to manage user data, covering create, update, fetch, and delete operations. Here's a brief breakdown of each function:

1. saveUserService: Purpose: Saves a new user to the Cosmos DB container. How it works: Retrieves the usersContainer using getUsersContainer(), then uses items.create<TUser>(user) to create a new user document in the container.
If the operation fails (i.e., no resource is returned), it throws an error; otherwise it returns the saved user object (with partial properties). Error handling: Catches any error and passes it to handleCosmosError().

2. updateUserService: Purpose: Updates an existing user in the Cosmos DB container. How it works: Retrieves the usersContainer and uses items.upsert<Partial<TUser>>(user) to either insert or update the user data. If no resource is returned, an error is thrown; otherwise the updated user object is returned. Error handling: Catches any error and passes it to handleCosmosError().

3. fetchUsersService: Purpose: Fetches all users from the Cosmos DB container. How it works: Retrieves the usersContainer and executes a SQL query (SELECT * FROM c ORDER BY c._ts DESC) to fetch all users ordered by timestamp (_ts). If the query succeeds, it returns the list of users; any error is passed to handleCosmosError(). Return type: An array of TUser, or null if no users are found.

4. fetchUserByEmailService: Purpose: Fetches a user by their email address. How it works: Retrieves the usersContainer and executes a parameterized SQL query (SELECT * FROM c WHERE c.email = @email). If the query finds a matching user, it returns the user object; otherwise it returns null. Error handling: Catches any error and passes it to handleCosmosError().

5. fetchUserByIdService: Purpose: Fetches a user by their unique id. How it works: Retrieves the usersContainer and uses item(id, id).read<TUser>() to read a user by its id. If no user is found, it returns null; otherwise it returns the user object. Error handling: Catches any error and passes it to handleCosmosError().

6. deleteUserByIdService: Purpose: Deletes a user by their unique id. How it works: Retrieves the usersContainer and checks whether the user exists by calling fetchUserByIdService(id); if the user is not found, it throws an error. Deletes the user using item(id, id).delete().
Returns a response object with statusCode and a success message if the deletion is successful. Error handling: Catches any error and passes it to handleCosmosError().

Summary of the service functions: Save a new user (saveUserService). Update an existing user (updateUserService). Fetch all users (fetchUsersService), a user by email (fetchUserByEmailService), or a user by id (fetchUserByIdService). Delete a user by id (deleteUserByIdService).

Key points: Upsert operation (upsert): If the user exists, it is updated; if not, it is created. Error handling: All errors are passed to a centralized handleCosmosError() function, which ensures consistent error responses. Querying: Uses SQL-like queries in Cosmos DB to fetch users based on conditions (e.g., email or id). Type safety: The services rely on the TUser and TResponse types from the schema, ensuring that inputs and outputs adhere to the expected structure.

This structure makes the service functions reusable and maintainable while providing clean, type-safe interactions with Azure Cosmos DB.

Let's Create server.ts

Create server.ts and add the code below.
// ./server.ts
import { initializeCosmosDB } from "./cosmosClient";
import { v4 as uuidv4 } from 'uuid';
import { TUser } from "./user.schema";
import {
  fetchUsersService,
  fetchUserByIdService,
  deleteUserByIdService,
  saveUserService,
  updateUserService,
  fetchUserByEmailService,
} from "./user.service";

// Start server
(async () => {
  try {
    // Initialize Cosmos DB
    await initializeCosmosDB();

    // Create a new user
    const newUser: TUser = {
      id: uuidv4(),
      fullname: "John Doe",
      email: "john.doe@example.com",
      address: "Nairobi, Kenya",
      createdAt: new Date(),
    };
    const createdUser = await saveUserService(newUser);
    console.log('User created:', createdUser);

    // Fetch all users
    const users = await fetchUsersService();
    console.log('Fetched users:', users);

    let userID = "81b4c47c-f222-487b-a5a1-805463c565a0";

    // Fetch user by ID
    const user = await fetchUserByIdService(userID);
    console.log('Fetched user with ID:', user);

    // Search for user by email
    const userByEmail = await fetchUserByEmailService("john.doe@example.com");
    console.log('Fetched user with email:', userByEmail);

    // Update user
    const updatedUser = await updateUserService({ id: userID, fullname: "Jonathan Doe" });
    console.log('User updated:', updatedUser);

    // Delete user
    const deleteResponse = await deleteUserByIdService(userID);
    console.log('Delete response:', deleteResponse);
  } catch (error: any) {
    console.error('Error:', error.message);
  } finally {
    process.exit(0);
  }
})();

This server.ts file is the entry point of an application that interacts with Azure Cosmos DB to manage user data. It initializes the Cosmos DB connection and performs the CRUD operations (create, read, update, delete) on user records.

Breakdown of the code:

1. Imports: initializeCosmosDB: Initializes the Cosmos DB connection and sets up the database and container. uuidv4: Generates a unique identifier (UUID) for the id field of the user object.
TUser: Type definition for a user, ensuring that the user object follows the correct structure (from user.schema.ts). Service functions: The CRUD operations that interact with Cosmos DB (fetchUsersService, fetchUserByIdService, etc.).

2. Asynchronous IIFE (Immediately Invoked Function Expression): The entire script runs inside an async IIFE, an asynchronous function that executes immediately when the file is run.

3. Workflow: Here's what the script does step by step:

Initialize Cosmos DB: initializeCosmosDB() is called to set up the connection to Cosmos DB. If the connection is successful, it logs "Cosmos DB initialized successfully." to the console.

Create a new user: A new user is created with a unique ID (uuidv4()), full name, email, address, and a createdAt timestamp. saveUserService(newUser) saves the new user to the Cosmos DB container; if successful, the created user is logged to the console.

Fetch all users: fetchUsersService() fetches all users from Cosmos DB, and the list of users is logged to the console.

Fetch user by ID: fetchUserByIdService(userID) is called with a hardcoded userID to fetch a specific user by their unique ID. The user (if found) is logged to the console.

Fetch user by email: fetchUserByEmailService(email) finds a user by their email address ("john.doe@example.com"). The user (if found) is logged to the console.

Update user: updateUserService({ id: userID, fullname: "Jonathan Doe" }) updates the user's full name, and the updated user is logged to the console.

Delete user: deleteUserByIdService(userID) deletes the user with the specified ID. The response from the deletion (status code and message) is logged to the console.

4. Error handling: If any operation fails, the catch block catches the error and logs the error message to the console.
This ensures that any issues (e.g., database connection failure, user not found, etc.) are reported.

5. Exit process: After all operations are completed (or if an error occurs), the script exits with process.exit(0) so the Node.js process terminates cleanly.

Example output: If everything runs successfully (assuming the hardcoded userID exists in the database and the operations succeed), the console output would look like this:

Cosmos DB initialized successfully.
User created: { id: 'some-uuid', fullname: 'John Doe', email: 'john.doe@example.com', address: 'Nairobi, Kenya', createdAt: 2024-11-28T12:34:56.789Z }
Fetched users: [{ id: 'some-uuid', fullname: 'John Doe', email: 'john.doe@example.com', address: 'Nairobi, Kenya', createdAt: 2024-11-28T12:34:56.789Z }]
Fetched user with ID: { id: '81b4c47c-f222-487b-a5a1-805463c565a0', fullname: 'John Doe', email: 'john.doe@example.com', address: 'Nairobi, Kenya', createdAt: 2024-11-28T12:34:56.789Z }
Fetched user with email: { id: 'some-uuid', fullname: 'John Doe', email: 'john.doe@example.com', address: 'Nairobi, Kenya', createdAt: 2024-11-28T12:34:56.789Z }
User updated: { id: '81b4c47c-f222-487b-a5a1-805463c565a0', fullname: 'Jonathan Doe', email: 'john.doe@example.com', address: 'Nairobi, Kenya', createdAt: 2024-11-28T12:34:56.789Z }
Delete response: { statusCode: 204, message: 'User deleted successfully' }

Error Handling

The SDK generates various types of errors that can occur during an operation:

ErrorResponse is thrown if the response of an operation returns an error code of >= 400.
TimeoutError is thrown if abort is called internally due to a timeout.
AbortError is thrown if any user-passed signal caused the abort.
RestError is thrown if an underlying system call fails due to network issues.
Errors generated by any dependencies; for example, the @azure/identity package could throw CredentialUnavailableError.
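To see the branching logic of such a handler in isolation, here is a small runnable sketch. The RestError and TimeoutError classes below are simplified stand-ins, not the real @azure/cosmos types, and classify() is a hypothetical helper that mirrors the instanceof checks used throughout this article.

```typescript
// Simplified stand-ins for the SDK error classes (the real ones come from @azure/cosmos).
class RestError extends Error { name = 'RestError'; }
class TimeoutError extends Error { code = 'TimeoutError'; }

// Classify an unknown error the way handleCosmosError does: most specific checks first,
// then the numeric 409 conflict case, then a generic fallback.
function classify(error: any): string {
  if (error instanceof RestError) return `rest: ${error.message}`;
  if (error instanceof TimeoutError) return `timeout: ${error.message}`;
  if (error?.code === 409) return 'conflict: id already in use';
  return 'unknown';
}
```

Because instanceof checks run top to bottom, putting the most specific classes first ensures a TimeoutError is never swallowed by a broader branch, and plain objects carrying only a numeric code still reach the 409 case.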
Following is an example of handling errors of type ErrorResponse, TimeoutError, AbortError, and RestError.

import { ErrorResponse, RestError, AbortError, TimeoutError } from '@azure/cosmos';

const handleCosmosError = (error: any) => {
  if (error instanceof RestError) {
    throw new Error(`error: ${error.name}, message: ${error.message}`);
  } else if (error instanceof ErrorResponse) {
    throw new Error(`Error code: ${error.code}, message: ${error.message}`);
  } else if (error instanceof AbortError) {
    throw new Error(error.message);
  } else if (error instanceof TimeoutError) {
    throw new Error(`TimeoutError code: ${error.code}, message: ${error.message}`);
  } else if (error.code === 409) {
    // If you try to create an item using an id that's already in use in your Cosmos DB database, a 409 error is returned.
    throw new Error('Conflict occurred while creating an item using an existing ID.');
  } else {
    console.log(JSON.stringify(error));
    throw new Error('An error occurred while processing your request.');
  }
};

Read More

Quickstart Guide for Azure Cosmos DB JavaScript SDK v4
Best practices for JavaScript SDK in Azure Cosmos DB for NoSQL
Visit the JavaScript SDK v4 Release Notes page for the rest of our documentation and sample code.
Announcing JavaScript SDK v4 for Azure Cosmos DB