rag
22 Topics

Building Intelligent Applications with Local RAG in .NET and Phi-3: A Hands-On Guide
Let's learn how to do Retrieval Augmented Generation (RAG) using local resources in .NET! In this post, we’ll show you how to combine the Phi-3 language model, Local Embeddings, and Semantic Kernel to create a RAG scenario.
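The post itself works in C# with Semantic Kernel; as a rough orientation, the sketch below shows the same retrieve-then-generate flow in Python under a few assumptions: the all-MiniLM-L6-v2 embedding model, the tiny in-memory document set, and the final print are illustrative stand-ins, and in the post the assembled prompt would be sent to a locally hosted Phi-3 instead of being printed.

```python
# Minimal local-RAG sketch (illustrative, not the post's code): embed documents,
# retrieve the closest ones, and ground the prompt before handing it to a local
# model such as Phi-3. The embedding model and documents are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # runs fully locally

docs = [
    "Semantic Kernel is an SDK for integrating language models into apps.",
    "Phi-3 is a small language model that can run on local hardware.",
    "RAG retrieves relevant text and adds it to the prompt before generation.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed local embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec          # cosine similarity, since vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

question = "What does RAG add to a language model?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # in the post, this grounded prompt would go to Phi-3 via Semantic Kernel
```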
Optimizing Retrieval for RAG Apps: Vector Search and Hybrid Techniques

In this blog we dive into optimizing our search strategy with hybrid search techniques. Common practices for implementing the retrieval step in retrieval-augmented generation (RAG) applications are: keyword search, vector search, hybrid search (keyword + vector), and hybrid search with a semantic ranker.
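One common way to implement the hybrid step is to run the keyword and vector queries separately and then fuse their rankings. The sketch below uses Reciprocal Rank Fusion (RRF), a widely used fusion method; the document IDs and both result lists are made up for illustration and are not taken from the post.

```python
# Hybrid search sketch: fuse a keyword ranking and a vector ranking with
# Reciprocal Rank Fusion (RRF). The IDs and rankings are illustrative only.
def rrf(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Score each doc by the sum of 1/(k + rank) over every ranking it appears in."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

keyword_hits = ["doc7", "doc2", "doc9"]   # e.g. from BM25 / full-text search
vector_hits = ["doc2", "doc5", "doc7"]    # e.g. from nearest-neighbour vector search

for doc_id, score in rrf([keyword_hits, vector_hits]):
    print(f"{doc_id}: {score:.4f}")       # doc2 and doc7 rise to the top
```

Documents that appear high in both lists accumulate the largest fused scores, which is why a hybrid result set tends to be more robust than either ranking alone; a semantic ranker can then re-order the fused top results.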
Building your own copilot – yes, but how? (Part 1 of 2)

Are you interested in building your own AI co-pilot? Check out the first of a two-part blog post from Carlotta Castelluccio that covers the basics of creating a virtual assistant that can help you with tasks like scheduling, email management, and more. Learn about the tools and technologies involved, including Microsoft's Bot Framework and Language Understanding Intelligent Service (LUIS). Whether you're a software developer or just curious about the possibilities of AI, this post is a great introduction to building your own co-pilot.
Tiny But Mighty: Unleashing the Power of Small Language Models 🚀

While Large Language Models (LLMs) like GPT-4 dominate headlines with their extensive capabilities, they often come at the cost of high computational requirements and complexity. For developers and organizations looking to implement AI solutions on edge devices or with limited resources, Small Language Models (SLMs) are emerging as a practical alternative. SLMs are not just "smaller" versions of their larger counterparts: they're designed to be faster, more efficient, and adaptable for specific tasks. With fewer parameters and lower computational needs, SLMs open the door to deploying AI on mobile devices, IoT systems, and edge environments without compromising performance.

What You Stand to Learn 🧠
- Introduction to Microsoft's AI Ecosystem: Discover Microsoft's end-to-end AI development tools, from Azure AI Services to ONNX Runtime, enabling efficient and secure deployment of AI models across cloud and edge environments.
- The Advantages of SLMs over LLMs: SLMs are game-changers for edge AI applications, providing faster training and inference times, reduced energy costs, and scalability across diverse devices.
- Hands-On with Phi-3 and ONNX Runtime: Experience live demonstrations of SLMs in action with tools like Phi-3 and ONNX Runtime, showcasing how to fine-tune and deploy models on mobile devices, IoT, and hybrid cloud environments.
- Responsible AI Practices: Understand how to safeguard your AI applications with Microsoft's Responsible AI toolkit, ensuring ethical and trustworthy deployments.

Watch the Full Session 👨‍💻
📅 Date: December 12, 2024
⏰ Time: 4 PM GMT | 5 PM CEST | 8 AM PT | 11 AM ET | 7 PM EAT
A session packed with live demos, practical examples, and Q&A opportunities.
Register NOW | Events | Microsoft Reactor

Agenda 🔍
- Introduction (5 min): A brief overview of the session and its focus on SLMs and LLMs.
- Microsoft AI Tooling (5 min): Explore the latest tools like Azure AI Services, Azure Machine Learning, and Responsible AI Tooling.
- How to Choose the Right Model (10 min): Key considerations such as performance, customizability, and ethical implications.
- Comparing SLMs vs LLMs (10 min): The strengths, weaknesses, and best use cases for both Small and Large Language Models.
- Deploying Models at the Edge (10 min): Insights into optimizing AI for mobile, IoT, and edge devices.
- Q&A: Addressing participant questions about AI development and deployment.
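The session demos cover Phi-3 with ONNX Runtime; as a rough idea of what edge deployment boils down to, the sketch below loads an exported ONNX model on the CPU provider and runs a single inference. The file name "slm-encoder.onnx" and its input shape are assumptions for illustration; a real Phi-3 deployment would add tokenization and a token-by-token generation loop.

```python
# Minimal ONNX Runtime sketch (illustrative assumptions, not the session's demo code):
# load an exported model and run one inference on CPU, the kind of lightweight
# footprint that makes SLMs practical on mobile, IoT, and edge hardware.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("slm-encoder.onnx",           # assumed model file
                               providers=["CPUExecutionProvider"])

# Inspect what the exported graph expects before feeding it anything.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)

input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 16), dtype=np.int64)   # assumed [batch, sequence] of token ids
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```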
Why Should Business Adopt RAG and migrate from LLMs?

In this blog we discuss the importance of migrating your product or startup project from LLMs to RAG. Adopting RAG empowers businesses to leverage external knowledge, enhance accuracy, and create more robust AI applications. It’s a strategic move toward building intelligent systems that bridge the gap between generative capabilities and authoritative information. The topics covered are: a brief history of AI, what Large Language Models (LLMs) are, the limitations of LLMs, how we can incorporate domain knowledge, what Retrieval Augmented Generation (RAG) is, and what robust retrieval for RAG apps looks like. Once we are done with these concepts, I hope to convince you to adopt RAG in your project.
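On the "how can we incorporate domain knowledge" question, the usual first step in a RAG pipeline is splitting source documents into overlapping chunks so that each passage can be embedded and retrieved on its own. The sketch below is a minimal version of that step; the chunk and overlap sizes are arbitrary choices, not recommendations from the blog.

```python
# Sketch of the first step in grounding a model with domain knowledge:
# split source documents into overlapping chunks so each one can be
# embedded and retrieved independently. Sizes here are arbitrary choices.
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 80) -> list[str]:
    """Split text into ~chunk_size-character chunks with some overlap,
    so facts that straddle a boundary still appear intact in one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

manual = "..." * 500  # stand-in for an internal knowledge-base document
pieces = chunk_text(manual)
print(f"{len(pieces)} chunks, first chunk has {len(pieces[0])} characters")
```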