azure machine learning
26 Topics

The Future of AI: Harnessing AI for E-commerce - personalized shopping agents
Explore the development of personalized shopping agents that enhance user experience by providing tailored product recommendations based on uploaded images. Leveraging Azure AI Foundry, these agents analyze images for apparel recognition and generate intelligent product recommendations, creating a seamless and intuitive shopping experience for retail customers.

New controls for model governance and secure access to on-premises or custom VNET resources
Learn how to create an allowed model list for the Azure AI model catalog, plus a new way to access on-premises and custom VNET resources from your managed VNET for your training, fine-tuning, and inferencing scenarios.

Meta’s next generation model, Llama 3.1 405B is now available on Azure AI
Microsoft, in collaboration with Meta, is launching Llama 3.1 405B, now available via Azure AI’s Models as a Service, along with fine-tuned versions of Llama 3.1 8B and 70B. Leverage powerful AI for synthetic data generation and distillation. Access these models through Azure AI Studio and popular developer tools like prompt flow, OpenAI, LangChain, and LiteLLM. Streamline development and enhance efficiency with Azure AI.

Announcing management center and other tools to secure and govern Azure AI Foundry
We’re pleased to share new security and IT governance capabilities in Azure AI Foundry that can help organizations build and scale GenAI solutions that are secure by default, including a new management center, granular networking controls, and the general availability of data and service connections.

Accelerate enterprise GenAI application development with tracing in Azure AI Foundry
We are excited to announce the public preview of tracing in Azure AI Foundry, a powerful capability designed to enhance monitoring and debugging for your machine learning models and applications. Tracing helps you gain deeper insights into the performance and behavior of your models, so you can ensure they operate efficiently and effectively.

Enable comprehensive monitoring and analysis of your application's execution

Tracing allows you to trace application processes from input to output, review intermediate results, and measure execution times. Detailed logs for each function call in your workflow are also accessible: you can inspect the parameters, metrics, and outputs of each AI model used, for easier debugging and optimization of your application. The Azure AI Foundry SDK supports tracing to various endpoints, including local viewers (the Prompty trace viewer and the Aspire dashboard), Azure AI Foundry, and Azure Monitor Application Insights. This flexibility helps you integrate tracing with any application, facilitating testing, evaluation, and deployment across different orchestrations and existing GenAI frameworks.

Key Capabilities

Basic debugging

In situations where your application encounters an error, the trace functionality becomes extremely useful. It allows you to delve into the function causing the error, assess the frequency of exceptions, and troubleshoot using the provided exception message and stack trace.

Detailed execution logs

Tracing captures detailed traces of your model's execution, including data preprocessing, feature extraction, model inference, and post-processing steps. These details provide valuable insights into the inner workings of your models, helping you identify bottlenecks and optimize performance. For example, understanding the call flow of an application is crucial for complex AI systems where multiple components and services interact.
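To illustrate what an execution trace captures, here is a small, self-contained Python sketch: a hypothetical `traced` decorator (not part of the Azure AI Foundry SDK) that records each call's name, inputs, output, duration, and any exception — the same data a real tracer attaches to its spans.

```python
import functools
import time
import traceback

# Collected "spans" -- one record per traced function call.
SPANS = []

def traced(fn):
    """Record name, inputs, output, duration, and errors for each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__, "inputs": {"args": args, "kwargs": kwargs}}
        start = time.perf_counter()
        try:
            span["output"] = fn(*args, **kwargs)
            return span["output"]
        except Exception as exc:
            # Capture the exception message and stack trace for debugging.
            span["exception"] = repr(exc)
            span["stack"] = traceback.format_exc()
            raise
        finally:
            span["duration_ms"] = (time.perf_counter() - start) * 1000
            SPANS.append(span)
    return wrapper

@traced
def preprocess(text):
    return text.strip().lower()

@traced
def infer(prompt):
    if not prompt:
        raise ValueError("empty prompt")
    return f"echo: {prompt}"

infer(preprocess("  Hello Tracing  "))
try:
    infer("")            # raises; the span still records the exception
except ValueError:
    pass

print([s["name"] for s in SPANS])   # -> ['preprocess', 'infer', 'infer']
```

Inspecting `SPANS` afterwards shows the call order, per-call latency, and the captured `ValueError` — a toy version of the exception frequency and stack-trace drill-down described above.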
By enabling tracing, developers can identify bottlenecks, understand dependencies, and optimize the flow for better performance.

Performance metrics

In addition to execution logs, tracing collects key performance metrics, such as latency and token utilization. These metrics allow you to monitor the efficiency of your models and make data-driven decisions to improve their performance. Building monitoring dashboards with the data collected from tracing can provide real-time visibility into the system's health. These dashboards can track key performance indicators (KPIs), raise alerts on anomalies, and help ensure that the AI services are running as expected.

Error tracking

Tracing helps you identify and troubleshoot errors in your models by capturing detailed error logs. Whether it's a data preprocessing issue or a model inference error, tracing provides the information you need to diagnose and fix problems quickly. This is particularly useful for capturing runtime exceptions, such as rate limiting, which are critical for maintaining the reliability of your applications.

Evaluations and user feedback

You can attach evaluation metrics and user feedback to traces via the online evaluation capabilities in Azure AI Foundry. Online evaluation allows you to incorporate real-world performance data and user insights into your monitoring process, to assess whether your models meet the desired quality standards. The Azure AI Foundry SDK simplifies downstream evaluation, facilitating continuous improvement and validation of AI models against real-world data. Additionally, capturing user evaluations and interactions can provide insights into how users are engaging with the AI features, to inform user-centric improvements.

Visualize Traces

Azure AI Foundry provides robust tools for visualizing traces, both for local debugging and production-level monitoring. You can use these tools to gain a better understanding of your model's behavior and performance.
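Before looking at visualization, it is worth seeing how the raw metrics above roll up into dashboard KPIs. A minimal stdlib-only sketch — the span records and field names here are hypothetical, but shaped like the latency/token/error data a tracer emits:

```python
from statistics import mean

# Hypothetical span records, shaped like the data a tracer collects per request.
spans = [
    {"op": "chat",  "latency_ms": 420,  "prompt_tokens": 310, "completion_tokens": 90},
    {"op": "chat",  "latency_ms": 510,  "prompt_tokens": 280, "completion_tokens": 120},
    {"op": "chat",  "latency_ms": 1900, "prompt_tokens": 300, "completion_tokens": 95,
     "error": "429 rate limit"},
    {"op": "embed", "latency_ms": 80,   "prompt_tokens": 40,  "completion_tokens": 0},
]

def kpis(spans):
    """Aggregate raw trace data into the KPIs a monitoring dashboard would chart."""
    latencies = [s["latency_ms"] for s in spans]
    return {
        "requests": len(spans),
        "error_rate": sum("error" in s for s in spans) / len(spans),
        "mean_latency_ms": mean(latencies),
        "max_latency_ms": max(latencies),
        "total_tokens": sum(s["prompt_tokens"] + s["completion_tokens"] for s in spans),
    }

report = kpis(spans)
print(report)   # e.g. alert on error_rate, track token spend via total_tokens
```

An alerting rule on `error_rate` would flag the rate-limited request, while `total_tokens` tracks utilization — exactly the kind of anomaly detection and KPI tracking the dashboards described above provide.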
The visualization capabilities include:

Local debugging: Visualize traces during development to identify and resolve issues early, helping ensure that models are optimized before deployment.

Visualize the data via the Azure AI Foundry portal and Azure Monitor: In the post-deployment phase, developers often want to delve deeper into their applications' performance to optimize it further. For instance, you might want to monitor your GenAI application's performance, usage, and costs. In this scenario, the trace data for each request, the aggregated metrics, and user feedback become vital. Tracing integrates seamlessly with Azure Monitor, allowing you to visualize and analyze your model's performance metrics and logs using a customizable dashboard in Azure Monitor Application Insights. This integration provides a holistic view of your model's health and performance, enabling you to make informed decisions.

Getting Started

To start using tracing in Azure AI Foundry and Azure Monitor, follow these steps:

1. Log traces: Enable tracing on the Model Inference API via the Azure AI SDK.
2. Configure logging: Set up the logging configuration to capture the desired level of detail for your model's execution.
3. Enable tracing in AI Studio: In your Azure AI project, navigate to the Tracing page and enable the feature for your models.
4. Monitor and analyze: Use Azure Monitor to visualize and analyze the collected logs and metrics, gaining insights into your model's performance.

Find detailed guidance in our documentation:

Overview of tracing capabilities in Azure AI Foundry
Learn how to implement and use tracing with the Azure AI Foundry SDK
Visualize your traces
Build production-ready GenAI apps with Azure AI Foundry

Want to learn about more ways to build and monitor enterprise-ready GenAI applications?
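As a rough sketch, the Getting Started steps above might look like the following. The package and method names (azure-ai-projects, azure-monitor-opentelemetry, azure-ai-inference) reflect the public preview and may change, and the connection string is a placeholder for your own project's values:

```python
# Sketch only: preview-era SDK names, placeholder connection values.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.monitor.opentelemetry import configure_azure_monitor
from azure.ai.inference.tracing import AIInferenceInstrumentor

# 1. Connect to your Azure AI project.
project = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<region>.api.azureml.ms;<subscription-id>;<resource-group>;<project>",
)

# 2. Route traces to the Application Insights resource attached to the project.
configure_azure_monitor(
    connection_string=project.telemetry.get_connection_string()
)

# 3. Instrument the Model Inference API so every call emits a span.
AIInferenceInstrumentor().instrument()

# 4. Calls made through the project's inference client are now traced;
#    view them on the project's Tracing page or in Azure Monitor.
chat_client = project.inference.get_chat_completions_client()
```

This is a setup fragment that requires a live Azure AI project and credentials to run; consult the documentation links above for the authoritative, current API.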
Here are other exciting announcements from Microsoft Ignite to support your GenAIOps workflows:

New ways to evaluate generative AI outputs for quality and safety
New ways to monitor performance with Azure AI Foundry and Azure Monitor

Whether you’re joining in person or online, we can’t wait to see you at Microsoft Ignite 2024. We’ll share the latest from Azure AI and go deeper into best practices for GenAIOps with these sessions:

Microsoft Ignite Keynote
Multi-agentic GenAIOps from prototype to production with dev tools
Azure AI and the dev toolchain you need to infuse AI in all your apps

Retrieval Augmented Fine Tuning: Use GPT-4o to fine tune GPT-4o mini for domain specific application
Are you a developer looking to enhance your conversational assistant's performance? Struggling with domain adaptation and incorrect answers? Look no further! Our self-paced, hands-on workshop on Retrieval Augmented Fine-Tuning (RAFT) using Azure OpenAI is here to help you take your AI projects to the next level: https://aka.ms/aoai-raft-workshop. In today's fast-paced world, having a conversational assistant that can accurately answer domain-specific questions is crucial. Whether you're working in banking, healthcare, or any other industry, RAFT can help you fine-tune your language models to provide precise and relevant answers. This workshop is designed to be educational and practical, giving you the tools and knowledge to implement RAFT effectively at your own pace.

The Future of AI: The paradigm shifts in Generative AI Operations
Dive into the transformative world of Generative AI Operations (GenAIOps) with Microsoft Azure. Discover how businesses are overcoming the challenges of deploying and scaling generative AI applications. Learn about the innovative tools and services Azure AI offers, and how they empower developers to create high-quality, scalable AI solutions. Explore the paradigm shift from MLOps to GenAIOps and see how continuous improvement practices ensure your AI applications remain cutting-edge. Join us on this journey to harness the full potential of generative AI and drive operational excellence.