Pose Estimation with the AI Dev Gallery
What's Going On Here?

This blog post is the first in an upcoming series that will spotlight the local AI samples contained in the new AI Dev Gallery. The Gallery is a preview project that aims to showcase local AI scenarios on Windows and to give developers the guidance they need to enable those scenarios themselves. The Gallery is open-source and contains a wide selection of different models and samples, including text, image, audio, and video use cases. In addition to being able to see a given model in action, each sample contains a source code view and a button to export the sample directly to a new Visual Studio project. The Gallery is available on the Microsoft Store and is entirely open sourced on GitHub.

For this first sample spotlight, we will be taking a look at one of my favorite scenarios: Human Pose Estimation with HRNet. This sample is enabled by ONNX Runtime, and depending on the processor in your Windows device, this sample supports running on the CPU and NPU. I'll cover how to check which hardware is supported and how to switch between them later in the post.

Pose Estimation Demo

This sample takes in an uploaded photo and renders pose estimations onto the main human figure in the photo. It will render connections between the torso and limbs, along with five points corresponding to key facial features (eyes, nose, and ears). Before diving into the code for this sample, here's a quick video example.

Let's get right to the code to see how this is implemented.

Code Walkthrough

This walkthrough will focus on essential code and may gloss over some UI logic and helper functions. The full code for this sample can be browsed in depth in the AI Dev Gallery itself or in the GitHub repository.

When this sample is first opened, it will make an initial call to LoadModelAsync, which looks like this:

protected override async Task LoadModelAsync(SampleNavigationParameters sampleParams)
{
    // Tell our inference session where our model lives and which hardware to run it on
    await InitModel(sampleParams.ModelPath, sampleParams.HardwareAccelerator);
    sampleParams.NotifyCompletion();

    // Make first call to inference once model is loaded
    await DetectPose(Path.Join(Windows.ApplicationModel.Package.Current.InstalledLocation.Path, "Assets", "pose_default.png"));
}

In this function, a ModelPath and HardwareAccelerator are passed into our InitModel function, which handles instantiating an ONNX Runtime InferenceSession with our model location and the hardware that inference will be performed on. You can jump to Switching to NPU Execution later in this post for more in-depth information on how the InferenceSession is instantiated. Once the model has finished initializing, this function calls for an initial round of inference via DetectPose on a default image.

Preprocessing, Calling For Inference, and Postprocessing Output

The inference logic, along with the required preprocessing and postprocessing, takes place in the DetectPose function. This is a pretty long function, so let's go through it piece by piece.
First, this function checks that it was passed a valid file path and performs some updates to our XAML:

private async Task DetectPose(string filePath)
{
    // Check if the passed in file path exists, and return if not
    if (!Path.Exists(filePath))
    {
        return;
    }

    // Update XAML to put the view into the "Loading" state
    Loader.IsActive = true;
    Loader.Visibility = Visibility.Visible;
    UploadButton.Visibility = Visibility.Collapsed;

    DefaultImage.Source = new BitmapImage(new Uri(filePath));

Next, the input image is loaded into a Bitmap and then resized to the expected input size of the HRNet model (256x192) with the helper function ResizeBitmap:

// Load bitmap from image filepath
using Bitmap originalImage = new(filePath);

// Store expected input dimensions in variables, as these will be used later
int modelInputWidth = 256;
int modelInputHeight = 192;

// Resize Bitmap to expected dimensions with ResizeBitmap helper
using Bitmap resizedImage = BitmapFunctions.ResizeBitmap(originalImage, modelInputWidth, modelInputHeight);

Once the image is stored in a bitmap of the proper size, we create a Tensor of dimensionality 1x3x256x192 that will represent the image. Each dimension, in order, corresponds to these values:

Batch Size: Our first value of 1 is just the number of inputs that are being processed. This implementation processes a single image at a time, so the batch size is just one.
Color Channels: The next dimension has a value of 3 and corresponds to each of the typical color channels: red, green, and blue. This will define the color of each pixel in the image.
Width: The next value of 256 (passed as modelInputWidth) is the pixel width of our image.
Height: The last value of 192 (passed as modelInputHeight) is the pixel height of our image.

Taken as a whole, this tensor represents a single image where each pixel in that image is defined by an X (width) and Y (height) pixel value and three color values (red, green, blue). Also, it is good to note that the processing and inference section of this function is run in a Task to prevent the UI from becoming blocked:

// Run our processing and inference logic as a Task to prevent the UI from being blocked
var predictions = await Task.Run(() =>
{
    // Define a tensor that represents every pixel of a single image
    Tensor<float> input = new DenseTensor<float>([1, 3, modelInputWidth, modelInputHeight]);

To improve the quality of the input, instead of just passing the original pixel values into the tensor, the pixel values are normalized with the PreprocessBitmapWithStdDev helper function. This function uses the mean of each RGB value and the standard deviation (how far a value typically varies away from its mean) to "level out" outlier color values. You can think of it as a way of preventing images with really dramatic color differences from confusing the model. This step does not affect the dimensionality of the input; it only adjusts the values that will be stored in the tensor:

// Normalize our input and store it in the "input" tensor. Dimension is still 1x3x256x192
input = BitmapFunctions.PreprocessBitmapWithStdDev(resizedImage, input);
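The helper itself lives in BitmapFunctions in the Gallery source and isn't reproduced in this post. As a rough illustration only, a minimal sketch of this kind of per-channel normalization might look something like the following. The mean and standard deviation constants are the common ImageNet values and are an assumption here, not necessarily what the sample uses, and the method name is hypothetical:

// Hypothetical sketch of a PreprocessBitmapWithStdDev-style helper (not the Gallery's actual code).
// Assumes: using System.Drawing; using Microsoft.ML.OnnxRuntime.Tensors;
// Follows the 1 x 3 x width x height tensor layout used above: (pixel / 255 - mean) / stddev per channel.
private static Tensor<float> PreprocessSketch(Bitmap image, Tensor<float> input)
{
    float[] mean = { 0.485f, 0.456f, 0.406f };   // assumed R, G, B channel means
    float[] stddev = { 0.229f, 0.224f, 0.225f }; // assumed R, G, B channel standard deviations

    for (int x = 0; x < image.Width; x++)
    {
        for (int y = 0; y < image.Height; y++)
        {
            Color pixel = image.GetPixel(x, y);

            // Scale each channel to [0,1], subtract the mean, and divide by the standard deviation
            input[0, 0, x, y] = ((pixel.R / 255f) - mean[0]) / stddev[0];
            input[0, 1, x, y] = ((pixel.G / 255f) - mean[1]) / stddev[1];
            input[0, 2, x, y] = ((pixel.B / 255f) - mean[2]) / stddev[2];
        }
    }

    return input;
}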
There is one last small step of setup before the input is passed to the InferenceSession, as ONNX expects a certain input format for inference. A List of type NamedOnnxValue is created with only one entry representing the input tensor that was just processed. Each NamedOnnxValue expects a metadata name (which is grabbed from the model itself using the InferenceSession) and a value (the tensor that was just processed):

// Snag the input metadata name from the inference session
var inputMetadataName = _inferenceSession!.InputNames[0];

// Create a list of NamedOnnxValues, with one entry
var onnxInputs = new List<NamedOnnxValue>
{
    // Call NamedOnnxValue.CreateFromTensor and pass in input metadata name and input tensor
    NamedOnnxValue.CreateFromTensor(inputMetadataName, input)
};

The onnxInputs list that was just created is passed to InferenceSession.Run. It returns a collection of DisposableNamedOnnxValues to be processed:

// Call Run to perform inference
using IDisposableReadOnlyCollection<DisposableNamedOnnxValue> results = _inferenceSession!.Run(onnxInputs);

The output of the HRNet model is a bit more verbose than a list of coordinates that correspond with human pose key points (like left knee or right shoulder). Instead of exact predictions, it returns a heatmap for every pose key point that scores each location on the image with the probability that a certain joint exists there. So, there's a bit more work to do to get points that can be placed on an image. First, the function sets up the necessary values for postprocessing:

// Fetch the heatmaps list from the inference results
var heatmaps = results[0].AsTensor<float>();

// Get the output name from the inference session
var outputName = _inferenceSession!.OutputNames[0];

// Use the output name to get the dimensions of the output from the inference session
var outputDimensions = _inferenceSession!.OutputMetadata[outputName].Dimensions;

// Finally, get the output width and height from those dimensions
float outputWidth = outputDimensions[2];
float outputHeight = outputDimensions[3];

The output width and height are passed, along with the heatmaps list and the original image dimensions, to the PostProcessResults helper function. This function does two things with each heatmap:

It iterates over every value in the heatmap to find the coordinates where the probability is highest for each pose key point.
It scales that value back to the size of the original image, since the image was resized when it was passed into inference. This is why the original image dimensions were passed.

From this function, a list of tuples containing the X and Y location of each key point is returned, so that they can be properly rendered onto the image:

// Post process heatmap results to get key point coordinates
List<(float X, float Y)> keypointCoordinates = PoseHelper.PostProcessResults(heatmaps, originalImage.Width, originalImage.Height, outputWidth, outputHeight);

// Return those coordinates from the task
return keypointCoordinates;
});
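The PostProcessResults helper isn't shown in this post, but conceptually it is an argmax over each heatmap followed by a rescale back to the original image size. A rough, hypothetical sketch of that logic (not the Gallery's actual implementation, it assumes the heatmap layout implied by the outputDimensions[2]/[3] usage above and ignores refinements like sub-pixel offsets) might look like this:

// Hypothetical sketch of PostProcessResults-style logic; the real helper lives in PoseHelper.
public static List<(float X, float Y)> PostProcessSketch(
    Tensor<float> heatmaps, float originalWidth, float originalHeight, float outputWidth, float outputHeight)
{
    var keypointCoordinates = new List<(float X, float Y)>();
    int numKeypoints = heatmaps.Dimensions[1]; // one heatmap per pose key point

    for (int k = 0; k < numKeypoints; k++)
    {
        // Find the heatmap location with the highest score for this key point
        float maxValue = float.MinValue;
        int maxX = 0, maxY = 0;

        for (int x = 0; x < (int)outputWidth; x++)
        {
            for (int y = 0; y < (int)outputHeight; y++)
            {
                float value = heatmaps[0, k, x, y];
                if (value > maxValue)
                {
                    maxValue = value;
                    maxX = x;
                    maxY = y;
                }
            }
        }

        // Scale the heatmap coordinate back up to the original image dimensions
        keypointCoordinates.Add((maxX * originalWidth / outputWidth, maxY * originalHeight / outputHeight));
    }

    return keypointCoordinates;
}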
Next up is rendering.

Rendering Pose Predictions

Rendering is handled by the RenderPredictions helper function, which takes in the original image, the predictions that were generated, and a marker ratio to define how large to draw the predictions on the image. Note that this code is still being called from the DetectPose function:

using Bitmap output = PoseHelper.RenderPredictions(originalImage, predictions, .02f);

Rendering predictions is pretty key to the pose estimation flow, so let's dive into this function. This function will draw two things:

Red ellipses at each pose key point (right knee, left eye, etc.)
Blue lines connecting joint key points (right knee to right ankle, left shoulder to left elbow, etc.)

Face key points (eyes, nose, ears) do not have any connections, and will just have ellipses rendered for them. The first thing the function does is set up the Graphics, Pen, and Brush objects necessary for drawing:

public static Bitmap RenderPredictions(Bitmap image, List<(float X, float Y)> keypoints, float markerRatio, Bitmap? baseImage = null)
{
    // Create a graphics object from the image
    using (Graphics g = Graphics.FromImage(image))
    {
        // Average out width and height of image.
        // Ignore baseImage portion, it is used by another sample.
        var averageOfWidthAndHeight = baseImage != null ? baseImage.Width + baseImage.Height : image.Width + image.Height;

        // Get the marker size from the average dimension value and the marker ratio
        int markerSize = (int)(averageOfWidthAndHeight * markerRatio / 2);

        // Create a Red brush for the keypoints and a Blue pen for the connections
        Brush brush = Brushes.Red;
        using Pen linePen = new(Color.Blue, markerSize / 2);

Next, a list of (int, int) tuples is instantiated that represents each connection. Each tuple has a StartIdx (where the connection starts, like left shoulder) and an EndIdx (where the connection ends, like left elbow). These indexes are always the same based on the output of the pose model and move from top to bottom on the human figure. As a result, you'll notice that indexes 0-4 are skipped, as those indexes represent the face key points, which don't have any connections:

// Create a list of index tuples that represents each pose connection, face key points are excluded.
List<(int StartIdx, int EndIdx)> connections =
[
    (5, 6),   // Left shoulder to right shoulder
    (5, 7),   // Left shoulder to left elbow
    (7, 9),   // Left elbow to left wrist
    (6, 8),   // Right shoulder to right elbow
    (8, 10),  // Right elbow to right wrist
    (11, 12), // Left hip to right hip
    (5, 11),  // Left shoulder to left hip
    (6, 12),  // Right shoulder to right hip
    (11, 13), // Left hip to left knee
    (13, 15), // Left knee to left ankle
    (12, 14), // Right hip to right knee
    (14, 16)  // Right knee to right ankle
];
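For reference, these indexes follow the standard 17-point COCO keypoint ordering that HRNet-style models are typically trained with. The mapping below is an assumption based on the connection comments above rather than something spelled out in the sample, but it can be handy to keep next to the connections list:

// Assumed COCO-style keypoint index order (0-16); indexes 0-4 are the face points with no connections.
private static readonly string[] KeypointNames =
[
    "nose",           // 0
    "left eye",       // 1
    "right eye",      // 2
    "left ear",       // 3
    "right ear",      // 4
    "left shoulder",  // 5
    "right shoulder", // 6
    "left elbow",     // 7
    "right elbow",    // 8
    "left wrist",     // 9
    "right wrist",    // 10
    "left hip",       // 11
    "right hip",      // 12
    "left knee",      // 13
    "right knee",     // 14
    "left ankle",     // 15
    "right ankle"     // 16
];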
Next, for each tuple in the connections list, a blue line representing a connection is drawn on the image with DrawLine. It takes in the Pen that was created, along with start and end coordinates from the keypoints list that was passed into the function:

// Iterate over connections with a foreach loop
foreach (var (startIdx, endIdx) in connections)
{
    // Store keypoint start and end values in tuples
    var (startPointX, startPointY) = keypoints[startIdx];
    var (endPointX, endPointY) = keypoints[endIdx];

    // Pass those start and end coordinates, along with the Pen, to DrawLine
    g.DrawLine(linePen, startPointX, startPointY, endPointX, endPointY);
}

Next, the same thing is done for the red ellipses representing the key points. The entire keypoints list is iterated over because every key point gets an indicator regardless of whether or not it was included in a connection. The red ellipses are drawn second as they should be rendered on top of the blue lines representing connections:

// Iterate over keypoints with a foreach loop
foreach (var (x, y) in keypoints)
{
    // Draw an ellipse using the red brush, the x and y coordinates, and the marker size
    g.FillEllipse(brush, x - markerSize / 2, y - markerSize / 2, markerSize, markerSize);
}

Now just return the image:

return image;

Jumping back over to DetectPose, the last thing left to do is to update the UI with the rendered predictions on the image:

// Convert the output to a BitmapImage
BitmapImage outputImage = BitmapFunctions.ConvertBitmapToBitmapImage(output);

// Enqueue all our UI updates to ensure they don't happen off the UI thread.
DispatcherQueue.TryEnqueue(() =>
{
    DefaultImage.Source = outputImage;
    Loader.IsActive = false;
    Loader.Visibility = Visibility.Collapsed;
    UploadButton.Visibility = Visibility.Visible;
});

That's it! The final output looks like this:

Switching to NPU Execution

This sample also supports running on the NPU, in addition to the CPU, if you have met the correct device requirements. You will need a Windows device with a Qualcomm NPU to run NPU samples in the Gallery. The easiest way to check if your device is NPU capable is within the Gallery itself. Using the Select Model dropdown, you can see which execution providers are supported on your device. I'm on a device with a Qualcomm NPU, so the Gallery is giving the option to run the sample on both CPU and NPU.

How Gallery Samples Handle Switching Between Execution Providers

When the pose sample is selected with a specific hardware accelerator, that information is passed to the InitModel function that handles how the inference session is instantiated. It will specify the Qualcomm QNN execution provider that enables NPU execution. It looks like this:

private Task InitModel(string modelPath, HardwareAccelerator hardwareAccelerator)
{
    return Task.Run(() =>
    {
        // Check if we already have an inference session
        if (_inferenceSession != null)
        {
            return;
        }

        // Set up ONNX Runtime (ORT) session options object
        SessionOptions sessionOptions = new();
        sessionOptions.RegisterOrtExtensions();

        if (hardwareAccelerator == HardwareAccelerator.QNN) // Check if QNN was passed
        {
            // Add the QNN execution provider if so
            Dictionary<string, string> options = new()
            {
                { "backend_path", "QnnHtp.dll" },
                { "htp_performance_mode", "high_performance" },
                { "htp_graph_finalization_optimization_mode", "3" }
            };
            sessionOptions.AppendExecutionProvider("QNN", options);
        }

        // Create a new inference session with these sessionOptions, if CPU is selected, they will be default
        _inferenceSession = new InferenceSession(modelPath, sessionOptions);
    });
}

With this function, an InferenceSession can be instantiated to fit whatever execution provider is passed in that particular situation, and then that InferenceSession can be used throughout the sample.
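If you want to confirm programmatically which execution providers your installed ONNX Runtime build exposes before configuring session options, the runtime can list them for you. A small sketch, assuming the same Microsoft.ML.OnnxRuntime package the sample already references (the provider names in the comment are typical values, not guaranteed on every device):

// List the execution providers available in the current ONNX Runtime build.
// Typical entries include "CPUExecutionProvider" and, on supported Qualcomm devices, "QNNExecutionProvider".
using Microsoft.ML.OnnxRuntime;

string[] availableProviders = OrtEnv.Instance().GetAvailableProviders();
foreach (string provider in availableProviders)
{
    Console.WriteLine(provider);
}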
What's Next

More in-depth coverage of the other samples in the Gallery will be released periodically, covering a range of what is possible with local AI on Windows. Stay tuned for more sample breakdowns coming soon. In the meantime, go check out the AI Dev Gallery to explore more samples and models on Windows. If you run into any problems, feel free to open an issue on the GitHub repository. This project is open-sourced and any feedback to help us improve the Gallery is highly appreciated.

Getting Started with the AI Dev Gallery

The AI Dev Gallery is a new open-source project designed to inspire and support developers in integrating on-device AI functionality into their Windows apps. It offers an intuitive UX for exploring and testing interactive AI samples powered by local models. Key features include:

Quickly explore and download models from well-known sources on GitHub and HuggingFace.
Test different models with interactive samples across over 25 different scenarios, including text, image, audio, and video use cases.
See all relevant code and library references for every sample.
Switch between models that run on CPU and GPU depending on your device capabilities.
Quickly get started with your own projects by exporting any sample to a fresh Visual Studio project that references the same model cache, preventing duplicate downloads.

Part of the motivation behind the Gallery was exposing developers to the host of benefits that come with on-device AI. Some of these benefits include improved data security and privacy, increased control and parameterization, and no dependence on an internet connection or third-party cloud provider.

Requirements

Device Requirements
Minimum OS Version: Windows 10, version 1809 (10.0; Build 17763)
Architecture: x64, ARM64
Memory: At least 16 GB is recommended
Disk Space: At least 20GB free space is recommended
GPU: 8GB of VRAM is recommended for running samples on the GPU

Visual Studio 2022
You will need Visual Studio 2022 with the Windows Application Development workload.

Running the Gallery

To run the Gallery:

Clone the repository: git clone https://github.com/microsoft/AI-Dev-Gallery.git
Open the solution: .\AIDevGallery.sln
Hit F5 to build and run the Gallery

Using the Gallery

The AI Dev Gallery can be navigated in two ways: the Samples View and the Models View.

Navigating Samples

In this view, samples are broken up into categories (Text, Code, Image, etc.) and then into more specific samples, like the Translate Text sample pictured below. On clicking a sample, you will be prompted to choose a model to download if you haven't run this sample before. Next to the model you can see the size of the model, whether it will run on CPU or GPU, and the associated license. Pick the model that makes the most sense for your machine.

You can also download new models and change the model for a sample later from the sample view. Just click the model dropdown at the top of the sample. The last thing you can do from the Sample pane is view the sample code and export the project to Visual Studio. Both buttons are found in the top right corner of the sample, and the code view will look like this:

Navigating Models

If you would rather navigate by models instead of samples, the Gallery also provides the model view. The model view contains a similar navigation menu on the right to navigate between models based on category. Clicking on a model will allow you to see a description of the model, the versions of it that are available to download, and the samples that use the model. Clicking on a sample will take you back over to the samples view, where you can see the model in action.

Deleting and Managing Models

If you need to clear up space or see download details for the models you are using, you can head over to the Settings page to manage your downloads. From here, you can easily see every model you have downloaded and how much space on your drive they are taking up. You can clear your entire cache for a fresh start or delete individual models that you are no longer using.
Any deleted model can be redownloaded through either the models or samples view.

Next Steps for the Gallery

The AI Dev Gallery is still a work in progress. We plan on adding more samples, models, APIs, and features, and we are evaluating adding support for NPUs to take the experience even further. If you have feedback, have noticed a bug, or have any ideas for features or samples, head over to the issue board and submit an issue. We also have a discussion board for any other topics relevant to the Gallery. The Gallery is an open-source project, and we would love contribution, feedback, and ideation!

Happy modeling!

Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance
DeepSeek-R1 is very popular, and it can achieve the same capabilities as OpenAI o1 in advanced reasoning. Microsoft has also added DeepSeek-R1 models to Azure AI Foundry and GitHub Models. We can compare DeepSeek-R1 with other available models through the GitHub Models Playground.

Note: This series revolves around the deployment of SLMs to edge devices ("Edge AI"). We will focus on deploying advanced reasoning models across different application scenarios. You can learn more in the AI Tour session BRK453.

In this experiment we want to deploy advanced reasoning models to the edge, so that they can run on edge devices with limited computing power and in offline environments. At this time, the recommendation is to use a traditional ONNX model. We can use Microsoft Olive to convert the DeepSeek-R1 Distill models. Getting started with Microsoft Olive is very straightforward. Install the Microsoft Olive library through the command line, with Python 3.10+ recommended:

pip install olive-ai

The DeepSeek-R1 Distill model series comes in different parameter sizes such as 1.5B, 7B, 8B, 14B, 32B, 70B, etc. This article is mainly based on the 1.5B, 7B, and 14B models (so Small Language Models).

CPU Inference

Let's discuss 1.5B and 7B, which are the models with lower parameter counts. We can directly use the CPU for inference to test the effect (hardware environment: Azure DevBox, AMD EPYC 7763 64-Core + 64GB Memory + 2T SSD).

Quantization conversion

olive auto-opt --model_name_or_path <Your DeepSeek-R1-Distill-Qwen-1.5B/7B local location> --output_path <Your Convert ONNX INT4 Model local location> --device cpu --provider CPUExecutionProvider --precision int4 --use_model_builder --log_level 1

You can download the converted models directly from my Hugging Face repo (Note: these models are for testing, have not been fully tested by AI Content Safety, and are not provided as official models):

DeepSeek-R1-Distill-Qwen-1.5B-ONNX-INT4-CPU
DeepSeek-R1-Distill-Qwen-7B-ONNX-INT4-CPU

Running with ONNX Runtime GenAI

Install ONNX Runtime GenAI and the ONNX Runtime CPU support libraries:

pip install onnxruntime-genai
pip install onnxruntime

Sample Code

https://github.com/kinfey/EdgeAIForAdvancedReasoning/blob/main/notebook/demo-1.5b.ipynb
https://github.com/kinfey/EdgeAIForAdvancedReasoning/blob/main/notebook/demo-7b.ipynb

Performance comparison: 1.5B vs 7B

We compare two different inference scenarios:

1. Explain 1+1=2
1.5B quantized ONNX model: memory occupied, time consumption, and number of tokens generated
7B quantized ONNX model: memory occupied, time consumption, and number of tokens generated

2. Find all pairwise different isomorphism groups with order 147 and no elements with order 49
1.5B quantized ONNX model: memory occupied, time consumption, and number of tokens generated
7B quantized ONNX model: memory occupied, time consumption, and number of tokens generated

Results

Through the test, we can see that the 1.5B DeepSeek model is more suitable for CPU inference and can be deployed on traditional PCs or IoT devices. As for the 7B model, although it reasons better, it is not very effective running on the CPU.

GPU Inference

It is ideal if we have a GPU on the edge device. We can quantize and convert the model to an ONNX model for CPU inference through Microsoft Olive, and of course it can also be converted to a model for GPU inference.
Here I take DeepSeek-R1-Distill-Qwen-14B as an example and make an inference comparison with Microsoft's Phi-4-14B.

Quantization conversion

olive auto-opt --model_name_or_path <Your Phi-4-14B or DeepSeek-R1-Distill-Qwen-14B local path > --output_path <Your converted Phi-4-14B or DeepSeek-R1-Distill-Qwen-14B local path > --device gpu --provider CUDAExecutionProvider --precision int4 --use_model_builder --log_level 1

You can download the converted models directly from my Hugging Face repo (Note: these models are for testing, have not been fully tested by AI Content Safety, and are not official models):

DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU
Phi-4-14B-ONNX-INT4-GPU

Running with ONNX Runtime GenAI CUDA

Install ONNX Runtime GenAI and the ONNX Runtime GPU support libraries:

pip install onnxruntime-genai-cuda
pip install onnxruntime-gpu

Compare the results in the GPU environment with Gradio

It is recommended to use a GPU with more than 8GB of memory. To broaden the comparison, we compare Phi-4-14B-ONNX-INT4-GPU with DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU to see the different results. We also show results from OpenAI o1-mini (it is recommended to access o1-mini through GitHub Models).

Sample Code

https://github.com/kinfey/EdgeAIForAdvancedReasoning/blob/main/notebook/Performance_AdvancedReasoning_ONNX_CPU.ipynb

You can test any prompt in Gradio to compare the results of Phi-4-14B-ONNX-INT4-GPU, DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU, and OpenAI o1-mini. DeepSeek-R1 reduces the cost of inference models and produces more instructive results on professional problems, but Phi-4-14B also has advantages in reasoning and uses lower computing power to complete inference. As for OpenAI o1-mini, it is more comprehensive and can handle all kinds of problems. If you want to deploy to an edge device, Phi-4-14B and the quantized DeepSeek-R1 are good choices for you.

This blog is just a simple test and the first in this series. Please share your feedback and continue the discussion in the Microsoft AI Discord Channel. Feel free to send me a message or comment. We look forward to sharing more around the opportunity of Edge AI and more content in this series.

Resources

DeepSeek-R1 in GitHub Models: https://github.com/marketplace/models/azureml-deepseek/DeepSeek-R1
DeepSeek-R1 in Azure AI Foundry: https://ai.azure.com/explore/models/DeepSeek-R1/version/1/registry/azureml-deepseek
Phi-4-14B on Hugging Face: https://huggingface.co/microsoft/phi-4
Learn about Microsoft Olive: https://github.com/microsoft/olive
Learn about ONNX Runtime GenAI: https://github.com/microsoft/onnxruntime-genai
Microsoft AI Discord Channel
BRK453 Exploring cutting-edge models: LLMs, SLMs, local development and more: https://aka.ms/aitour/brk453

AI Toolkit for VS Code January Update
AI Toolkit is a VS Code extension aiming to empower AI engineers in transforming their curiosity into advanced generative AI applications. This toolkit, featuring both local-enabled and cloud-accelerated inner loop capabilities, is set to ease model exploration, prompt engineering, and the creation and evaluation of generative applications. We are pleased to announce the January update to the toolkit, with support for OpenAI's o1 model and enhancements in the Model Playground and Bulk Run features.

What's New?

January's update brings several exciting new features to boost your productivity in AI development. Here's a closer look at what's included:

Support for OpenAI's new o1 Model: We've added access to the GitHub-hosted OpenAI o1 model. This new model replaces o1-preview and offers even better performance in handling complex tasks. You can start interacting with the o1 model within VS Code for free by using the latest AI Toolkit update.

Chat History Support in Model Playground: We have heard your feedback that tracking past model interactions is crucial. The Model Playground has been updated to include support for chat history. This feature saves chat history as individual files stored entirely on your local machine, ensuring privacy and security.

Bulk Run with Prompt Templating: The Bulk Run feature, introduced in the AI Toolkit December release, now supports prompt templating with variables. This allows users to create templates for prompts, insert variables, and run them in bulk. This enhancement simplifies the process of testing multiple scenarios and models.

Stay tuned for more updates and enhancements as we continue to innovate and support your journey in AI development. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features in our GitHub repo. Thank you for being a part of this journey with us!

Getting Started - Generative AI with Phi-3-mini: Running Phi-3-mini in Intel AI PC
In 2024, with the empowerment of AI, we enter the era of the AI PC. On May 20, Microsoft also introduced the concept of the Copilot+ PC, which means PCs can run SLMs/LLMs more efficiently with the support of an NPU. We can use models from the Phi-3 family combined with the new AI PC to build a simple personalized Copilot application for individuals. This content combines Intel's AI PC with Intel's OpenVINO, the NPU Acceleration Library, and Microsoft's DirectML to create a local Copilot.

AI Toolkit for Visual Studio Code: October 2024 Update Highlights
The AI Toolkit's October 2024 update revolutionizes Visual Studio Code with game-changing features for developers, researchers, and enthusiasts. Explore multi-model integration, including GitHub Models, ONNX, and Google Gemini, alongside custom model support. Dive into multi-modal capabilities for richer AI testing and seamless multi-platform compatibility across Windows, macOS, and Linux. Tailored for productivity, the enhanced Model Catalog simplifies choosing the best tools for your projects. Try it now and share feedback to shape the future of AI in VS Code!

ONNX and NPU Acceleration for Speech on ARM
This project is from students at University College London and explores the benefits of ONNX and NPU accelerators in accelerating the inference of Whisper models, developing a local Whisper model that leverages these techniques for ARM-based systems.

Building Retrieval Augmented Generation on VSCode & AI Toolkit
LLMs usually have limited knowledge about specific domains. Retrieval Augmented Generation (RAG) helps LLMs be more accurate and give relevant output for specific domains and datasets. We will see how we can do this for local models using the AI Toolkit.

Running Phi-3-vision via ONNX on Jetson Platform
Unlock the potential of NVIDIA's Jetson platform by running the Phi-3-vision model in ONNX format. Dive into the seamless process of compiling onnxruntime-genai, setting up the environment, and executing high-performance inference tasks on low-power devices like the Jetson Orin Nano. Discover how to utilize quantized models efficiently, enabling robust image and text dialogue tasks, all while keeping your GPU workload optimized. Whether you're working with FP16 or INT4 models, this guide will walk you through each step, ensuring you harness the full capabilities of edge AI on Jetson.