
AI - Machine Learning Blog

Announcing DeepSeek-V3 on Azure AI Foundry and GitHub

santiagxf
Microsoft
Mar 07, 2025


Building on the interest in DeepSeek-R1, launched one month ago, we are pleased to announce the availability of DeepSeek-V3 in the Azure AI Foundry model catalog, with token-based billing, and in the free GitHub Models experience. This latest iteration is part of our commitment to enable powerful, efficient, and accessible AI solutions through the breadth and diversity of choice in the model catalog. DeepSeek-V3 is set to empower organizations across industries to unlock value from their data.

What is DeepSeek-V3?

As DeepSeek describes it, DeepSeek-V3 is an advanced large language model (LLM) that has gained significant attention for its performance and cost-effectiveness. DeepSeek's innovations highlight the potential for achieving high-level AI performance with fewer resources, challenging existing industry norms and prompting discussions about the future direction of AI development.

Features of DeepSeek-V3:

  • Advanced Mixture-of-Experts Architecture: With 671 billion total parameters, DeepSeek-V3 employs a Mixture-of-Experts (MoE) model where 37 billion parameters are activated per token. This architecture optimizes computational efficiency without compromising performance.
  • Extended Context Length: The model supports a context length of up to 128K tokens, enabling it to process and understand longer sequences of data effectively.
  • Efficient Training and Deployment: Despite its scale, DeepSeek reports training DeepSeek-V3 over approximately 55 days at a cost of $5.58 million, utilizing around 2,000 Nvidia H800 GPUs.
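To make the Mixture-of-Experts idea concrete, here is a toy sketch of top-k expert routing. This is illustrative only: DeepSeek-V3's actual router is a learned network with its own load-balancing scheme, and the scores and expert count below are made up. The point is the mechanism by which a 671B-parameter model activates only ~37B parameters per token.

```python
def moe_route(scores, top_k=2):
    """Given router scores for each expert, pick the top_k experts
    and return (expert_indices, normalized_weights).

    Only the chosen experts' parameters run for this token; the rest
    of the model's experts stay idle, which is what keeps per-token
    compute far below the total parameter count.
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    chosen = ranked[:top_k]
    total = sum(scores[i] for i in chosen)
    weights = [scores[i] / total for i in chosen]
    return chosen, weights

# Example: 8 experts, route this token to the top 2.
idx, w = moe_route([0.1, 0.9, 0.2, 0.7, 0.05, 0.3, 0.15, 0.4], top_k=2)
print(idx)  # [1, 3]
```

The chosen experts' outputs are then combined using the normalized weights, so the layer behaves like a small weighted ensemble per token.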

Pricing

DeepSeek-V3 is now available from Microsoft through the model catalog in Azure AI Foundry. Customers can use DeepSeek-V3 via Azure AI model inference at the following prices:

| Model                | Input Pricing in USD (per 1K tokens) | Output Pricing in USD (per 1K tokens) |
|----------------------|--------------------------------------|---------------------------------------|
| DeepSeek-V3 Global   | $0.00114                             | $0.00456                              |
| DeepSeek-V3 Regional | $0.00125                             | $0.005                                |
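For a quick sense of what these rates mean in practice, the small helper below estimates the cost of a single request from the per-1K-token prices in the table above. The function name and the example token counts are our own, for illustration.

```python
def deepseek_v3_cost(input_tokens, output_tokens, deployment="global"):
    """Estimate the USD cost of one request, using the per-1K-token
    rates from the pricing table above."""
    rates = {
        "global":   (0.00114, 0.00456),  # (input, output) USD per 1K tokens
        "regional": (0.00125, 0.00500),
    }
    in_rate, out_rate = rates[deployment]
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# Example: 2,000 input tokens and 500 output tokens on the Global deployment.
print(round(deepseek_v3_cost(2000, 500), 5))  # 0.00456
```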

Why Choose DeepSeek-V3 on Azure AI Foundry?

DeepSeek-V3 stands out not only for its technical capabilities but also for its seamless integration with Azure's ecosystem. Here's what makes it unique:

  • Enhanced security and data privacy: We are committed to enabling customers to build production-ready AI applications quickly while maintaining the highest levels of safety and security. Check out this article to learn more about data handling when you deploy models from the model catalog in Azure AI Foundry.
  • Provision an API Endpoint: Create your DeepSeek-V3 endpoint in seconds with token-based billing. 
  • Experiment: Try it out in the Azure AI Foundry playground or integrate it with popular LLM app development tools. 

Getting Started with DeepSeek-V3

To get started with Azure AI Foundry and deploy the model, follow these clear steps:

  1. Try it out for free on GitHub Models to explore its capabilities.
  2. Familiarize yourself: If you're new to Azure AI Foundry, start by reviewing this documentation to understand the basics and set up your first project.
  3. Access the model catalog: Open the model catalog in Azure AI Foundry.
  4. Find the model: Click the “View models” button on the announcement card or open the DeepSeek-V3 model from the list.
  5. Deploy the model: Click on ‘Deploy’.
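Once deployed, the endpoint accepts OpenAI-style chat-completions requests. The sketch below builds such a request body; the endpoint URL and API key are placeholders you would replace with the values shown on your deployment's details page, and the exact request shape is assumed from the chat-completions format the service exposes.

```python
import json

# Hypothetical placeholders: substitute the values from your
# deployment's details page in Azure AI Foundry.
ENDPOINT = "https://<your-resource>.services.ai.azure.com/models/chat/completions"
API_KEY = "<your-api-key>"

def build_chat_request(prompt, model="DeepSeek-V3", max_tokens=256):
    """Build the JSON body for an OpenAI-style chat-completions call
    (request shape assumed, not taken from official documentation)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })

body = build_chat_request("Summarize Mixture-of-Experts in two sentences.")
# POST `body` to ENDPOINT with headers
# {"api-key": API_KEY, "Content-Type": "application/json"}.
```

For production use, prefer an official client such as the `azure-ai-inference` Python package over hand-built HTTP requests.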

Don't miss the opportunity to be at the forefront of AI innovation. Explore DeepSeek-V3 in Azure AI Foundry today and unlock the full potential of your data.

Are you ready to take your AI capabilities to the next level? Let us know how we can help you get started with DeepSeek-V3!

 

Updated Mar 07, 2025
Version 5.0