
Microsoft Defender XDR Blog

Get visibility into your DeepSeek use with Defender for Cloud Apps

Maayan Bar-Niv
Jan 31, 2025

The world has never seen technology adopted at the pace of AI. While AI increases productivity and is deeply integrated into business processes, it can also come with risks in terms of security, privacy and compliance.

On January 20, DeepSeek caused a big splash after announcing DeepSeek R1, a powerful and inexpensive AI reasoning model that can answer questions, solve logic problems and write its own computer programs.

DeepSeek has seen unprecedented adoption with millions of app downloads in just a few days! Chances are that many users within your organization may already be leveraging it. However, safe adoption within your business requires a careful assessment of the risks that an AI app may bring to your organization—and that’s where Microsoft Defender for Cloud Apps comes in.

Microsoft Defender for Cloud Apps helps you discover and protect more than 800 generative AI applications, now including DeepSeek. It gives you an overview of how an app is used in your organization alongside the potential risk the app poses. In fact, it profiles more than 90 separate risk attributes for each application in the Cloud App Catalog, so you can make informed choices in a unified experience.
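
If you want to fold that catalog information into your own reporting or automation, a small script can pull an app's risk profile programmatically. The sketch below is illustrative only: the tenant URL, the endpoint path, and the attribute names are assumptions rather than the documented Defender for Cloud Apps API surface, so check the API reference for your tenant before relying on it.

```python
import os
import requests

# Minimal sketch, not a documented contract: the tenant URL, the endpoint path,
# and the attribute names below are assumptions for illustration only.
TENANT_URL = "https://mytenant.us3.portal.cloudappsecurity.com"  # hypothetical tenant portal URL
API_TOKEN = os.environ["MDA_API_TOKEN"]                          # API token generated in the portal


def get_app_risk_profile(app_name: str) -> dict:
    """Fetch a discovered app's catalog entry (hypothetical endpoint)."""
    response = requests.get(
        f"{TENANT_URL}/api/v1/app_catalog/",              # assumed path, for illustration
        headers={"Authorization": f"Token {API_TOKEN}"},
        params={"name": app_name},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    profile = get_app_risk_profile("DeepSeek")
    # Print a few of the ~90 risk attributes, assuming a flat key/value response.
    for attribute in ("risk_score", "data_at_rest_encryption_method", "soc_2"):
        print(attribute, "->", profile.get(attribute, "n/a"))
```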


From the Cloud Discovery dashboard, navigate to the Generative AI section to see high-level usage statistics for DeepSeek. Here you can identify the top entities using the app, spot usage trends, and review the potential risk the app poses to your organization. You can also drill down into usage spikes, data uploads, transactions, and total traffic.
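
If you prefer to run that deep dive offline, for example on a CSV export of the discovered-app data, a few lines of pandas can surface the same signals. This is a rough sketch under assumptions: the file name and the column names (user, date, uploaded_bytes, total_bytes) are placeholders you would map to your own export.

```python
import pandas as pd

# Rough offline sketch: assumes a CSV export of DeepSeek discovery data with
# placeholder columns user, date, uploaded_bytes, total_bytes. Map these to
# whatever your own export actually contains.
usage = pd.read_csv("deepseek_discovery_export.csv", parse_dates=["date"])

# Top entities by upload volume: who is sending the most data to the app.
top_uploaders = (
    usage.groupby("user")["uploaded_bytes"]
    .sum()
    .sort_values(ascending=False)
    .head(10)
)
print(top_uploaders)

# Daily total traffic, flagging days that sit well above the norm as spikes.
daily_traffic = usage.groupby(usage["date"].dt.date)["total_bytes"].sum()
spikes = daily_traffic[daily_traffic > daily_traffic.mean() + 2 * daily_traffic.std()]
print("Days with unusual traffic:", spikes.to_dict())
```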

After analyzing the risk and usage of the application, an admin can decide which app controls to apply using app actions, for example tagging the app as sanctioned, as monitored, or as unsanctioned to block it for the organization.
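
If you want to codify that decision, one simple pattern is a small policy function that maps the catalog score and observed usage to one of those app actions. The sketch below uses invented thresholds and a hypothetical usage figure; only the three tags reflect the actual app actions, and note that in the Cloud App Catalog a higher score generally indicates a lower-risk app.

```python
# Toy decision sketch: turn the risk and usage review into one of the app
# actions (sanctioned / monitored / unsanctioned). The thresholds and the
# daily_users cut-off are invented for illustration.

def choose_app_action(catalog_score: int, daily_users: int) -> str:
    """Map a 0-10 catalog score (higher = lower risk) and observed usage to an app action."""
    if catalog_score <= 3:
        return "unsanctioned"  # block the app for the organization
    if catalog_score <= 6 or daily_users > 100:
        return "monitored"     # keep it under review while usage grows
    return "sanctioned"        # approved for business use


if __name__ == "__main__":
    print(choose_app_action(catalog_score=3, daily_users=250))  # -> unsanctioned
```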

 

Want to learn more about how Defender for Cloud Apps can help you manage AI adoption securely? Dive into our resources for a deeper conversation. Get started now.

 


  • Cassim

    The capability to monitor AI-related risks is a big plus. Many organizations, especially here in Africa, have their data accessed without their knowledge. Not only that, a lot of external data is introduced and merged with internal data, causing “unrealised data confusion”, which can in turn lead to wrong decisions or assumptions.