securing ai
18 TopicsUnlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise in specific. The integration works for Entra connected users on the ChatGPT workspace, if you have needs that goes beyond this, please tell us why and how it impacts you. Benefits of Integrating ChatGPT Enterprise with Microsoft Purview Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications. Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively. Customizable Detection: The integration allows for the detection of built in can custom classifiers for sensitive information, which can be customized to meet the specific needs of the organization. To help ensures that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events that can generate visualisations of trends and other insights. Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into Compliant Storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control. Step-by-Step Guide to Setting Up the Integration 1. Get Object ID for the Purview account in Your Tenant: Go to portal.azure.com and search for "Microsoft Purview" in the search bar. Click on "Microsoft Purview accounts" from the search results. Select the Purview account you are using and copy the account name. Go to portal.azure.com and search for “Enterprise" in the search bar. Click on Enterprise applications. Remove the filter for Enterprise Applications Select All applications under manage, search for the name and copy the Object ID. 2. Assign Graph API Roles to Your Managed Identity Application: Assign Purview API roles to your managed identity application by connecting to MS Graph utilizing Cloud Shell in the Azure portal. Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account. Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app. (Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id This command provides the permission of Purview.ProcessConversationMessages.All to the Microsoft Purview Account allowing classification processing. Update the ObjectId to the one retrieved in step 1 for command and body parameter. Update the ResourceId to the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{a4543e1f-6e5d-4ec9-a54a-f3b8c156163f}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam It will look something like this from the command line We also need to add the permission for the application to read the user accounts to correctly map the ChatGPT Enterprise user with Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the GRAPH app. (Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id The following step adds the permission User.Read.All to the Purview application. Update the ObjectId with the one retrieved in step 1. Update the ResourceId with the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{df021288-bdef-4463-88db-98f22de89214}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam 3. Store the ChatGPT Enterprise API Key in Key Vault The steps for setting up Key vault integration for Data Map can be found here Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn When setup you will see something like this in Key vault. 4. Integrate ChatGPT Enterprise Workspace to Purview: Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace. Go to purview.microsoft.com and select Data Map, search if you do not see it on the first screen. Select Data sources Select Register Search for ChatGPT Enterprise and select Provide your ChatGPT Enterprise ID Create the first scan by selecting Table view and filter on ChatGPT Add your key vault credentials to the scan Test the connection and once complete click continue When you click continue the following screen will show up, if everything is ok click Save and run. Validate the progress by clicking on the name, completion of the first full scan may take an extended period of time. Depending on size it may take more than 24h to complete. If you click on the scan name you expand to all the runs for that scan. When the scan completes you can start to make use of the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to the users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox. 5. Review and Monitor Data: Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI, Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise 6. Microsoft Purview Communication Compliance Communication Compliance (here after CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify Jailbreak and Prompt injection attacks and flag them to IRM and for case management. Detailed steps to configure CC policies and supported configurations can be found here. 7. Microsoft Purview Insider Risk Management We believe that Microsoft Purview Insider Risk Management (here after IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels. In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach will help you to apply more stringent policies where it matters avoiding a boil the ocean approach to allow your team to get started using AI. To get started use the signals that are available to you including CC signals to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this. Do include Defender signals as well. Based on elevated risk you may select to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn. 8. eDiscovery eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. Microsoft Purview eDiscovery solutions | Microsoft Learn 9. Data Lifecycle Management Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information Automatically retain or delete content by using retention policies | Microsoft Learn. Closing By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. We are still in preview some of the features listed are not fully integrated, please reach out to us if you have any questions or if you have additional requirements.1.6KViews1like2CommentsShowcase your skills with this new Security Certification
Introducing the Microsoft Certified: Information Security Administrator Certification Designed specifically for data security and information protection professionals, our new Microsoft Certified: Information Security Administrator Certification validates the skills needed to plan and implement information security for sensitive data by using Microsoft Purview and related services. It also validates the skills needed to mitigate risks from internal and external threats by protecting data inside collaboration environments that are managed by Microsoft 365. Plus, it verifies subject matter expertise needed to participate in information security incident responses. The Microsoft Certified: Information Security Administrator Certification is currently in Beta and will become available in April 2025, and you can earn the Certification by passing Exam SC-401: Administering Information Security in Microsoft 365. While this new Certification’s study material includes learning modules from SC-400, it also includes new modules tailored to data security and information protection skillsets. Understand Microsoft Purview Insider Risk Management Microsoft Purview Insider Risk Management is a compliance solution designed to minimize internal risks by detecting, investigating, and acting on malicious and inadvertent activities within your organization. This training module provides an in-depth understanding of how to identify potential risks using analytics and create policies to manage security and compliance. By the end of this module, you'll be equipped with the knowledge to implement insider risk management effectively, ensuring user-level privacy through pseudonymization and role-based access controls. Prepare for Microsoft Purview Insider Risk Management Preparation is key to successfully implementing any security solution. The "Prepare for Microsoft Purview Insider Risk Management" training module guides you through the strategies for planning and configuring the solution to meet your organizational needs. You'll learn how to collaborate with stakeholders, understand the prerequisites for implementation, and configure settings to align with compliance and privacy requirements. This module is essential for administrators and risk practitioners looking to protect their organization's data and privacy. Create and Manage Insider Risk Management Policies Creating and managing effective policies is crucial for mitigating insider risks. This training module covers the process of developing and implementing insider risk management policies using Microsoft Purview. You'll learn how to define the types of risks to identify, configure risk indicators, and customize event thresholds for policy indicators. The module also provides insights into using templates for quick policy creation and configuring anomaly detections to identify unusual user activities. By mastering these skills, you can ensure that your organization is well-protected against potential internal threats. Identify and Mitigate AI Data Security Risks As artificial intelligence (AI) becomes increasingly integrated into business operations, understanding and mitigating AI-related data security risks is vital. The "Identify and Mitigate AI Data Security Risks" training module offers a comprehensive overview of AI security fundamentals. You'll learn about the types of security controls applicable to AI systems and the security testing procedures that can enhance the security posture of AI environments. This module is perfect for developers, administrators, and security engineers looking to safeguard their AI-driven systems. Retiring the Information Protection and Compliance Administrator Associate Certification We’re retiring the Microsoft Certified: Information Protection and Compliance Administrator Associate Certification and its related Exam SC-400: Administering Information Protection and Compliance in Microsoft 365. The Certification, related exam, and renewal assessments will all be retired on May 31, 2025. For data security and information protection professionals: We’re introducing a new Certification – more on that in the section below! For compliance professionals: We don’t have plans to create a new Certification for compliance-related roles, however we do offer Microsoft Applied Skills that can validate these skills. You can find more details in this blog. The following questions and answers can help you determine how these retirements could impact your learning goals: Q: What if I’m studying for Exam SC-400? A: If you’re currently preparing for Exam SC-400, you should take and pass the exam before May 31, 2025. If you’re just starting your preparation process, we recommend that you explore the new Information Security Administrator Certification and its related Exam SC-401: Administering Information Security in Microsoft 365. Q: I’ve already earned the Information Protection and Compliance Administrator Associate Certification. What happens now? A: If you’ve already earned the Information Protection and Compliance Administrator Associate Certification, it will stay on the transcript in your profile on Microsoft Learn. If you’re eligible to renew your Certification before May 31, 2025, we recommend that you consider doing so, because it won’t be possible to renew the Certification after this date. Find the right resources to support your security journey Whether you are looking to build on your existing expertise, need specific product documentation, or want to connect with like-minded communities, partners, and thought leaders, you can find the latest security skill-building content on our Security hub on MS Learn.1.8KViews0likes0CommentsThe security benefits of structuring your Azure OpenAI calls – The System Role
In the rapidly evolving landscape of GenAI usage by companies, ensuring the security and integrity of interactions is paramount. A key aspect is managing the different conversational roles—namely system, user, and assistant. By clearly defining and separating these roles, you can maintain clarity and context while enhancing security. In this blog post, we explore the benefits of structuring your Azure OpenAI calls properly, focusing especially on the system prompt. A misconfigured system prompt can create a potential security risk for your application, and we’ll explain why and how to avoid it. The Different Roles in an AI-Based Chat Application Any AI chat application, regardless of the domain, is based on the interaction between two primary players, the user and the assistant. The user provides input or queries. The assistant generates contextually appropriate and coherent responses. Another important but sometimes overlooked player is the designer or developer of the application. This individual determines the purpose, flow, and tone of the application. Usually, this player is referred to as the system. The system provides the initial instructions and behavioral guidelines for the model. Microsoft Defender for Cloud’s researchers identified emerging anti-pattern Microsoft Defender for Cloud (MDC) offers security posture management and threat detection capabilities across clouds and has recently released a new set of features to help organizations build secure enterprise-ready gen-AI apps in the cloud, helping them build securely and stay secure. MDC’s research experts continuously track the development patterns to enhance the offering but also to promote secure practices to their customers and the wider tech community. They are also primary contributors to the OWASP Top 10 threats for LLM (Idan Hen, research team manager). Recently, MDC's research experts identified a common anti-pattern in AI application development is emerging – appending the system to the user prompt. Mixing these sections is easy and tempting – developers often use it because it’s slightly faster while building and also allows them to maintain context through long conversations. But this practice is harmful – it introduces detrimental security risks that could easily result in 'game over' – exposing sensitive data, getting your computer abused, or making your system vulnerable to Jailbreak attacks. Diving deeper: How system prompts evaluation keeps your application secure Separate system, user and assistant prompts with Azure OpenAI ChatCompletion API Azure OpenAI Service's Chat Completion API is a powerful tool designed to facilitate rich and interactive conversational experiences. Leveraging the capabilities of advanced language models, this API enables developers to create human-like chat interactions within their applications. By structuring conversations with distinct roles—system, user, and assistant—the API ensures clarity and context throughout the dialogue: [{"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request]}, {"role": "assistant", "content": [Model’s response] } ] This structured interaction model allows for enhanced user engagement across various use cases such as customer support, virtual assistants, and interactive storytelling. By understanding and predicting the flow of conversation, the Chat Completion API helps create not only natural and engaging user experiences but securer applications, driving innovation in communication technology. Anti-pattern explained When developers append their instructions to the user prompt. The model receives single input composed by two different sources: developer and user: {"role": "user", "content”: [Developer’s instructions] + [User’s request]} {"role": "assistant", "content": [Model’s response] } When developer instructions are mingled with user input, detection and content filtering systems often struggle to distinguish between the two. Anti-pattern resulting in less secured application This blurring of input roles can facilitate easier manipulation through both direct and indirect prompt injections, thereby increasing the risk of misuse and harmful content not being detected properly by security and safety systems. Developer instructions frequently contain security-related content, such as forbidden requests and responses, as well as lists of do's and don'ts. If these instructions are not conveyed using the system role, this important method for restricting model usage becomes less effective. Additionally, customers have reported that protection systems may misinterpret these instructions as malicious behavior, leading to a high rate of false positive alerts and the unwarranted blocking of benign content. In one case, a customer described forbidden behavior and appended it to the user role. The threat detection system then flagged it as malicious user activity. Moreover, developer instructions may contain private content and information related to the application's inner workings, such as available data sources and tools, their descriptions, and legitimate and illegitimate operations. Although it is not recommended, these instructions may also include information about the logged-in user, connected data sources and information related to the application's operation. Content within the system role enjoys higher privacy; a model can be instructed not to reveal it to the user, and a system prompt leak is considered a security vulnerability. When developer instructions are inserted together with user instructions, the probability of a system prompt leak is much higher, thereby putting our application at risk. 1: Good Protection vs Poor Protection Why do developers mingle their instructions with user input? In many cases, recurring instructions improve the overall user experience. During lengthy interactions, the model tends to forget earlier conversations, including the developer instructions provided in the system role. For example, a model instructed to role-play in an English teaching application or act as a medical assistant in a hospital support application may forget its assigned role by the end of the conversation. This can lead to poor user experience and potential confusion. To mitigate this issue, it is crucial to find methods to remind the model of its role and instructions throughout the interaction. One incorrect approach is to append the developer's instructions to user input by adding them to the User role. Although it keeps developers’ instructions fresh in the model's 'memory,' this practice can significantly impact security, as we saw earlier. Enjoy both user experience and secured application To enjoy both quality detection and filtering capabilities along with a maximal user experience throughout the entire conversation, one option is to refeed developer instructions using the system role several times as the conversation continues: {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 1]} {"role": "assistant", "content": [Model’s response 1] } {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 2]} {"role": "assistant", "content": [Model’s response 2] } By doing so, we achieve the best of both worlds: maintaining the best practice of separating developer instructions from user requests using the Chat Completion API, while keeping the instructions fresh in the model's memory. This approach ensures that detection and filtering systems function effectively, our instructions get the model's full attention, and our system prompt remains secure, all without compromising the user experience. To further enhance the protection of your AI applications and maximize detection and filtering capabilities, it is recommended to provide contextual information regarding the end user and the relevant application. Additionally, it is crucial to identify and mark various input sources and involved entities, such as grounding data, tools, and plugins. By doing so, our system can achieve a higher level of accuracy and efficacy in safeguarding your AI application. In our upcoming blog post, we will delve deeper into these critical aspects, offering detailed insights and strategies to further optimize the protection of your AI applications. Start secure and stay secure when building Gen-AI apps with Microsoft Defender for Cloud Structuring your prompts securely is the best-practice when designing chatbots. There are other lines of defense that must be put in place to fully secure your environment. Sign up and Enable the new Defender for cloud threat protection for AI for active threat detection (preview). Enable posture management to cover all your cloud security risks, including new AI posture features. Further Reading Microsoft Defender for Cloud (MDC). AI protection using MDC. Chat Completion API. Security challenges related to GenAI. How to craft effective System Prompt. The role of System Prompt in Chat Completion API. Responsible AI practices for Azure OpenAI models. Asaf Harari, Data Scientist, Microsoft Threat Protection Research. Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud. Slava Reznitsky, Principal Architect, Microsoft Defender for Cloud.520Views2likes0CommentsAccelerate AI adoption with next-gen security and governance capabilities
Generative AI adoption is accelerating across industries, and organizations are looking for secure ways to harness its potential. Today, we are excited to introduce new capabilities designed to drive AI transformation with strong security and governance tools.8.8KViews2likes0CommentsInsider Risk Management empowering risky AI usage visibility and security investigations
Discover how Microsoft Purview Insider Risk Management helps you safeguard your data in the AI era and empowers security operations centers to enhance incident investigations with comprehensive data security context.4.4KViews0likes1CommentSafely activate your data estate with Microsoft Purview
60% of CDOs cite data integration challenges as a top pain-point due to lack of knowledge of where relevant data resides [1]. Companies operate on multi-platform, multi-cloud data estates making it harder than ever to seamlessly discover, secure, govern and activate data. This increases the overall complexity when enabling users to responsibly derive insights and drive business value from data. In the era of AI, data governance is no longer an afterthought, data security and data governance are now both table stakes. Data Governance is not a new concept but with the proliferation of AI and evolving regulatory landscape, data governance is critical for safeguarding data related to AI-driven business innovation. With 95% of organizations implementing or developing an AI strategy [2], customers are facing emerging governance challenges, such as: False signals: The lack of clean accurate data can cause false signals in AI which can trigger consequential business outcomes or lead to incorrect reported forecasting and regulatory fines. Time to insight: Data scientists and analysts spend 60-80% of their time on data access and preparation to feed AI initiatives which leads to staff frustration, increased OPEX, and delays in critical AI innovation priorities. Shadow innovation: Data innovation outside governance can increase business risks around data leakage, oversharing, or inaccurate outcomes. This is why federated governance has surfaced as a top priority across security and data leaders because it unlocks data innovation while maintaining appropriate data oversight to help minimize risks. Customers are seeking more unified solutions that enable data security and governance seamlessly across their complex data estate. To help customers better respond to these needs, Microsoft Purview unifies data security, data governance, and data compliance solutions across the heterogeneous data estate for the era of AI. Microsoft Purview also works closely with Microsoft Fabric to integrate capabilities that help seamlessly secure and govern data to help reduce risks associated with data activation across the Microsoft Intelligent Data Platform and across the Microsoft Cloud portfolio. Microsoft Fabric delivers a pre-integrated and optimized SaaS environment for data teams to work faster together over secure and governed data within the Fabric environment. Combining the strengths of Microsoft Purview and Microsoft Fabric enables organizations to more confidently leverage Fabric to unlock data innovation across data engineers, analysts, data scientists, and developers whilst Purview enables data security teams to extend Purview advanced data security value and enables the central data office to extend Purview advanced data governance value across Fabric, Azure, M365, and the heterogenous data estate. Furthering this vision, today Microsoft is announcing 1. a new name for the Purview Data Governance solution, Purview Unified Catalog, to better reflect its growing catalog capabilities, 2. integration with new OneLake catalog, 3. a new data quality scan engine, 4. Purview Analytics in OneLake, and 5. expanded Data Loss Prevention (DLP) capabilities for Fabric lakehouse and semantic models. Introducing Unified Catalog: a new name for the visionary solution The Microsoft Purview data governance solution, made generally available in September, delivers comprehensive visibility, data confidence, and responsible innovation—for greater business value in the era of AI. The solution streamlines metadata from disparate catalogs and sources, like OneLake, Databricks Unity, and Snowflake Polaris, into a unified experience. To better reflect these comprehensive customer benefits, Microsoft Purview Data Catalog is being renamed to Microsoft Purview Unified Catalog to exemplify the growing catalog capabilities such as deeper data quality support for more cloud sources, and Purview Analytics in OneLake. A data catalog serves as a comprehensive inventory of an organization's data assets. As the Microsoft Purview Unified Catalog continues to add on capabilities within curation, data quality, and third-party platform integration, the new Unified Catalog name reflects the current cross-cloud capability. This cross-cloud capability is illustrated in the figure below. This data product contains data assets from multiple different sources, including a Fabric lakehouse table, Snowflake Table and Azure Databricks Table. With the proper curation of analytics into data products, data users can govern data assets easier than ever. Figure 1: Curation of a data product from disparate data sources within Purview’s Unified Catalog Introducing OneLake catalog (Preview) As announced in the Microsoft Fabric blog earlier today, the OneLake catalog is a solution purpose-built for data engineers, data scientists, developers, analysts, and data consumers to explore, manage, and govern data in Fabric. The new OneLake catalog works with Purview by seamlessly connecting data assets governed by OneLake catalog into Purview Unified Catalog, enabling the central data office to centrally govern and manage data assets. The Purview Unified Catalog offers data stewards and data owners advanced capabilities for data curation, advanced data quality, end-to-end data lineage, and an intuitive global catalog that spans the data estate. For data leaders, Unified Catalog offers built-in reports for actionable insights into data health and risks and the ability to confidently govern data across the heterogeneous data estate. In figure 2, you can see how Fabric data is seamlessly curated into the Corporate Emissions Created by AI for CY2024 Data Product, built with data assets from OneLake. Figure 2: Data product curated with Fabric assets Introducing a new data quality scan engine for deeper data quality (Preview) Purview offers deeper data quality support, through a new data quality scan engine for big data platforms, including: Microsoft Fabric, Databricks Unity Catalog, Snowflake, Google Big Query, and Amazon S3, supporting open standard file and table formats. In short, this new scan engine allows businesses to centrally perform rich data quality management from within the Purview Unified Catalog. In Figure 3, you can see how users can run different data quality rules on a particular asset, in this case, a table hosted in OneLake, and when users click on “run quality scan”, the scanner runs a deep scan on the data itself, running the data quality rules in real time, and updating the quality score for that particular asset. Figure 3: Running a data quality scan on an asset living in OneLake Introducing Purview Analytics in OneLake (Preview) To further an organization’s data quality management practice, data stewards can now leverage a new Purview Analytics in OneLake capability, in preview, to extract tenant-specific metadata from the Purview Unified Catalog and publish to OneLake. This new capability enables deeper data quality and lineage investigation using the rich capabilities in Power BI within Microsoft Fabric. Figure 4: In Unified Catalog settings, a user can add self-serve analytics to Microsoft Fabric Figure 5: Curated metadata from Purview within Fabric Expanded Data Loss Prevention (DLP) capabilities for Fabric lakehouse and semantic models To broaden Purview data security features for Fabric, today we are announcing that the restrict access action in Purview DLP policies now extends to Fabric semantic models. With the restrict access action, DLP admins can configure policies to detect sensitive information in semantic models and limit access to only internal users or data owners. This control is valuable for when a Fabric tenant includes guest users and you want to limit unnecessary access to internal proprietary data. The addition of the restrict access action for Fabric semantic models augments the existing ability to detect upload of sensitive data to Fabric lakehouses announced earlier this year. Learn more about the new Purview DLP capabilities for Fabric lakehouses and semantic models in the DLP blog. Figure 6: Example of restricted access to a Fabric semantic model enforced through a Purview DLP policy. Summary With these investments in security and governance, Microsoft Purview is delivering on its vision to extend data protection customer value and innovation across your heterogenous data estate for reduced complexities and improved risk mitigation. Together Purview and Fabric set the foundations for a modern intelligent data platform with seamless security and governance to drive AI innovation you can trust. Learn more As we continue to innovate our products to expand the security and governance capabilities, check out these resources to stay informed. https://aka.ms/Try-Purview-Governance https://www.microsoft.com/en-us/security/business/microsoft-purview https://aka.ms/try-fabric [1] Top 7 Challenges in Data Integration and How to Solve Them | by Codvo Marketing | Medium [2] Microsoft internal research May 2023, N=6382.7KViews1like0CommentsStreamlining AI Compliance: Introducing the Premium Template for Indonesia's PDP Law in Purview
In today’s evolving regulatory environment, businesses must navigate complex data privacy laws while fostering customer trust, especially as AI transforms industries. To support organizations in meeting compliance requirements, we’re introducing the Premium Assessment Template for Indonesia's Personal Data Protection (PDP) Law within Microsoft Purview Compliance Manager. This powerful tool automates critical compliance tasks, simplifies assessments, and integrates seamlessly with Microsoft’s E5 security and Purview solutions, helping businesses reduce manual effort and ensure compliance more efficiently. Discover how this template can streamline your compliance efforts and build trust in an AI-driven world.4.1KViews0likes0CommentsArchitecting secure Gen AI applications: Preventing Indirect Prompt Injection Attacks
Indirect Prompt Injection is an emerging attack vector specifically designed to target and exploit Generative AI applications powered by large language models (LLMs). But what exactly is indirect prompt injection, and how can you build applications that are more resilient and less susceptible to abuse by attackers?8.9KViews1like0CommentsBest practices to architect secure generative AI applications
This blog post delves into the best practices to securely architect Gen AI applications, ensuring they operate within the bounds of authorized access and maintain the integrity and confidentiality of sensitive data.13KViews2likes1Comment