Oversharing Control at Enterprise Scale | Updates for Microsoft 365 Copilot in Microsoft Purview
Minimize risks that come with oversharing and potential data loss. Use Microsoft Purview and its new Data Security Posture Management (DSPM) for AI insights, along with new Data Loss Prevention policies for Microsoft 365 Copilot, and SharePoint Advanced Management, which is now included with Microsoft 365 Copilot. Automate site access reviews at scale and add controls to restrict access to sites if they contain highly sensitive information. Erica Toelle, Microsoft Purview Senior PM, shows how to control data visibility, automate site access reviews, and fine-tune permissions across the Pilot, Deploy, and Optimize phases.

Protect your data from unwanted exposure. Find and secure high-risk SharePoint sites with Microsoft Purview's oversharing report. Start here.

Secure Microsoft 365 Copilot adoption at scale. Check out the Pilot-Deploy-Optimize approach to align AI use with your organization's data governance. Watch here.

Boost security, compliance, and governance. Scoped DLP policies enable Microsoft 365 Copilot to respect data labels. Take a look.

Watch our video here.

QUICK LINKS:
00:00 — Minimize risk of oversharing
01:24 — Oversharing scenarios
04:03 — How oversharing can occur
05:38 — Restrict discovery & limit access
06:36 — Scope sites
07:15 — Pilot phase
08:16 — Deploy phase
09:17 — Site access reviews
10:00 — Optimize phase
10:54 — Wrap up

Link References:
Check out https://aka.ms/DeployM365Copilot
Watch our show on the basics of oversharing at https://aka.ms/SMBoversharing

Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

-Are you looking to deploy Microsoft 365 Copilot at scale, but concerned that your information is overshared? Ultimately, you want to ensure that your users and teams can only get to the data required to do their jobs and nothing more. For example, while using Microsoft 365 Copilot and interacting with work data, you don't want information surfaced that users should not have permissions to view. So, where do you even start to solve for this? You might have hundreds or thousands of SharePoint sites to assess and right-size information access. Additionally, knowing where your sensitive or high-value information resides, and making sure that the policies you set protect information continuously so you avoid returning to an oversharing state, can come with challenges.

-The good news is there are a number of updated tools and resources available to help you get a handle on all this.
In the next few minutes, I'll unpack the approach you can take to help you minimize the risks that come with oversharing and potential data loss using Microsoft Purview and its new Data Security Posture Management for AI insights, along with new Data Loss Prevention policies for Microsoft 365 Copilot and more, as well as SharePoint Advanced Management, which is now included with Microsoft 365 Copilot. This helps you automate site access reviews at scale and adds controls to restrict access to sites even if they contain highly sensitive information. First, let's look at how information oversharing can inadvertently occur when using Microsoft 365 Copilot, just as it would with everyday search.

-I'll explain how it works. When you submit a prompt, before presenting it to a large language model, the prompt is interpreted by Copilot. Using a process called Retrieval Augmented Generation, it then finds and retrieves grounding information that you are allowed to access in places like SharePoint, OneDrive, Microsoft Teams, your email and calendar, and optionally the internet, as well as other connected data sources. The retrieved information is appended to your prompt as additional context. Then that larger prompt is presented to the large language model. With that added grounding information, the response is generated, then formatted for the app that you're using. For this to work well, that information retrieval step relies on accurate search. And what's important here is that as you use Copilot, it can only retrieve information that you explicitly have access to and nothing else. This is how search works in Microsoft 365 and SharePoint. The controls you put in place to achieve just enough access will reduce data security risk, whether you intend to use Microsoft 365 Copilot or not.

-So, let me show you a few examples you may have experienced where content is overshared. I'll start in Business Chat. I'm logged in as Adele Vance from the sales team. Her customers are pressuring her for information about new products that haven't been internally or externally announced. She submits a prompt for 2025 product plans, and the response returns a few clearly sensitive documents that she shouldn't have access to, and the links in the response and in the citations take Adele right to those files.

-Now, I'm going to switch perspectives to someone on the product planning team building the confidential plan stored in a private SharePoint site. I'm working on the 2025 product plan on a small team. This is the same doc that Adele just found in Business Chat, and if you look at the top of the document right now, there is one other person who I expect in the document. Then suddenly a few more people appear to have the document open, and I don't know who these people are and they shouldn't be here. So, this file is definitely overshared.

-Now, I'm going to switch back to Adele's perspective. Beyond the product planning doc, the response also describes a new project with the code name Thunderbolt. So, I'll choose the Copilot recommended prompt to provide more details about Project Thunderbolt, and we can see a couple of recent documents with information that I, as Adele, should not have access to as a member of the sales team. In fact, if I open the file, I can get right to the detailed specifications and pricing information.

-Now, let's dig into the potential reasons why this is happening, and then I'll cover how you discover and correct these conditions at enterprise scale.
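Before we get into those reasons, here is a minimal, conceptual Python sketch of the permission-trimmed retrieval flow described above. It is purely illustrative: the in-memory document list, the access check, and the keyword scoring are simplified stand-ins and not how Copilot or Microsoft 365 search is actually implemented.

```python
# Conceptual sketch only: permission-trimmed retrieval-augmented generation.
# The "index", access check, and scoring are simplified stand-ins.
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content: str
    allowed_users: set = field(default_factory=set)  # who may read this item

INDEX = [
    Document("FY25 Sales FAQ", "General sales guidance...", {"adele", "everyone"}),
    Document("2025 Product Plan", "Confidential roadmap...", {"product-team"}),
]

def user_can_access(doc: Document, user: str) -> bool:
    # Search only ever returns items the signed-in user already has access to.
    return user in doc.allowed_users or "everyone" in doc.allowed_users

def retrieve(prompt: str, user: str, top_n: int = 3) -> list[Document]:
    # 1) Security trimming happens before relevance ranking.
    visible = [d for d in INDEX if user_can_access(d, user)]
    # 2) Naive relevance: count prompt words appearing in the document.
    words = prompt.lower().split()
    scored = sorted(visible, key=lambda d: sum(w in d.content.lower() for w in words), reverse=True)
    return scored[:top_n]

def build_grounded_prompt(prompt: str, user: str) -> str:
    # Retrieved content is appended to the user's prompt as extra context,
    # and that larger prompt is what gets sent to the language model.
    grounding = "\n".join(f"[{d.title}] {d.content}" for d in retrieve(prompt, user))
    return f"{prompt}\n\nContext:\n{grounding}"

print(build_grounded_prompt("What are the 2025 product plans?", "adele"))
```

The point of the sketch is the ordering: security trimming happens before relevance ranking, so if a confidential site is public or broadly shared, the access check passes for a user like Adele and the sensitive document lands in the grounding context, which is exactly the oversharing pattern shown in the demos above.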
First, privacy settings for SharePoint sites can be set to public or private. These settings are most commonly configured as sites are created. Often sites are set to public, which means anyone in your organization can find content contained within those sites, and by extension, so can Microsoft 365 Copilot.

-Next is setting the default sharing option to everyone in your organization. One common misperception here is that just by creating the link, you're enabling access to that file, folder, or site automatically. That's not how these links work though. Once a sharing link is redeemed or clicked on by the recipient, that person will have access to and be able to search for the shared content. There are, however, sharing approaches which auto-redeem sharing links, such as pasting the link into an email and sending that to lots of people. In that case, those recipients have access to the content and will be able to search for it immediately.

-Related to this is granting permissions to the everyone except external users group as you define membership for your SharePoint sites. This group gives everyone in your organization access and the ability to search for that information too. And you'll also want to look into permissions granted to other large and inclusive groups, which are often maintained using dynamic group membership. And if you're using Data Loss Prevention, information protection, or other classification controls from Microsoft Purview, labeled content can also trigger sharing restrictions.

-So, let's move on to addressing these common issues and the controls you will use in Microsoft 365, Microsoft Purview, and SharePoint Advanced Management. At a high level, there are two primary ways to implement protections. The first approach is to restrict content discovery so that information doesn't appear in search. Restricting discovery still allows users to access content they've previously accessed, as well as the content shared with them. The downsides are that content people should not have access to is still accessible, and importantly, Copilot cannot work with restricted content even if it's core to a person's job. So, we recommend restricting content discovery as a short-term solution.

-The second approach is to limit information access by tightening permissions on sites, folders, and individual files. This option has stronger protections against data loss, and users can still request access if they need it to do their jobs, meaning only people who need access have access. We recommend limiting access as an ongoing best practice. Then to scope the sites that you want to allow and protect, we provide a few options to help you know where to start. First, you can use the SharePoint Active sites list, where you can sort by activity to discover which SharePoint sites should be universally accessible for all employees in your organization. Then as part of the new Data Security Posture Management for AI reporting in Microsoft Purview, the oversharing report lets you easily find the sites with higher risk containing the most sensitive information that you want to protect. The sites you define to allow access and limit access will be used in later steps.
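If you prefer to work with these scoping inputs programmatically, here is a hedged Python sketch that triages two exported lists into the inputs used in later steps: a candidate allow list of the most broadly used sites (capped at 100 for the Pilot phase) and a lock-down list of the highest-risk sites. The file names and column names, such as active_sites.csv with "URL" and "Page views", and oversharing_report.csv with "Site URL" and "Total sensitive items", are hypothetical placeholders to map to whatever your SharePoint admin center and Data Security Posture Management for AI exports actually contain.

```python
# Hedged sketch: rank exported site lists to seed the allow list and lock-down list.
# File and column names below are assumptions; adjust to match your actual exports.
import csv

def load_rows(path: str) -> list[dict]:
    with open(path, newline="", encoding="utf-8-sig") as f:
        return list(csv.DictReader(f))

# Candidate allow list: the most broadly used sites, capped at 100 for
# Restricted SharePoint Search in the Pilot phase.
active = load_rows("active_sites.csv")                      # hypothetical export name
active.sort(key=lambda r: int((r.get("Page views") or "0").replace(",", "")), reverse=True)
allow_list = [r["URL"] for r in active[:100]]

# Lock-down list: sites the oversharing report flags with the most sensitive items.
oversharing = load_rows("oversharing_report.csv")           # hypothetical export name
oversharing.sort(key=lambda r: int((r.get("Total sensitive items") or "0").replace(",", "")), reverse=True)
lock_down = [r["Site URL"] for r in oversharing[:25]]

print(f"Allow-list candidates ({len(allow_list)}):")
print("\n".join(allow_list[:10]))
print(f"\nHighest-risk sites to restrict ({len(lock_down)}):")
print("\n".join(lock_down[:10]))
```

The output is only a starting point; the actual enforcement still happens through Restricted SharePoint Search, sensitivity labels, and the SharePoint Advanced Management controls covered next.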
-Now, let's move on to the steps for preparing your data for Microsoft 365 Copilot. We've mapped best practices and tools for Copilot adoption across Pilot, Deploy, and Optimize phases.

-First, in the Pilot phase, we recommend organization-wide controls to easily restrict discovery when using Copilot. This means taking your list of universally accessible sites previously mentioned, then using a capability called Restricted SharePoint Search, where you can create an allow list of up to 100 sites, then allow just those sites to be used with search in Copilot. Then, in parallel in Microsoft Purview, we'll configure ways to get visibility into Copilot usage patterns, where you can enable audit mode using Data Loss Prevention policies to detect sharing of labeled or unlabeled sensitive content. And likewise, you'll enable analysis of Copilot interactions as a part of communication compliance. Again, these approaches do not impact information access, only discoverability via Copilot and search.

-Now, let's move on to the broader Deploy phase, where you will enable Copilot for more users. Here you'll use Microsoft Purview's oversharing report to identify the sites with the most sensitive information. Controls in Microsoft Purview provide proactive information protection with sensitivity labels for your files, emails, meetings, groups, and sites. For each item, you can use more targeted controls to right-size site access by assigning permissions to specific users and groups. And when applied, these controls on the backend will move public sites to private and control access to defined site members based on the permissions you set. Next, you can enable new Data Loss Prevention for Microsoft 365 Copilot policies to exclude specific labels from Copilot prompts and responses. And you can change your DLP policies from the audit mode that you set during the Pilot phase to start blocking unnecessary sharing of labeled content, where you'll now turn on the policies in order to enforce them.

-Then, two options from SharePoint Advanced Management are to use restricted access control to limit access to individual sites, so that only members in defined security groups will have access, and to limit site access by operationalizing site owner access reviews. Then, as an additional fine-tuning option, you can target restricted content discovery on individual sites, like you see here with our leadership site, to prevent Copilot from using its content as you continue to work through access management controls. And as part of the Deploy phase, you'll disable Restricted SharePoint Search once you have the right controls in place. Together, these options will impact both access permissions, as well as discovery via Copilot and search.

-Next, the final Optimize phase is about setting your organization up for the long term. This includes permissioning, information classifications, and data lifecycle management. Here you'll continually monitor your data security risks using oversharing reports, then implement auto-labeling and classification strategies using Microsoft Purview, and ensure that as new sites are created, site owners and automated provisioning respect access management principles. These processes help ensure that your organization doesn't drift back into an oversharing state, to keep your data protected and ongoing permissions in check. Now, if we switch back to our initial user examples in Business Chat, with our controls in place, if we try the same prompts as before, you'll see that Adele can no longer access sensitive information, even if she knows exactly what to look for in her prompts. The data is now protected and access has been right-sized for everyone in the organization.
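For the ongoing monitoring piece, the audit signal you turned on back in the Pilot phase can also be pulled programmatically. Below is a hedged Python sketch that uses the Office 365 Management Activity API to list recent Audit.General content and count Copilot-related events. It assumes auditing is already enabled, an Audit.General subscription has been started, and an app registration with the ActivityFeed.Read application permission and a client secret exists; the placeholder IDs and the operation-name filter are assumptions to adapt and verify against the audit records in your own tenant.

```python
# Hedged sketch: count recent Copilot-related audit events via the
# Office 365 Management Activity API. Tenant ID, app credentials, and the
# operation-name filter are placeholders and assumptions to verify.
from datetime import datetime, timedelta, timezone
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "your-app-registration-client-id"         # placeholder
CLIENT_SECRET = "your-client-secret"                   # placeholder

def get_token() -> str:
    # Client-credentials flow scoped to the Management Activity API.
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "client_credentials",
            "scope": "https://manage.office.com/.default",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def copilot_event_count(hours: int = 24) -> int:
    headers = {"Authorization": f"Bearer {get_token()}"}
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    base = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
    # List available audit content blobs for the window (max span is 24 hours).
    listing = requests.get(
        f"{base}/subscriptions/content",
        params={
            "contentType": "Audit.General",
            "startTime": start.strftime("%Y-%m-%dT%H:%M:%S"),
            "endTime": end.strftime("%Y-%m-%dT%H:%M:%S"),
        },
        headers=headers,
    )
    listing.raise_for_status()
    count = 0
    for blob in listing.json():
        # Each content URI returns a JSON array of individual audit records.
        records = requests.get(blob["contentUri"], headers=headers).json()
        count += sum(1 for r in records if "Copilot" in str(r.get("Operation", "")))
    return count

print("Copilot-related audit events in the last 24 hours:", copilot_event_count())
```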
-So, those are the steps and tools to prepare your information for Microsoft 365 Copilot at enterprise scale, and help ensure that your data is protected and that everyone has just enough access to do their jobs. To learn more, check out aka.ms/DeployM365Copilot. Also, watch our recent show on the basics of oversharing at aka.ms/SMBoversharing for more tips to right-size permissions for SharePoint site owners. Keep watching Microsoft Mechanics for the latest updates and thanks for watching.

Protect data used in prompts with common AI apps | Microsoft Purview
Protect data while getting the benefits of generative AI with Microsoft Defender for Cloud Apps and Microsoft Purview. Safeguard against shadow IT risks with Microsoft Defender for Cloud Apps, unveiling hidden generative AI applications. Leverage Microsoft Purview to evaluate data exposure, automating policy enforcement for enhanced security. Ensure compliance with built-in data protections in Copilot for Microsoft 365, aligned with organizational policies set in Microsoft Purview, while maintaining trust and mitigating risks seamlessly across existing and future cloud applications. Erin Miyake, Microsoft Purview's Principal Product Manager, shares how to take a unified approach to protecting your data.

Block sensitive data from being used with generative AI. See how to use data loss prevention policies for content sensitivity in Microsoft Purview.

Locate and analyze generative AI apps in use. Auto-block risky apps as they're classified using updated risk assessments, eliminating the need to manually control allowed and blocked apps. See how it works.

Create data loss prevention policies. Secure data for generative AI. Steps to get started in Microsoft Purview's AI Hub.

Watch our video here:

QUICK LINKS:
00:00 — Secure your data for generative AI
01:16 — App level experiences
01:46 — Block based on data sensitivity
02:45 — Admin experience
03:57 — Microsoft Purview AI Hub
05:08 — Set up policies
05:53 — Tailor policies to your needs
06:35 — Set up AI Hub in Microsoft Purview
07:09 — Wrap Up

Link References:
For information on Microsoft Defender for Cloud Apps, go to https://aka.ms/MDA
To check out Microsoft Purview capabilities for AI, go to https://aka.ms/PurviewAI/docs
Watch our episode on Copilot for Microsoft 365 data protections at https://aka.ms/CopilotAdminMechanics
Watch our episode about Data Loss Prevention policy options at https://aka.ms/DLPMechanics

Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

-Generative AI with large language models like GPT is fast becoming a central part of everyday app experiences, with hundreds of popular apps now available and growing. But do you know which generative AI apps are being adopted via shadow IT inside your organization, and if your sensitive data is at risk?

-Today I am going to show you a unified approach to protecting your data while still getting the benefits of generative AI, with Microsoft Defender for Cloud Apps to help you quickly see what risky generative AI apps are in use, and Microsoft Purview to assess your sensitive data exposure so that you can automate policy-enforced protections based on data sensitivity and the AI app in use.
-Now, this isn't to say that there aren't safe ways to take advantage of generative AI with work data right now. Copilot for Microsoft 365, for example, has the unique advantage of data protections built in that respect your organization's data security and compliance needs. This is based on the policies you set in Microsoft Purview for your data in Microsoft 365.

-That said, the challenge is in knowing which of the generative AI apps people are using inside your organization to trust. What you want is to have policies where you can "set it and forget it" so that existing and future cloud apps are visible to IT, and if the risk thresholds you set are met, they're blocked and audited. Let's start with the user experience. Here, I'm on a managed device. I'm not signed in with a work account or connected to a VPN, and I'm trying to access an AI app that is unsanctioned by my IT and security teams.

-You'll see that the Google Gemini app in this case, and this could be any app you choose, is blocked with a red SmartScreen page and a message for why it was blocked. This app-level block is based on Microsoft Defender for Endpoint with Cloud App policies. More on that in a second. Beyond app-level policies, let's try something else. You can also act based on the sensitivity of the data being used with generative AI, for example, the copy and paste of sensitive work data from a managed device into a generative AI app. Let me show you.

-I have a Word document open, which contains sensitive information, on the left, and on the right I have OpenAI's ChatGPT web experience running, and I'm signed in using my personal account. This file is sensitive because it includes keywords we've flagged in data loss prevention policies for a confidential project named Obsidian. Let's say I want to summarize the content from the confidential Word doc.

-I'll start by selecting all the text I want and copying it to my clipboard, but when I try to paste it into the prompt, you'll see that I'm blocked and the reason why. This block was based on an existing data loss prevention policy for content sensitivity defined in Microsoft Purview, which we'll explore in a moment. Importantly, these examples did not require that my device use a VPN with firewall controls to filter sites or IP addresses, and I didn't have to use my work email account to sign into those generative AI apps for the protections to work.

-So let's switch gears to the admin perspective to see what you can do to find generative AI apps in use. To get started, you'll run cloud discovery in Microsoft Defender for Cloud Apps. It's a process that can parse network traffic logs for most major providers to discover and analyze apps in use. Once you've uploaded your networking logs, analysis can take up to 24 hours. That process then parses the traffic from your network logs and brings it together with Microsoft's intelligent and continuously updated knowledge base of cloud apps.

-The reports from your cloud discovery show you app categories, risk levels from visited apps, discovered apps with the most traffic, and top entities, which can be users or IPs, along with where various app headquarters locations are in the world. A lot of this information is easily filtered, and there are links into categories, apps, and sub reports. In fact, I'll click into generative AI here to filter on those discovered apps and find out which apps people are using.
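As a purely conceptual illustration of what that discovery pipeline produces, and not the actual Defender for Cloud Apps parser or its app catalog, here is a small Python sketch that rolls up simplified firewall-style log rows into per-app traffic and user counts for a couple of hypothetical generative AI domains.

```python
# Conceptual sketch only: roll up network log rows into per-app usage, the way a
# cloud discovery report summarizes traffic and users. The log rows and the tiny
# app catalog below are made-up stand-ins for real firewall logs and the
# continuously updated Defender for Cloud Apps catalog.
from collections import defaultdict

# Hypothetical catalog entries: destination domain -> (app name, risk score 0-10).
APP_CATALOG = {
    "chat.example-ai.com": ("Example AI Chat", 4),
    "api.generato.example": ("Generato", 7),
}

# Simplified log rows: (user, destination domain, bytes uploaded).
LOG_ROWS = [
    ("adele@contoso.com", "chat.example-ai.com", 120_000),
    ("nestor@contoso.com", "api.generato.example", 450_000),
    ("adele@contoso.com", "api.generato.example", 80_000),
]

def summarize(rows):
    usage = defaultdict(lambda: {"users": set(), "bytes": 0, "risk": 0})
    for user, domain, sent_bytes in rows:
        if domain in APP_CATALOG:               # only count cataloged apps
            name, risk = APP_CATALOG[domain]
            usage[name]["users"].add(user)
            usage[name]["bytes"] += sent_bytes
            usage[name]["risk"] = risk
    return usage

for app, stats in summarize(LOG_ROWS).items():
    print(f"{app}: {len(stats['users'])} users, {stats['bytes']} bytes uploaded, risk {stats['risk']}")
```

In the real service, this kind of aggregation, categorization, and risk scoring is what surfaces in the generative AI category of the discovery report, and it feeds the sanction and unsanction decisions described next.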
-From here, you can manually sanction or unsanction apps from the list, and you can create policies to automatically unsanction and block risky apps as they're added to this category based on continuously updated risk assessments, so that you don't need to keep returning to the policy to manually add apps. Next, to protect high-value sensitive information, that's where Microsoft Purview comes in.

-And now with the new AI Hub, it can even show you where sensitive information is used with AI apps. AI Hub gives you a holistic view of data security risks in Microsoft Copilot and in other generative AI assistants in use. It provides insights about the number of prompts sent to Microsoft Copilot experiences over time and the number of visits to other AI assistants. Below that is where you can see the total number of prompts with sensitive data across AI assistants used in your organization, and you can also see the sensitive information types being shared.

-Additionally, there are charts that break down the number of users accessing AI apps by insider risk severity level, including Microsoft Copilot as well as other AI assistants in use. Insider risk severity levels for users reflect potentially risky activities and are calculated by insider risk management in Microsoft Purview. Next, in the Activity Explorer, you'll find a detailed view of the interactions with AI assistants, along with information about the sensitive information types, content labels, and file names. You can drill into each activity for more information, with details about the sensitive information that was added to the prompt.

-All of this detail is super useful because it can help you fine-tune your policies further. In fact, let's take a look at how simple it is to set up policies. From the policies tab, you can easily create policies to get started. I'll choose the "Fortify your data security for generative AI" policy template. It's designed to protect against unwanted content sharing with AI assistants.

-You'll see that this sets up built-in risk levels for Adaptive Protection. It also creates data loss prevention policies to prevent pasting or uploading sensitive information by users with an elevated risk level. This is initially configured in test mode, but as I'll show, you can edit this later, and if you don't have labels already set up, default labels for content classification will be set up for you so that you can preserve document access rights in Copilot for Microsoft 365.

-After you review the details, it's just one click to create these policies. And as I mentioned, these policies are also editable once they've been configured, so you can tailor them to your needs. I'm in the DLP policy view, and here's the policy we just created in AI Hub. I'll select it and edit the policy. To save time, I've gone directly to the advanced rules option, and I'll edit the first one.

-Now, I'll add the sensitive info type we saw before. I'll search for Obsidian, select it, and add it. Now, if I save my changes, I can move to policy mode. Currently I'm in test mode, and when I'm comfortable with my configurations, I can select to turn the policy on immediately, and within an hour the policy will be enforced. And for more information about data loss prevention policy options, check out our recent episode at aka.ms/DLPMechanics.
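As a purely conceptual illustration of the kind of condition an advanced rule like that evaluates, and not the actual Microsoft Purview DLP engine or its sensitive information type definitions, here is a short Python sketch that flags text, like the clipboard content from the earlier paste demo, when it matches keywords for a hypothetical Project Obsidian classifier.

```python
# Conceptual sketch only: evaluate text against a keyword-based condition, the way
# a DLP rule might match a custom sensitive info type. The keywords, threshold,
# and "Project Obsidian" classifier are hypothetical stand-ins.
import re

OBSIDIAN_KEYWORDS = ["project obsidian", "obsidian launch plan", "obsidian pricing"]
MATCH_THRESHOLD = 1  # block if at least this many keyword hits are found

def evaluate_paste(text: str) -> str:
    hits = [kw for kw in OBSIDIAN_KEYWORDS if re.search(re.escape(kw), text, re.IGNORECASE)]
    # A rule in test/audit mode would only log the match; an enforced rule blocks it.
    return "block" if len(hits) >= MATCH_THRESHOLD else "allow"

print(evaluate_paste("Summary of Project Obsidian pricing for Q3"))  # -> block
print(evaluate_paste("Notes from the weekly sync"))                  # -> allow
```

The test mode versus enforced distinction in the policy above maps to the same idea: the condition evaluation is identical, and only the action taken on a match changes.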
-So that's what AI Hub and Microsoft Purview can do. And if you're wondering how to set it up for the first time, the good news is when you open AI Hub, once you have audit enabled, and if you have Copilot for Microsoft 365, you'll already start to see analytics insights populated. Otherwise, once you turn on Microsoft Purview audit, it takes up to 24 hours to initiate.

-Then you'll want to install the Microsoft Purview browser extension to detect risky user activity and get insights into user interactions with other AI assistants, and onboard devices to Microsoft Purview to take advantage of endpoint DLP capabilities to protect sensitive data from being shared. So as I demonstrated today, the combination of both Microsoft Defender for Cloud Apps and Microsoft Purview gives you the visibility you need to detect risky AI apps in use with your sensitive data and enforce automated policy protections.

-To learn more about implementing Microsoft Defender for Cloud Apps, go to aka.ms/MDA. To learn more about implementing Microsoft Purview capabilities for AI, go to aka.ms/PurviewAI/docs. And for a deeper dive on Copilot for Microsoft 365 protections, check out our recent episode at aka.ms/CopilotAdminMechanics. Of course, keep watching Microsoft Mechanics for the latest tech updates, and thanks for watching.