security copilot
22 TopicsIntegrating API data into Microsoft Security Copilot using custom logs and KQL plugins
Microsoft Security Copilot (Copilot) is a generative Artificial Intelligence (AI) system for cybersecurity use cases. Copilot is not a monolithic system but is an ecosystem running on a platform that allows data requests from multiple sources using a unique plugin mechanism. Currently, it supports API, GPT, and KQL-based plugins, API and KQL-based plugins can be used to pull external data into Security Copilot. In this blog post we will discuss how both methods can be combined and how we can get data only available through the API to Security Copilot, while benefiting from KQL plugin simplicity. KQL vs. API plugins KQL-based plugins can gather insights from Microsoft Sentinel workspaces, M365 Defender XDR, and Azure Data Explorer clusters. Such plugins do not require any development skills beyond ability to write KQL queries, do not require any additional authentication mechanism and can be easily extended with new features/queries as needed. API plugins give Copilot the capability to pull data from any external data source if it supports REST API, e.g. allowing Copilot to make Graph API calls. KQL and API plugins each have their specific use cases. The KQL option is often chosen for its simplicity and due to certain limitations associated with API plugins: API plugin request body schemas are limited to a depth of 1, which means they cannot handle deeply nested data structures. Output from APIs often needs to be parsed before Security Copilot can ingest it, as Security Copilot, like all other large language models (LLMs) based applications, has limits on how much information it can process at once, known as a "token limit". API must be publicly available for Copilot to access it, which means API endpoint must be properly secured and authentication method must be supported by Copilot. The best of both worlds A possible solution is to integrate data available only through the API into the Log Analytics workspace, allowing for subsequent querying via KQL. The solution consists of two parts: Logic App to query API data and send it to Log Analytics (Sentinel) workspace. Custom KQL plugin for Security Copilot to query custom tables. As an example, we will build a solution that allows querying Defender XDR Secure Score historical data, which is currently only available through Graph API. Create Logic App to store data retrieved via API in Log Analytics workspace We will start with building a simple Logic App to get API data and send it to Log Analytics. While we use Secure Score data, as an example, the same method can be used for any other data that does not change often and suitable for KQL table storage. The Logic App will do the following: Logic App is triggered once a day, in accordance with Secure Score update schedule (once in 24 hours). It gets the latest Secure Score via HTTP call to Graph API: HTTP GET to https://graph.microsoft.com/v1.0/security/secureScores?$top=1: Graph API call is authenticated via Managed Identity: Managed Identity will require SecurityEvents.Read.All permission to get access to Secure Score data: Connect-AzAccount $GraphAppId = "00000003-0000-0000-c000-000000000000" $NameOfMSI = "LOGIC_APP_NAME” $Permission = "SecurityEvents.Read.All" $GraphServicePrincipal = Get-AzADServicePrincipal -AppId $GraphAppId $AppRole = $GraphServicePrincipal.AppRole | Where-Object { $_.Value -eq $Permission -and $_.Origin -contains "Application" } New-AzADServicePrincipalAppRoleAssignment ` -ServicePrincipalDisplayName $NameOfMSI ` -ResourceDisplayName $GraphServicePrincipal.DisplayName ` -AppRoleId $AppRole.Id Received Secure Score data is sent to Log Analytics workspace using built-in Azure Log Analytics Data Collector connector. For convenience we will split data returned by Secure Score API into two categories to store them in different custom log tables: overall Secure Score values and specific values for each security control. Managed Identity assigned to the Logic App will have to be granted Log Analytics Contributor role on the chosen Log Analytics workspace: As a result of running Logic App for the first time, two custom logs will be generated in the selected Log Analytics workspace. SecureScoreControlsXDR_CL log will contain all information about specific controls: SecureScoreXDR_CL will contain just one entry per day, but it is handy when it comes to tracking Secure Score changes in the organization. Create custom KQL Plugin for Security Copilot Now when we have our data conveniently stored in Log Analytics workspace, we can proceed to creation of custom KQL plugin. Below is an example of such plugin with some basic skills, but thanks to simplicity of KQL plugins, it can easily be extended and adjusted to ones needs: Descriptor: Name: SecureScoreXDRPlugin DisplayName: Defender XDR Secure Score plugin Description: Skills to query and track Microsoft Defender XDR Secure Score SkillGroups: - Format: KQL Skills: - Name: GetSecureScoreXDR DisplayName: Get Defender XDR Secure Score for specific date Description: Queries Defender XDR Secure Score current status for Apps, Identity, Devices, Data and Total ExamplePrompts: - 'Get Defender Secure Score for today' - 'Get Secure Score for 2022-01-01' - 'What is the current Secure Score' - 'What was Secure Score 7 days ago' Inputs: - Name: date Description: The date to query the Secure Score for Required: true Settings: Target: Defender Template: |- let specifieddate = todatetime('{{date}}'); SecureScoreControlsXDR_CL | where TimeGenerated between (startofday(specifieddate) .. endofday(specifieddate)) | summarize IdentityScore = sumif(score_d, controlCategory_s == "Identity"), AppsScore = sumif(score_d, controlCategory_s == "Apps"), DeviceScore = sumif(score_d, controlCategory_s == "Device"), DataScore = sumif(score_d, controlCategory_s == "Data") by bin(TimeGenerated, 1d) | extend TotalScore = (IdentityScore + AppsScore + DeviceScore + DataScore) - Name: GetSecureScoreXDRChanges DisplayName: Get Defender XDR Secure Score controls changes for the past 7 days Description: Queries Defender XDR Secure Score and shows changes during the past 7 days ExamplePrompts: - 'How did secure score change in the past week' - 'What are secure score controls changes' - 'Show recent changes across secure score controls' - 'Show secure score changes for the past 7 days' Inputs: - Name: date Description: The date to query the Secure Score for Required: true Settings: Target: Defender Template: |- let specifieddate = todatetime('{{date}}'); let Controls = SecureScoreControlsXDR_CL | project TimeGenerated, RecommendationCategory=controlCategory_s, ControlName=controlName_s, Recommendation=description_s, ImplementationStatus=implementationStatus_s, ControlScore = score_d | where TimeGenerated >= specifieddate; Controls | summarize distinctScoreCount = count_distinct(ControlScore) by ControlName | where distinctScoreCount > 1 | join kind=inner ( Controls ) on ControlName | summarize TimeGenerated = max(TimeGenerated) by ControlName | join kind=inner ( Controls) on TimeGenerated, ControlName | project TimeGenerated, ControlName, RecommendationCategory, Recommendation, ImplementationStatus, ControlScore Now we need to save text above to YAML file and add it as custom KQL plugin: Once plugin is deployed, we can query and track Secure Score data using Security Copilot: Conclusion Storing non-log data within a Log Analytics workspace is an established practice. This method has been used to allow security analysts easy access to supplementary data via KQL, facilitating its use in KQL queries for detection enrichment purposes. As illustrated in the scenario above, we can still generate alerts based on this data, such as notifications for declining Secure Scores. Additionally, this approach now enables further AI-powered Security Copilot scenarios.603Views1like0CommentsPurview Webinars
Register for all webinars here🔗 Upcoming Microsoft Purview Webinars MAR 12 (8:00AM) Microsoft Purview | Microsoft Purview AMA - Data Security, Compliance, and Governance MAR 18 (8:00AM) Microsoft Purview | Microsoft Teams and Purview Information Protection: Inheriting Sensitivity Labels from Shared Files to Teams Meetings Microsoft Purview Information Protection now supports label policy settings to apply inheritance from shared files to meetings. This enhances protection in Teams when sensitive files are shared in Teams chat or live shared during meeting. MAR 19 (8:00AM) Microsoft Purview | Unlocking the Power of Microsoft Purview for ChatGPT Enterprise Join us for an exciting presentation where we unveil the seamless integration between Microsoft Purview and ChatGPT Enterprise. Discover how you can effortlessly set up and integrate these powerful tools to ensure that interactions are securely captured, meet regulatory requirements and manage data effectively. Don't miss out on this opportunity to learn about the future of intelligent data management and AI-driven insights! 2025 Past Recordings JAN 8 - Microsoft Purview AMA | Blog Post 📺 Subscribe to our Microsoft Security Community YouTube channel for ALL Microsoft Security webinar recordings, and more!433Views0likes0CommentsThe security benefits of structuring your Azure OpenAI calls – The System Role
In the rapidly evolving landscape of GenAI usage by companies, ensuring the security and integrity of interactions is paramount. A key aspect is managing the different conversational roles—namely system, user, and assistant. By clearly defining and separating these roles, you can maintain clarity and context while enhancing security. In this blog post, we explore the benefits of structuring your Azure OpenAI calls properly, focusing especially on the system prompt. A misconfigured system prompt can create a potential security risk for your application, and we’ll explain why and how to avoid it. The Different Roles in an AI-Based Chat Application Any AI chat application, regardless of the domain, is based on the interaction between two primary players, the user and the assistant. The user provides input or queries. The assistant generates contextually appropriate and coherent responses. Another important but sometimes overlooked player is the designer or developer of the application. This individual determines the purpose, flow, and tone of the application. Usually, this player is referred to as the system. The system provides the initial instructions and behavioral guidelines for the model. Microsoft Defender for Cloud’s researchers identified emerging anti-pattern Microsoft Defender for Cloud (MDC) offers security posture management and threat detection capabilities across clouds and has recently released a new set of features to help organizations build secure enterprise-ready gen-AI apps in the cloud, helping them build securely and stay secure. MDC’s research experts continuously track the development patterns to enhance the offering but also to promote secure practices to their customers and the wider tech community. They are also primary contributors to the OWASP Top 10 threats for LLM (Idan Hen, research team manager). Recently, MDC's research experts identified a common anti-pattern in AI application development is emerging – appending the system to the user prompt. Mixing these sections is easy and tempting – developers often use it because it’s slightly faster while building and also allows them to maintain context through long conversations. But this practice is harmful – it introduces detrimental security risks that could easily result in 'game over' – exposing sensitive data, getting your computer abused, or making your system vulnerable to Jailbreak attacks. Diving deeper: How system prompts evaluation keeps your application secure Separate system, user and assistant prompts with Azure OpenAI ChatCompletion API Azure OpenAI Service's Chat Completion API is a powerful tool designed to facilitate rich and interactive conversational experiences. Leveraging the capabilities of advanced language models, this API enables developers to create human-like chat interactions within their applications. By structuring conversations with distinct roles—system, user, and assistant—the API ensures clarity and context throughout the dialogue: [{"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request]}, {"role": "assistant", "content": [Model’s response] } ] This structured interaction model allows for enhanced user engagement across various use cases such as customer support, virtual assistants, and interactive storytelling. By understanding and predicting the flow of conversation, the Chat Completion API helps create not only natural and engaging user experiences but securer applications, driving innovation in communication technology. Anti-pattern explained When developers append their instructions to the user prompt. The model receives single input composed by two different sources: developer and user: {"role": "user", "content”: [Developer’s instructions] + [User’s request]} {"role": "assistant", "content": [Model’s response] } When developer instructions are mingled with user input, detection and content filtering systems often struggle to distinguish between the two. Anti-pattern resulting in less secured application This blurring of input roles can facilitate easier manipulation through both direct and indirect prompt injections, thereby increasing the risk of misuse and harmful content not being detected properly by security and safety systems. Developer instructions frequently contain security-related content, such as forbidden requests and responses, as well as lists of do's and don'ts. If these instructions are not conveyed using the system role, this important method for restricting model usage becomes less effective. Additionally, customers have reported that protection systems may misinterpret these instructions as malicious behavior, leading to a high rate of false positive alerts and the unwarranted blocking of benign content. In one case, a customer described forbidden behavior and appended it to the user role. The threat detection system then flagged it as malicious user activity. Moreover, developer instructions may contain private content and information related to the application's inner workings, such as available data sources and tools, their descriptions, and legitimate and illegitimate operations. Although it is not recommended, these instructions may also include information about the logged-in user, connected data sources and information related to the application's operation. Content within the system role enjoys higher privacy; a model can be instructed not to reveal it to the user, and a system prompt leak is considered a security vulnerability. When developer instructions are inserted together with user instructions, the probability of a system prompt leak is much higher, thereby putting our application at risk. 1: Good Protection vs Poor Protection Why do developers mingle their instructions with user input? In many cases, recurring instructions improve the overall user experience. During lengthy interactions, the model tends to forget earlier conversations, including the developer instructions provided in the system role. For example, a model instructed to role-play in an English teaching application or act as a medical assistant in a hospital support application may forget its assigned role by the end of the conversation. This can lead to poor user experience and potential confusion. To mitigate this issue, it is crucial to find methods to remind the model of its role and instructions throughout the interaction. One incorrect approach is to append the developer's instructions to user input by adding them to the User role. Although it keeps developers’ instructions fresh in the model's 'memory,' this practice can significantly impact security, as we saw earlier. Enjoy both user experience and secured application To enjoy both quality detection and filtering capabilities along with a maximal user experience throughout the entire conversation, one option is to refeed developer instructions using the system role several times as the conversation continues: {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 1]} {"role": "assistant", "content": [Model’s response 1] } {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 2]} {"role": "assistant", "content": [Model’s response 2] } By doing so, we achieve the best of both worlds: maintaining the best practice of separating developer instructions from user requests using the Chat Completion API, while keeping the instructions fresh in the model's memory. This approach ensures that detection and filtering systems function effectively, our instructions get the model's full attention, and our system prompt remains secure, all without compromising the user experience. To further enhance the protection of your AI applications and maximize detection and filtering capabilities, it is recommended to provide contextual information regarding the end user and the relevant application. Additionally, it is crucial to identify and mark various input sources and involved entities, such as grounding data, tools, and plugins. By doing so, our system can achieve a higher level of accuracy and efficacy in safeguarding your AI application. In our upcoming blog post, we will delve deeper into these critical aspects, offering detailed insights and strategies to further optimize the protection of your AI applications. Start secure and stay secure when building Gen-AI apps with Microsoft Defender for Cloud Structuring your prompts securely is the best-practice when designing chatbots. There are other lines of defense that must be put in place to fully secure your environment. Sign up and Enable the new Defender for cloud threat protection for AI for active threat detection (preview). Enable posture management to cover all your cloud security risks, including new AI posture features. Further Reading Microsoft Defender for Cloud (MDC). AI protection using MDC. Chat Completion API. Security challenges related to GenAI. How to craft effective System Prompt. The role of System Prompt in Chat Completion API. Responsible AI practices for Azure OpenAI models. Asaf Harari, Data Scientist, Microsoft Threat Protection Research. Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud. Slava Reznitsky, Principal Architect, Microsoft Defender for Cloud.524Views2likes0CommentsStrengthen your data security posture in the era of AI with Microsoft Purview
Organizations face challenges with fragmented data security solutions and the amplified risks due to generative AI. We are now introducing Microsoft Purview Data Security Posture Management (DSPM) in public preview, which provides comprehensive visibility into sensitive data, contextual insights, and continuous risk assessment. DSPM is integrated with Microsoft 365 and Windows devices, leveraging generative AI through Security Copilot for deeper investigations and efficient risk management, and provides several capabilities across centralized visibility, actionable policy recommendations, and continuous risk assessment to enhance data security.15KViews3likes0CommentsUnleashing the power of Microsoft Purview with Security Copilot
With cyber threats escalating in scale and complexity, generative AI (GenAI) is redefining data security by enabling faster, smarter threat detection and response. Unlike traditional security systems, which often rely on rigid rules and past patterns, GenAI continuously learns and adapts, identifying anomalies and suspicious activities that would otherwise remain undetected. Recent research underscores this shift, showing that organizations using AI-powered security solutions can cut data breach costs by as much as 22%[1] and reduce incident response times by up to 50%[2], marking a major leap forward in protecting critical data. GenAI is also transforming the way investigations are conducted, helping security teams delve deeper into complex incidents with speed and precision. By automating the analysis of massive datasets, GenAI can uncover critical insights in minutes, rather than days. This rapid investigative power not only enhances response times but also strengthens predictive security measures, empowering organizations to stay ahead of emerging threats in an increasingly volatile cyber landscape. That’s why today we’re thrilled to announce the most recent integrations of Security Copilot with Microsoft Purview, taking data security teams’ experience and investigations to the next level. Fortifying data security posture with the power of generative AI Visibility into data and user activities is considered vital for most organizations to understand the efficacy of their data security programs. Today we are excited to announce the public preview of Microsoft Purview Data Security Posture Management (DSPM), that for the first time brings together insights from Microsoft Purview Information Protection, Data Loss Prevention, and Insider Risk Management in a centralized place, providing visibility into data security risks and recommending controls to protect data. DSPM offers contextual insights into data, its usage, and continuous risk assessment of your evolving data landscape, and it can be enhanced by Security Copilot for deeper investigations and uncovering unseen risks with AI-powered insights. With Security Copilot embedded in DSPM, organizations can gain more out of DSPM by accessing GenAI-powered insights in natural language. Data Security teams can conduct deeper investigations to better understand potential risks to their data. DSPM with the embedded Security Copilot capabilities will help teams get started and prioritize their efforts through: Starting suggested prompts: These are contextually relevant insights for the top data risks in your organizations such as ‘Which sensitive files were shared outside the org from SharePoint last week?”. Right in the DSPM experience, your teams can see five categories such as ‘alerts to prioritize’, ‘sensitive data leaks detected’, ‘devices at risk’, and ‘risky sequenced activity’. Suggested prompts: Building on the response to these starting prompts or user-entered open prompt, Copilot provides suggested prompts to guide you through a recommended path of investigation. Open prompts: You can further customize your analysis by using open prompts allowing you to explore investigations in many directions across data sets, alerts, users, and activities. Security Copilot in DSPM enables teams to discover previously unseen risks and accelerate data security by suggesting scenarios and prompts that can help triage and prioritize risks. Through these guided investigations, Copilot makes it easy to onboard newer team members and drive greater efficiency for experienced team members. Learn more about DSPM in our documentation and deep dive video. This capability will be available in public preview within the coming weeks. New enhancements to embedded Security Copilot experiences in Purview Data Loss Prevention We are also excited to announce new Security Copilot skills in public preview that are embedded in Purview DLP to assist admins. These capabilities augment the embedded & standalone Security Copilot-powered alert summarization experiences that are already available in Purview DLP. The new enhanced hunting prompts in Security Copilot allow for a deeper dive into DLP alert summaries (to complement enhanced hunting prompts in IRM summary that are already in preview) providing detailed exploration of data and users involved in incidents. This includes actions taken on the data and the specific sensitive information type (SIT) that triggered the alert. Additionally, Security Copilot now guides admins through analyzing insights within Activity Explorer. Pre-built prompts offer a birds-eye view of top activities detected over the past week, such as DLP rule matches or sensitive data used in M365 Copilot interactions. With Security Copilot, admins can also use natural language to apply the correct investigation filters to pinpoint specific activities or data. One of the persistent challenges for DLP admins has been quickly and easily grasping the full extent of their DLP policies' coverage across the environment. The new Security Copilot-powered policy insights skill addresses this by summarizing the intent, scope, and resulting matches of existing DLP policies in natural language. This skill provides insights such as the DLP policies deployed for each workload (like SharePoint or Exchange), the sensitive information types they aim to detect, and the number of rule matches associated with those policies. With this information, security admins can swiftly identify and address any protection gaps. You might ask something like “do my DLP policies cover my organization for PII information” or “What policies protect my OneDrive sites". Upskilling data security, compliance and governance with generative AI We are also thrilled to announce new Security Copilot and Purview capabilities for beyond just data security. The eDiscovery quick case summarization feature is designed to streamline case management by providing an intuitive, at-a-glance overview. This new capability allows users to quickly access a comprehensive summary of eDiscovery cases, holds, and searches, eliminating the need to navigate through multiple tabs. It consolidates information into a single, easy-to-understand summary, displaying status, statistics of completed actions, pending tasks, and ongoing jobs. This feature significantly reduces the time needed for investigations when dealing with large amounts of evidence data. eDiscovery also leverages AI to build search queries by generating keyword query language from natural language (NL2KeyQL) -already in Public Preview Other capability we’re making available now is the Knowledge Base Copilot, crafted to improve user experience by offering instant answers to general questions about the Purview platform and its solutions, utilizing public Microsoft documentation. The prompt cards are dynamically displayed based on the page context. It supports both open-prompt and zero-prompt interactions, allowing users to either submit any prompt they wish or engage with pre-defined prompts for immediate responses. This Copilot experience aims to resolve customer complaints about navigating documentation by providing direct answers to their questions, minimizing the need to open multiple tabs and search through links. Knowledge Base Copilot is a global capability accessible through the Purview portal and provides answers to queries related to all Purview solutions and capabilities. Get started Learn more about Copilot for Security in Purview with Microsoft Documentation. If you are a security partner interested in using Microsoft Security Copilot with your solutions, please sign up to join the Security Copilot Partner Ecosystem. Stay up to date on our Microsoft Purview features through the Microsoft 365 Roadmap for Microsoft Purview. Learn more about these solutions in the Microsoft Purview compliance portal. Visit your Microsoft Purview compliance portal to activate your free trial and begin using our new features. An active Microsoft 365 E3 subscription is required as a prerequisite to activate the free trial. Join the community - https://aka.ms/JoinCCP Get started with Microsoft Copilot for Security - Get started with Microsoft Copilot for Security - Training | Microsoft Learn Copilot for Security Ninja - How to Become a Microsoft Copilot for Security Ninja: The Complete Level 400 Training Microsoft Copilot for Security Community Github - GitHub - Azure/Copilot-For-Security: Microsoft Copilot for Security is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale, while remaining com [1] AI reduces data breach lifecycles and costs, Security Intelligence (2023) [2] Secureworks Threat Score Ushers In a New Age of Cybersecurity AI | Secureworks (2024)1.6KViews1like0CommentsStart learning how Copilot can help you by watching Microsoft Copilot for Security Flight School
Where traditional approaches to enterprise security can isolate security professionals from each other and business functions across highly fragmented environments, Microsoft Copilot for Security helps by redefining what security is and how security gets done. That’s why we’re thrilled to introduce Microsoft Copilot for Security Flight School! Building on the foundational learning in Learn Live: Get started with Microsoft Copilot for Security, host Ryan Munsch, Principal Tech Specialist at Microsoft, explores several intermediate technical topics (L200+) in our flight school videos—ranging from what Microsoft Copilot for Security is (and what it isn’t) to key capabilities, experiences, and how to extend Copilot to your ecosystem. Each topical video is 10 mins or less, aligning to relevant learning modules on Microsoft Learn. This can prove valuable for IT pros looking to enhance their ability to process security signals and protect at the speed and scale of AI. Training topics include: What is Microsoft Copilot for Security? AI orchestration Standalone and embedded experiences Copilot in Entra, Intune, and Purview Manage your plugins Prompting Copilot Prompt engineering Using promptbooks Logic apps Extending Copilot to your ecosystem Check out Microsoft Copilot for Security Flight School today.815Views2likes0CommentsLearn how to customize and optimize Copilot for Security with the custom Data Security plugin
This is a step-by-step guided walkthrough of how to use the custom Copilot for Security pack for Microsoft Data Security and how it can empower your organization to understand the cyber security risks in a context that allows them to achieve more.4.5KViews2likes1Comment