alerts
Alert Fine-Tuning (Sev: Low): Vulnerability Scanner Detection
Hi, we are seeing a high number of "Vulnerability Scanner Detection" alerts and facing challenges during analysis. The alerts often show Microsoft IP addresses, and some of them appear malicious.

- Can we fine-tune this to capture the actual IP scanning the environment?
- How can we determine whether the scan was successful or failed, for example by using HTTP status codes like 200 or 404?
- Is there a way to identify whether the App Service runs a platform like Joomla, Drupal, or WordPress?

Looking forward to your support on this.
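One way to approach all three questions at once, assuming the App Service streams its HTTP logs to Log Analytics via diagnostic settings (the AppServiceHTTPLogs table): summarize requests per client IP, split hits (200) from misses (404), and infer the CMS from the probed paths. The path list is illustrative, and note that Microsoft-owned addresses may simply be platform front-end infrastructure rather than the true scanner.

// Scanner activity per client IP. Paths and window are illustrative --
// tune them against what your scanners actually probe.
AppServiceHTTPLogs
| where TimeGenerated > ago(7d)
| extend SuspectedCms = case(
    CsUriStem contains "wp-login" or CsUriStem contains "xmlrpc.php", "WordPress",
    CsUriStem contains "/administrator", "Joomla",
    CsUriStem contains "/user/login", "Drupal",
    "Unknown")
| summarize
    Requests = count(),
    Succeeded = countif(ScStatus == 200),   // probe found something
    NotFound  = countif(ScStatus == 404),   // probe missed
    Paths = make_set(CsUriStem, 20)
    by CIp, SuspectedCms
| order by Requests desc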
Fetching alerts from Sentinel using Logic Apps

Hello everyone, I have a requirement to archive alerts from Sentinel. To do that I need to:

- Retrieve the alerts from Sentinel
- Send the data to an external file share

As a solution, I decided to use Logic Apps, where I will run a script to automate this process. My questions are:

- Which Sentinel API endpoints are relevant for retrieving alerts or running KQL queries to get the needed data?
- I know I will need some sort of permissions to interact with the API endpoint. What type of service account should I create in Azure, and what permissions should I provision for it?
- Are there any existing examples of Logic Apps interacting with Microsoft Sentinel? That would be helpful, as I am new to Azure.

Any help is much appreciated!
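On the API question: two endpoints commonly used for this are the Log Analytics query API (https://api.loganalytics.io/v1/workspaces/{workspaceId}/query) and the Microsoft.SecurityInsights incidents API under Azure Resource Manager. In a Logic App, the Azure Monitor Logs connector wraps the former, and a managed identity holding Microsoft Sentinel Reader (or at least Log Analytics Reader) on the workspace is typically enough to read alerts. A minimal sketch of the archival query — the column selection is illustrative:

// Pull recent Sentinel alerts for archival. Match the lookback to the
// Logic App's recurrence so consecutive runs neither overlap nor miss data.
SecurityAlert
| where TimeGenerated > ago(24h)
| project
    TimeGenerated,
    AlertName,
    AlertSeverity,
    ProviderName,
    Description,
    Entities,          // JSON string of the entities involved
    SystemAlertId      // stable ID, useful for de-duplication downstream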
How does Defender detect file version limit default changes?

Hi all, I am currently reviewing a historic article that mentions a cloud ransomware attack where attackers change the default number of file versions saved, from the default 500 down to 1, and then save over your files to make them unrecoverable. Apparently this doesn't need admin credentials; a standard user can do this themselves. All of the Microsoft guidance says that Microsoft is protected against cloud ransomware attacks of this type because of the file versioning feature, as well as being able to contact Microsoft for 14 days after such an incident so they can retrieve your data. My questions are:

- Where do I find the current settings for file version limit defaults? Is it in the OneDrive/SharePoint admin centres?
- How do I find out whether such a change has been made?
- Is there an alert already configured in Defender to detect such a change? If not, does anyone know how to set one up, e.g., with KQL and a custom detection?

I tried asking Copilot, but it just sends me to the official Microsoft documentation, so any help is greatly appreciated.
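No out-of-the-box alert name comes to mind for this specific change, but one place to start hunting is the CloudAppEvents table in advanced hunting (assuming Office 365 activity is being ingested). A versioning-limit change on a library should surface as a list-settings update; the ActionType and the "Version" substring below are assumptions — make a deliberate test change in your own tenant and confirm what actually gets logged before building a custom detection on top of this.

// Hunt for document-library settings changes that could include a
// versioning-limit edit. "ListUpdated" and the "Version" substring are
// assumptions to validate against a test change.
CloudAppEvents
| where Timestamp > ago(30d)
| where Application in ("Microsoft SharePoint Online", "Microsoft OneDrive for Business")
| where ActionType == "ListUpdated"
| where tostring(RawEventData) contains "Version"
| project Timestamp, AccountDisplayName, IPAddress, ActionType, RawEventData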
Affected rows stateful anomaly on database vs. Response rows stateful anomaly on database

Is there a difference between the two scheduled rules, "Affected rows stateful anomaly on database" and "Response rows stateful anomaly on database"? I can see that they have different descriptions:

- Affected rows stateful anomaly on database - To detect anomalous data change/deletion. This query detects SQL queries that changed/deleted a large number of rows.
- Response rows stateful anomaly on database - To detect anomalous data exfiltration. This query detects SQL queries that accessed a large number of rows.

This tells me the alert queries should be different. However, when I compare the two, they are exactly the same.
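For reference, if the two templates matched their descriptions, the only material difference should be which audit column each one baselines. Against Azure SQL audit logs routed to AzureDiagnostics (category SQLSecurityAuditEvents), that distinction looks roughly like this — a sketch of the intended difference, not the actual template:

// The two rules should differ only in the metric being baselined:
// affected_rows_d  -> rows changed/deleted (change/deletion anomaly)
// response_rows_d  -> rows returned to the client (exfiltration anomaly)
AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| summarize
    TotalAffected = sum(affected_rows_d),
    TotalResponse = sum(response_rows_d)
    by server_instance_name_s, database_name_s, bin(TimeGenerated, 1h)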
Missing auditability on use of Explorer and Advanced Hunting

Considering that Defender for Office's Explorer and Advanced Hunting can be used to gain insight into very sensitive data, we assumed this activity would be auditable, but unfortunately it is not. A Microsoft support request confirmed this, and we are confused as to why; we would strongly request that Microsoft implement audit tracking for any user, including the queries used. Explorer gives access to email subjects, and Advanced Hunting can be used to view users' files, so from a GDPR and tracking point of view we need to be able to audit our SOC team and other admins on when they access potentially personal information.
MS Defender Azure Arc Logic App

What is the best procedure for configuring a Logic App for Microsoft Defender in an Azure Arc environment? We had a very unexpected experience during onboarding: after configuring the Logic App, we missed setting a cap, and within a week it consumed over $18K USD. I believe there must be a way to fine-tune the configuration to optimize costs. From my perspective, no organization would adopt an environment with such high costs for Microsoft Defender Plan 2 without better cost control measures in place. Could you suggest best practices or optimizations to prevent such excessive consumption?
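Not a full answer, but one cheap guardrail, assuming the Logic App sends metrics to a Log Analytics workspace through diagnostic settings: watch billable executions so a runaway trigger loop shows up within hours instead of on the invoice. The metric names below are the standard Consumption-plan Logic Apps ones, but verify them against your resource, and pair the query with an Azure Monitor alert rule plus a subscription budget alert.

// Spot runaway Logic App consumption early.
AzureMetrics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where MetricName in ("BillableActionExecutions", "BillableTriggerExecutions")
| summarize Executions = sum(Total) by Resource, MetricName, bin(TimeGenerated, 1h)
| order by Executions desc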
Protecting Azure AI Workloads using Threat Protection for AI in Defender for Cloud

Understanding Jailbreak attacks

Evasion attacks involve subtly modifying inputs (images, audio files, documents, etc.) to mislead models at inference time, making them a stealthy and effective means of bypassing the AI service's inherent security controls. Jailbreak can be considered a type of evasion attack: the attacker crafts inputs that cause the AI model to bypass its safety mechanisms and produce unintended or harmful outputs, for example using techniques like Crescendo to get past security filters and produce a recipe for a Molotov cocktail. Because of the nature of working with human language, generative capabilities, and the data used in training, AI models are non-deterministic, i.e., the same input will not always produce the same output.

A "classic" jailbreak happens when an authorized operator of the system crafts jailbreak inputs in order to extend their own powers over the system. Indirect prompt injection happens when a system processes data controlled by a third party (e.g., analyzing incoming emails or documents editable by someone other than the operator) who inserts a malicious payload into that data, which then leads to a jailbreak of the system. There are various types of jailbreak-like attacks. Some, like DAN, involve adding instructions to a single user input, while others, like Crescendo, operate over multiple turns, gradually steering the conversation towards a specific outcome. Jailbreaks should therefore be seen not as a single technique but as a collection of methods in which a guardrail is circumvented by a carefully crafted input.

Understanding native protections against jailbreak

Defender for Cloud's AI Threat Protection feature (https://learn.microsoft.com/en-us/azure/defender-for-cloud/ai-threat-protection) integrates with Azure OpenAI and reviews the prompt and response for suspicious behavior (https://learn.microsoft.com/en-us/azure/defender-for-cloud/alerts-ai-workloads). For jailbreaks, the solution integrates with Azure OpenAI's content filter Prompt Shields (https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter), which uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels (safe, low, medium, and high), plus optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. When Prompt Shields detects a jailbreak attempt, it filters or annotates the user's prompt; Defender for Cloud then picks up this information and makes it available to security teams. Note that user prompts are protected from direct attacks like jailbreak by default, so once you enable Threat Protection for AI in Defender for Cloud, your security teams have complete visibility into these.

Fig 1. Threat Protection for AI alert

Tangible benefits for your security teams

Since Defender for Cloud does the undifferentiated heavy lifting here, your security governance, architecture, and operations teams all benefit:

Governance
- Content is available out of the box and is enabled by default in several critical risk scenarios. This helps meet AI security controls like OWASP LLM01: Prompt Injection (https://genai.owasp.org/llmrisk/llm01-prompt-injection/).
- You can further refine the content filter levels for each model running in AI Foundry depending on the risk, such as the data the model accesses (RAG), public exposure, etc.
- The application of the control is enabled by default.
- Control reporting is available out of the box and can follow the existing workflow you have set up for the remainder of your cloud workloads.
- Defender for Cloud provides the governance framework.

Architecture
- Threat Protection for AI can be enabled at the subscription level, so the service scales with your workloads and covers any new deployments.
- The integration with Azure OpenAI is native, so you do not need to write and manage custom patterns as you would with a third-party service.
- The service is not in-line, so you do not have to worry about downstream impact on the workload.
- Since Threat Protection for AI is a capability within Defender for Cloud, you do not need to define specific RBAC permissions for users or services.
- Alerts from the capability automatically follow the export flow you have set up for the rest of your Defender for Cloud capabilities.

Operations
- The alerts are already ingested into the Microsoft XDR portal, so you can continue threat hunting without learning new tools, thereby maximizing your existing skills (a query sketch follows at the end of this post).
- You can set up workflow automation to respond to AI alerts much like alerts from other capabilities such as Defender for Storage, so your existing Logic App patterns can be reused with small tweaks.
- Since your SOC analysts might still be learning gen AI threats and your playbooks might not yet be up to date, the alerts (see Fig 1 above) contain the steps analysts should take to resolve them.
- The alerts are available in the XDR portal, which you may already be familiar with, so there is no new solution to learn.

Fig 2. Alerts in XDR portal

- The alerts contain the prompt as evidence, in addition to other relevant attributes like IP, user details, and the targeted resource. This helps you triage alerts quickly.

Fig 3. Prompt evidence captured as part of the alert

- You can use the detected prompts to train the model to block future responses to similar user prompts.

Summary

Threat Protection for AI:
- Provides holistic coverage of your gen AI workloads
- Helps you maximize your investment in Microsoft solutions
- Reduces the need to learn another solution to protect another new workload
- Drives overall cost, time, and operational efficiency

Enroll in the preview: https://learn.microsoft.com/en-us/azure/defender-for-cloud/ai-onboarding#enroll-in-the-limited-preview
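As a concrete illustration of the threat-hunting point in the Operations list above: the Defender for Cloud AI alerts land in the standard XDR advanced-hunting alert tables, so something like the following can feed a hunt or a custom workflow. The title filter is illustrative — match it to the alert names you actually see in your tenant.

// Surface Defender for Cloud AI workload alerts in XDR advanced hunting.
AlertInfo
| where ServiceSource == "Microsoft Defender for Cloud"
| where Title has_any ("jailbreak", "prompt injection")
| join kind=leftouter AlertEvidence on AlertId
| project Timestamp, AlertId, Title, Severity, EntityType, AccountName, RemoteIP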
Create a report that contains Alerts and raw events

Hello, is there a way to automatically create a report in Sentinel that contains security incidents and alerts, as well as the raw events that triggered those alerts, and to send it somewhere for archival purposes? Any help is much appreciated!
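For the incidents-and-alerts half, a common pattern is to expand each incident's AlertIds array and join to SecurityAlert, as sketched below. The raw events are the harder part: the alert record stores the analytics rule's query (in ExtendedProperties, for scheduled rules) rather than the events themselves, so a playbook typically re-runs that stored query per alert and writes the results out alongside this summary.

// Tie Sentinel incidents to their alerts. SecurityIncident rows carry a
// JSON array of alert IDs; expand it and join on SystemAlertId.
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber      // latest revision only
| mv-expand AlertId = todynamic(AlertIds)
| extend AlertId = tostring(AlertId)
| join kind=inner (SecurityAlert | summarize arg_max(TimeGenerated, *) by SystemAlertId)
    on $left.AlertId == $right.SystemAlertId
| project IncidentNumber, Title, Severity, AlertName, AlertSeverity, Entities, ExtendedProperties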
Advanced Hunting along with a Custom Detection Rule

Good afternoon, I need some help setting up a KQL query in Advanced Hunting along with a custom detection rule to automatically isolate devices where a virus or ransomware is detected. The rule must run at NRT (near real-time) frequency. We are using Microsoft Defender for Business, which is included in the Microsoft 365 Business Premium license. Would any kind community member be able to provide me with a starting point for this? Thank you in advance!
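A possible starting point, with two caveats to verify first: custom detections and advanced hunting have historically been Defender for Endpoint Plan 2 features, so confirm they are actually exposed under Defender for Business; and the Continuous (NRT) frequency only supports single-table queries over certain tables (DeviceEvents qualifies). The ThreatName extraction below is an assumption about the AntivirusDetection payload — check it against your own data. The "Isolate device" action is selected in the detection rule's UI, not in the query.

// Candidate query for a custom detection rule with the "Isolate device"
// action. Device actions require Timestamp, ReportId, and DeviceId in
// the results. The ThreatName field name is an assumption -- verify it.
DeviceEvents
| where ActionType == "AntivirusDetection"
| extend ThreatName = tostring(parse_json(AdditionalFields).ThreatName)
| where ThreatName has_any ("Ransom", "Trojan", "Backdoor")   // tune to taste
| project Timestamp, ReportId, DeviceId, DeviceName, ThreatName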
Stop Defender isolating Domain Controllers

Recently we have experienced Defender isolating our domain controllers. It is always the same rule that causes the isolation: "Suspected AD FS DKM key read". I edited the rule and set it to auto-resolve if triggered by the DCs. I assumed this would then release the DCs from isolation, but this doesn't seem to be the case; manual intervention is still required. I either need to stop Defender alerting on this particular rule against my DCs (not ideal) or I need to stop the rule isolating the DCs. Any help would be appreciated.