siem
Update content package Metadata
Hello Sentinel community and Microsoft. I've been working on a script that uses this API: https://learn.microsoft.com/en-us/rest/api/securityinsights/content-package/install?view=rest-securityinsights-2024-09-01&tabs=HTTP

I've managed to build everything successfully: retrieving what's installed, uninstalling, reinstalling and, lastly, updating (updating had to be "list, delete, install", though :') since there is no flag for "update available"). However, now to my issue. While all of this works like a charm through PowerShell, the metadata and hyperlinking are not being deployed at all. I have my 40 content packages successfully installed through the REST API, but I then have to open the content hub in the Sentinel GUI, filter for "Installed", select them all and press "Install". Only when I do that is the metadata and hyperlinking created. (It is most noticeable with analytic rules: after installing through the REST API, the solutions' analytic rules are not available under Analytic rules -> Rule templates. Once you press the Install button in the GUI, they appear.)

So I looked into the request that is made when pressing the button. It uses another API version; fine, I can add that to my script. But it also uses two undocumented variables carrying encrypted data, called c and t. I am also located in the EU and it makes a request to SentinelUS; I am OK with that. And, as mentioned, it is another API version (2020-06-01), while the REST API to install content packages above uses 2024-09-01. No problem. But I cannot simulate this last request, because the variables are encrypted and not available through the install REST API. They are also not possible to simulate; it ONLY works in the GUI when pressing Install. Lastly, I get yet another API version back when the install succeeds in the GUI, so in total there are three API versions involved.

Here is the code snippet I tried (it is basically a mimic of the POST request seen in the browser's network tab when pressing "Install" on the package in the content hub, after I had already installed it through the official REST API):

```powershell
function Refresh-WorkspaceMetadata {
    param (
        [Parameter(Mandatory = $true)] [string]$SubscriptionId,
        [Parameter(Mandatory = $true)] [string]$ResourceGroup,
        [Parameter(Mandatory = $true)] [string]$WorkspaceName,
        [Parameter(Mandatory = $true)] [string]$AccessToken
    )

    # Use the API version from the portal sample
    $apiVeri           = "?api-version="
    $RefreshapiVersion = "2020-06-01"

    # Build the batch endpoint URL with the query string on the batch URI
    # (backtick escapes the $ so PowerShell does not expand "$batch" as a variable)
    $batchUri = "https://management.azure.com/`$batch$apiVeri$RefreshapiVersion"

    # Construct a relative URL for the workspace resource.
    # Append dummy t and c parameters to mimic the portal's request.
    $workspaceUrl = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroup/providers/Microsoft.OperationalInsights/workspaces/$WorkspaceName$apiVeri$RefreshapiVersion&t=123456789&c=dummy"

    # Create a batch payload with several GET requests
    $requests = @()
    for ($i = 0; $i -lt 5; $i++) {
        $requests += @{
            httpMethod           = "GET"
            name                 = [guid]::NewGuid().ToString()
            requestHeaderDetails = @{
                commandName = "Microsoft_Azure_SentinelUS.ContenthubWorkspaceClient/get"
            }
            url                  = $workspaceUrl
        }
    }

    $body = @{ requests = $requests } | ConvertTo-Json -Depth 5

    try {
        $response = Invoke-RestMethod -Uri $batchUri -Method Post -Headers @{
            "Authorization" = "Bearer $AccessToken"
            "Content-Type"  = "application/json"
        } -Body $body
        Write-Host "[+] Workspace metadata refresh triggered successfully." -ForegroundColor Green
    }
    catch {
        Write-Host "[!] Failed to trigger workspace metadata refresh. Error: $_" -ForegroundColor Red
    }
}

Refresh-WorkspaceMetadata -SubscriptionId $subscriptionId -ResourceGroup $resourceGroup -WorkspaceName $workspaceName -AccessToken $accessToken
```

(Note: $subscriptionId, $resourceGroup, $workspaceName and $accessToken are defined higher up in my script.)

I have tried with and without mimicking the t and c variables; neither works. So for me, installing content hub packages for Sentinel is currently always:

1. Install through the script to get all 40 packages.
2. Visit the portal, filter for 'Installed', select them all and press 'Install'.
3. You now have all metadata and hyperlinking available in your Sentinel workspace (hunting rules, analytic rules, workbooks and playbook templates).

Has anyone else managed to get around this, or is it "GUI" gated? Greatly appreciated.
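For reference, and in case it helps anyone reproduce the working part of the flow, below is a rough sketch of what the documented install call (the 2024-09-01 content-package PUT linked at the top of this post) can look like in PowerShell. The property set shown is abbreviated and the values are placeholders; in practice they would be copied from the corresponding entry returned by the product-package list operation.

```powershell
# Sketch only: documented Sentinel content-package install (PUT), api-version 2024-09-01.
# $subscriptionId, $resourceGroup, $workspaceName and $accessToken are the same variables
# used in the script above; $packageId and the properties below are placeholders.
$packageId = "<content-package-id>"

$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup" +
       "/providers/Microsoft.OperationalInsights/workspaces/$workspaceName" +
       "/providers/Microsoft.SecurityInsights/contentPackages/$packageId" +
       "?api-version=2024-09-01"

$body = @{
    properties = @{
        contentId   = $packageId
        contentKind = "Solution"
        displayName = "<solution display name>"
        version     = "<solution version>"
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri $uri -Method Put `
    -Headers @{ "Authorization" = "Bearer $accessToken" } `
    -ContentType "application/json" `
    -Body $body
```

As the post describes, this installs the package itself but does not appear to create the template metadata that the portal's Install button creates.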
AMPLS Restrictions Preventing Outbound API Calls in Logic Apps – Any Workarounds?

Hi everyone, I'm encountering an issue where Azure Monitor Private Link Scope (AMPLS) restrictions are preventing Azure Logic Apps from making any outbound API calls, even to Microsoft-owned outbound IP addresses.

One specific problem is that when running KQL queries inside a Logic App, the Azure Monitor connector fails because it attempts to access Microsoft outbound IPs, which are blocked by the AMPLS restrictions. Since this is happening within Logic Apps itself, I don't have direct control over these outbound calls.

Has anyone found a workaround to allow Logic Apps to function correctly while keeping AMPLS in place? Would Private Endpoints, VNET Integration, or any other configuration help resolve this? Any insights or solutions would be greatly appreciated!
Fetching alerts from Sentinel using logic apps

Hello everyone, I have a requirement to archive alerts from Sentinel. To do that I need to do the following:

- Retrieve the alerts from Sentinel
- Send the data to an external file share

As a solution, I decided to proceed with Logic Apps, where I will be running a script to automate this process. My questions are the following:

-> Which API endpoints in Sentinel are relevant for retrieving alerts, or for running KQL queries to get the needed data?
-> I know that I will need some sort of permissions to interact with the API endpoint. What type of service account should I create inside Azure, and what permissions should I provision to it?
-> Are there any existing examples of Logic Apps interacting with Microsoft Sentinel? That would be helpful for me as I am new to Azure.

Any help is much appreciated!
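Not an authoritative answer, but one common pattern is to query the SecurityAlert table through the Log Analytics query API, using an Entra ID app registration that has the Log Analytics Reader (or Microsoft Sentinel Reader) role on the workspace. A minimal PowerShell sketch, with placeholder IDs, that a Logic App or script could mirror:

```powershell
# Sketch: pull recent alerts from the SecurityAlert table via the Log Analytics query API.
# Assumes an Entra ID app with Log Analytics Reader / Microsoft Sentinel Reader on the workspace;
# $tenantId, $clientId, $clientSecret and $workspaceId are placeholders.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        client_id     = $clientId
        client_secret = $clientSecret
        scope         = "https://api.loganalytics.io/.default"
        grant_type    = "client_credentials"
    }

$query = "SecurityAlert | where TimeGenerated > ago(1d) | project TimeGenerated, AlertName, AlertSeverity, Entities"

$result = Invoke-RestMethod -Method Post `
    -Uri "https://api.loganalytics.io/v1/workspaces/$workspaceId/query" `
    -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" } `
    -ContentType "application/json" `
    -Body (@{ query = $query } | ConvertTo-Json)

# $result.tables[0].rows now holds the alert rows, ready to be written to a file share.
```

The Microsoft Sentinel management API (Microsoft.SecurityInsights) also exposes incidents, and the alerts attached to them, if you prefer that route over a KQL query.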
Cannot stop CEF duplication to syslog when both processed by same Linux VM

We have a situation where we are sending CEF records from a FortiGate firewall to Microsoft Sentinel via the Common Event Format (CEF) via AMA data connector, and we also use the Syslog via AMA data connector (both on the same Ubuntu Linux VM using rsyslog). The result is that we are getting duplicates of the CEF records in the Syslog table.

I've read a lot of articles about the duplication and possible ways to fix it, but have had no success. My most recent attempt was to create a file /etc/rsyslog.d/05-filter-CEF.conf with the following entries:

```
if ($programname == "CEF") then @@127.0.0.1:28330
& stop
```

Unfortunately we still get duplicates. One article I read said to use @@127.0.0.1:25226, but then we don't get CEF records in CommonSecurityLog or Syslog at all. Is there anyone that can help?
Integrating Fluent Bit with Microsoft Sentinel

This guide will walk you through the steps required to integrate Fluent Bit with Microsoft Sentinel. Note that in this article we assume you already have a Sentinel workspace, a Data Collection Endpoint, a Data Collection Rule, an Entra ID application and, finally, a Fluent Bit installation. The Logs Ingestion API supports ingestion both into custom tables and into built-in tables such as CommonSecurityLog, Syslog, WindowsEvent and more. If you need to check which tables are supported, please see the following article: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#supported-tables

Prerequisites: Before beginning the integration process, ensure you have the following:

- An active Azure subscription with Microsoft Sentinel enabled;
- A Microsoft Entra ID application, taking note of the Client ID, Tenant ID and client secret. To create one, check this article: https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app?tabs=certificate
- A Data Collection Endpoint (DCE). To create a data collection endpoint, please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-endpoint-overview?tabs=portal
- A Data Collection Rule (DCR). Fields in the Data Collection Rule need to match exactly the destination table columns and the fields coming from the log source. To create a DCR, please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-rule-create-edit?tabs=cli Depending on the source, this may require creating a custom table or using an existing table in the Log Analytics workspace;
- Fluent Bit installed on your server or container. If you haven't yet installed Fluent Bit, the following article has instructions per operating system: https://docs.fluentbit.io/manual/installation/getting-started-with-fluent-bit

High level architecture:

Step 1: Setting up the Fluent Bit configuration file

Before stepping into the configuration: Fluent Bit has numerous output plugins, and one of them targets the Logs Ingestion API, covering both supported Sentinel tables and custom tables. You can find more information in the Fluent Bit documentation: https://docs.fluentbit.io/manual/pipeline/outputs/azure_logs_ingestion

Moving forward, in order to configure Fluent Bit to send logs into the Sentinel Log Analytics workspace, take note of the specific input plugin you are using (or intend to use) to receive logs, and of how its output can be routed to the Sentinel workspace. Most Fluent Bit input plugins let you set a "tag" key, which is then used in the output plugin's "match" field to select which logs are sent. In a scenario where multiple input plugins are used and all of them must send logs to Sentinel, a wildcard match ("*") can be used instead. As another example, if there are multiple input plugins of type "HTTP" and you want to send only a specific one to Sentinel, the "match" field must be set according to the position of that input plugin, for example "match http.2" if the desired input is the third HTTP input in the list. If nothing is specified in the "match" field, it will assume "http.0" by default.
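To make this concrete, here is a minimal sketch of what /etc/fluent-bit/fluent-bit.conf could look like for the dummy-input scenario walked through below; it stands in for the screenshots in the original post. The tenant, app, DCE, DCR and table values are placeholders you would replace with your own, and the azure_logs_ingestion parameter names are as documented by Fluent Bit.

```
[SERVICE]
    flush        5
    log_level    info

[INPUT]
    name         dummy
    tag          dummy.log
    dummy        {"Message":"this is a sample message for testing fluent bit integration to Sentinel","Activity":"fluent bit dummy input plugin","DeviceVendor":"Ubuntu"}

[OUTPUT]
    name         file
    match        dummy.log
    path         /var/log/fluent-bit-out

[OUTPUT]
    name         stdout
    match        *
    format       json_lines

[OUTPUT]
    name               azure_logs_ingestion
    match              dummy.log
    tenant_id          <tenant-id>
    client_id          <entra-app-client-id>
    client_secret      <entra-app-client-secret>
    dce_url            https://<dce-name>.<region>-1.ingest.monitor.azure.com
    dcr_id             dcr-00000000000000000000000000000000
    table_name         Custom-FluentBit_CL
    time_generated     true
```

A FILTER section (for example Fluent Bit's modify filter, to rename fields so they match the DCR schema) would sit between the INPUT and OUTPUT sections, as described below.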
The configuration file is located at "/etc/fluent-bit/fluent-bit.conf". The first part is the definition of all the input plugins, then come the filter plugins, which you can use, for example, to rename fields from the source so they match the data collection rule schema and the Sentinel table columns, and finally there are the output plugins. The sample configuration sketched above illustrates this layout.

INPUT plugins section: In this example we're going to use the "dummy" input to send sample messages to Sentinel. In your scenario, however, you could leverage other input plugins within the same config file. After everything is configured in the input section, make sure to complete the FILTER section if needed, and then move on to the output plugin section.

OUTPUT plugins section: In this section we have output plugins that write to a local file based on two tags ("dummy.log" and "logger"), an output plugin that prints the output in JSON format, and the output plugin responsible for sending data to Microsoft Sentinel. As you can see, the latter matches the "dummy.log" tag, for which we've set up the message {"Message":"this is a sample message for testing fluent bit integration to Sentinel", "Activity":"fluent bit dummy input plugin", "DeviceVendor":"Ubuntu"}. Make sure you insert the correct parameters in the output plugin, in this scenario the "azure_logs_ingestion" plugin.

Step 2: Fire Up Fluent Bit

When the file is ready to be tested, execute the following:

```
sudo /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf
```

Fluent Bit will start by initializing all the plugins defined in the config file. An access token should then be retrieved, provided everything is set up correctly in the output plugin (app registration details, data collection endpoint URL, data collection rule ID, Sentinel table, and, importantly, the output plugin name being exactly "azure_logs_ingestion"). Within a couple of minutes you should see the data in your Microsoft Sentinel table, either an existing table or a custom table created for this specific log source.

Summary

Integrating Fluent Bit with Microsoft Sentinel provides a powerful solution for log collection and analysis. By following this guide, we hope you can set up a seamless integration that enhances your organization's ability to monitor and respond to security threats. Just carefully ensure that all fields processed in Fluent Bit map exactly to the fields in the Data Collection Rule and the Sentinel table within the Log Analytics workspace.

Special thanks to Bindiya Priyadarshini, who collaborated with me on this blog post. Cheers!
MS Defender Azure Arc Logic App

What is the best procedure for configuring a Logic App for Microsoft Defender in an Azure Arc environment? We had a very unexpected experience during onboarding: after configuring the Logic App, we missed setting a cap, and within a week it consumed over $18K USD. I believe there must be a way to fine-tune the configuration to optimize costs. From my perspective, no organization would adopt an environment with such high costs for Microsoft Defender Plan 2 without better cost control measures in place. Could you suggest best practices or optimizations to prevent such excessive consumption?
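Not a full answer, but on the cost-control point specifically: one guardrail worth considering is a daily ingestion cap on the Log Analytics workspace behind the solution, so a misconfigured connector cannot ingest unbounded data. A hedged sketch, assuming the Az.OperationalInsights module version in use exposes the -DailyQuotaGb parameter (the same setting is available in the portal under the workspace's Usage and estimated costs > Daily cap):

```powershell
# Sketch: set a daily ingestion cap (in GB/day) on the workspace; values are placeholders.
# Be aware a cap causes data to be dropped once the limit is hit, so size it deliberately.
Set-AzOperationalInsightsWorkspace `
    -ResourceGroupName "<resource-group>" `
    -Name "<workspace-name>" `
    -DailyQuotaGb 10

# Review the workspace settings afterwards
Get-AzOperationalInsightsWorkspace -ResourceGroupName "<resource-group>" -Name "<workspace-name>" |
    Select-Object Name, Location, RetentionInDays, Sku
```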
Weird updates "Security Threat Intelligence" on desktop

Hi guys, my name is Mo and I am new to the XDR community 🥰 I'm observing anomalous device behavior. Upon login or wake-up, multiple virtual machines are active, some exhibiting headless screen reader functionality. This issue emerged following the installation of Microsoft security threat intelligence updates. Considering Windows Defender's machine learning and predictive maintenance capabilities, I question the deployment of these updates to my system. Is this update a standard Windows component? The associated URL is currently inaccessible. I acknowledge the potential of XR, CDN, and Hologres technologies (and other Azure/cloud-enabled features) to alter the user experience. Could someone provide clarification regarding these iterative security updates? My usage is limited to cloud platforms and reputable open-source software; I do not visit malicious websites. Thank you. #misclassification?
Introducing Threat Intelligence Ingestion Rules

Microsoft Sentinel just rolled out a powerful new public preview feature: Ingestion Rules. This feature lets you fine-tune your threat intelligence (TI) feeds before they are ingested into Microsoft Sentinel. You can now set custom conditions and actions on Indicators of Compromise (IoCs), Threat Actors, Attack Patterns, Identities, and their Relationships. Use cases include:

- Filtering out false positives: suppress IoCs from feeds known to generate frequent false positives, ensuring only relevant intel reaches your analysts.
- Extending IoC validity periods for feeds that need longer lifespans.
- Tagging TI objects to match your organization's terminology and workflows.

Get Started Today with Ingestion Rules

To create a new ingestion rule, navigate to "Intel Management" and click on "Ingestion rules". With the new Ingestion rules feature, you have the power to modify or remove indicators even before they are integrated into Sentinel. These rules act on indicators currently in the ingestion pipeline.

Note: It can take up to 15 minutes for a rule to take effect.

Use Case #1: Delete IoCs with a low confidence score at ingestion time

When ingesting IoCs from TAXII, the Upload API, or file upload, indicators are imported continuously. With pre-ingestion rules, you can filter out indicators that do not meet a certain confidence threshold. Specifically, you can set a rule to drop all indicators in the pipeline with a confidence score of 0, ensuring that only reliable data makes it through.

Use Case #2: Extending IoCs

A rule can be created to automatically extend the expiration date for all indicators in the pipeline whose confidence score is greater than 75. This ensures that these high-value indicators remain active and usable for a longer duration, enhancing the overall effectiveness of threat detection and response.

Use Case #3: Bulk Tagging

Bulk tagging is an efficient way to manage and categorize large volumes of indicators based on their confidence scores. With pre-ingestion rules, you can set up a rule to tag all indicators in the pipeline where the confidence score is greater than 75. This automated tagging helps in organizing indicators, making it easier to search, filter, and analyze them based on their tags. It streamlines the workflow and improves the overall management of indicators within Sentinel.

Managing Ingestion rules

In addition to the specific use cases above, managing ingestion rules gives you control over the entire ingestion process.

1. Reorder Rules: You can reorder rules to prioritize certain actions over others, ensuring that the most critical rules are applied first. This flexibility allows for a tailored approach to data ingestion, optimizing performance and accuracy.

2. Create From: Creating new ingestion rules from existing ones can save a significant amount of time and offers the flexibility to add logic or remove unnecessary elements. Effectively duplicating rules this way lets you adapt quickly to new requirements, streamline operations, and keep rule management efficient.

3. Delete Ingestion Rules: Over time, certain rules may become obsolete or redundant as your organizational needs and security strategies evolve. It's important to note that each workspace is limited to a maximum of 25 ingestion rules.
Having a clean and relevant set of rules ensures that your data ingestion process remains streamlined and efficient, minimizing unnecessary processing and potential conflicts. Deleting outdated or unnecessary rules allows for a more focused approach to threat detection and response, and reduces clutter, which can significantly enhance performance. By regularly reviewing and purging obsolete rules, you maintain a high level of operational efficiency and ensure that only the most critical and up-to-date rules are in place.

Conclusion

By leveraging these pre-ingestion rules effectively, you can enhance the quality and reliability of the IoCs ingested into Sentinel, leading to more accurate threat detection and an improved security posture for your organization.
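One way to sanity-check that a rule is doing what you expect (for example, the confidence-based or bulk-tagging rules above) is to query the indicators that actually landed in the workspace after ingestion. A small sketch, assuming the classic ThreatIntelligenceIndicator table and the Az.OperationalInsights PowerShell module; the workspace ID is a placeholder:

```powershell
# Sketch: verify ingested TI objects after an ingestion rule has been applied.
# Assumes indicators land in the ThreatIntelligenceIndicator table of the workspace.
$workspaceId = "<log-analytics-workspace-guid>"

$kql = @"
ThreatIntelligenceIndicator
| where TimeGenerated > ago(1d)
| summarize Count = count() by ConfidenceScore, Tags
| order by ConfidenceScore desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table ConfidenceScore, Tags, Count
```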
Data at rest Europe

Why in the world would MDE and XDR default to Europe when our entire cloud service footprint is hosted out of East US? Data at rest shows Europe instead of East US, which is our default tenant region. Also, the fact that XDR setup never asked us to set a region is the biggest bug in this stack, along with MDE. What would have caused these two to get set up in Europe, and is this configurable somewhere, in the Defender portal or another portal? I have read all the docs, and the only option seems to be to redo the entire setup. If we decided to start from the beginning, who holds the key to set the desired region for all these modules? Is it the EA, the Tenant Admin, or Microsoft Support?

Also, for streaming logs intercontinentally from Europe to Log Analytics in East US, what is the ingestion cost? I have seen several pricing models, but for my use case I need to know the dollar amount per GB for both. Not happy with how elusive Defender operates if you are not careful during initial setup from an admin perspective, or it could have been Microsoft that managed to click through without looking.
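Not an answer to the MDE/XDR data-residency question itself (which, as you note, the docs say requires redoing the setup), but before estimating cross-region ingestion costs it can help to confirm where the Log Analytics side actually lives. A small sketch, assuming the Az.OperationalInsights module:

```powershell
# Sketch: list the Log Analytics workspaces in the current subscription and the region their data resides in.
# Ingestion is billed at the destination workspace's regional rates, so the Location column is the one to check.
Get-AzOperationalInsightsWorkspace |
    Select-Object Name, ResourceGroupName, Location, Sku, RetentionInDays |
    Format-Table -AutoSize
```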