Recent Discussions
Update content package Metadata
Hello Sentinel community and Microsoft. I've been working on a script that uses this API: https://learn.microsoft.com/en-us/rest/api/securityinsights/content-package/install?view=rest-securityinsights-2024-09-01&tabs=HTTP

I've managed to script everything successfully: retrieving what's installed, uninstalling, reinstalling, and lastly updating (updating had to be done as "list, delete, install", however :'), since there is no flag for "update available").

Now to my issue. While this works like a charm through PowerShell, the metadata and hyperlinking are not deployed at all. I have my 40 content packages successfully installed through the REST API, but I then have to visit the Content hub in the Sentinel GUI, filter for "Installed", select them all, and press "Install". Only when I do this are the metadata and hyperlinking created. (It's most noticeable that the analytic rules for the content packages are not available under Analytics -> Rule templates after installing through the REST API; once you press the Install button in the GUI, they appear.)

So I looked into the request that is made when pressing the button. It uses another API version; fine, I can add that to my script. But it also sends two undocumented variables carrying encrypted data, called c and t. I'm also located in the EU and it makes a request to SentinelUS; I'm OK with that. As mentioned, it uses another API version (2020-06-01), while the REST API to install content packages above uses 2024-09-01. No problem. But I cannot simulate this last request, because those variables are encrypted and not available through the install REST API, and they are not possible to simulate; it ONLY works in the GUI when pressing Install. Lastly, I get yet another API version back when the install runs successfully through the GUI, so in total there are three API versions involved.
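For reference, the "list, delete, install" update workaround described above boils down to three calls against the same contentPackages resource path. Here is a minimal sketch of the URL construction only (the subscription, resource group, workspace, and package names are placeholder assumptions, not working values), using the same api-version as the install REST API linked above:

```python
# Sketch of the three URLs behind the "list, delete, install" update flow.
# All names below are placeholders/assumptions for illustration.
BASE = "https://management.azure.com"
API_VERSION = "2024-09-01"

def package_url(sub: str, rg: str, ws: str, package_id: str = "") -> str:
    """Build a contentPackages resource URL (collection URL if package_id is empty)."""
    url = (f"{BASE}/subscriptions/{sub}/resourceGroups/{rg}"
           f"/providers/Microsoft.OperationalInsights/workspaces/{ws}"
           f"/providers/Microsoft.SecurityInsights/contentPackages")
    if package_id:
        url += f"/{package_id}"
    return f"{url}?api-version={API_VERSION}"

# "Update" = GET the collection, DELETE the installed package, PUT it again:
list_url = package_url("sub-id", "my-rg", "my-ws")                      # GET
delete_url = package_url("sub-id", "my-rg", "my-ws",
                         "azuresentinel.azure-sentinel-solution-office365")  # DELETE
install_url = delete_url                                                # PUT with package body
```

The point being: the official API gives you install and delete, so an "update" is always a re-install.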
Here is the code snippet I tried. It is basically a mimic of the POST request seen in the browser's network tab when pressing "Install" on the package in Content hub, after I had successfully installed it through the official REST API:

function Refresh-WorkspaceMetadata {
    param (
        [Parameter(Mandatory = $true)] [string]$SubscriptionId,
        [Parameter(Mandatory = $true)] [string]$ResourceGroup,
        [Parameter(Mandatory = $true)] [string]$WorkspaceName,
        [Parameter(Mandatory = $true)] [string]$AccessToken
    )

    # Use the API version from the portal sample
    $apiVeri = "?api-version="
    $RefreshapiVersion = "2020-06-01"

    # Build the batch endpoint URL with the query string on the batch URI
    # (backtick escapes the $ so PowerShell does not expand "$batch" as a variable)
    $batchUri = "https://management.azure.com/`$batch$apiVeri$RefreshapiVersion"

    # Construct a relative URL for the workspace resource.
    # Append dummy t and c parameters to mimic the portal's request.
    $workspaceUrl = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroup/providers/Microsoft.OperationalInsights/workspaces/$WorkspaceName$apiVeri$RefreshapiVersion&t=123456789&c=dummy"

    # Create a batch payload with several GET requests
    $requests = @()
    for ($i = 0; $i -lt 5; $i++) {
        $requests += @{
            httpMethod           = "GET"
            name                 = [guid]::NewGuid().ToString()
            requestHeaderDetails = @{
                commandName = "Microsoft_Azure_SentinelUS.ContenthubWorkspaceClient/get"
            }
            url                  = $workspaceUrl
        }
    }
    $body = @{ requests = $requests } | ConvertTo-Json -Depth 5

    try {
        $response = Invoke-RestMethod -Uri $batchUri -Method Post -Headers @{
            "Authorization" = "Bearer $AccessToken"
            "Content-Type"  = "application/json"
        } -Body $body
        Write-Host "[+] Workspace metadata refresh triggered successfully." -ForegroundColor Green
    }
    catch {
        Write-Host "[!] Failed to trigger workspace metadata refresh. Error: $_" -ForegroundColor Red
    }
}

Refresh-WorkspaceMetadata -SubscriptionId $subscriptionId -ResourceGroup $resourceGroup -WorkspaceName $workspaceName -AccessToken $accessToken

(Note: I have variables higher up in my script for subscription ID, resource group, workspace name, token, etc.) I've tried with and without mimicking the t and c variables; neither works.

So for me, installing Content hub packages for Sentinel is currently always:
1. Install through the script to get all 40 packages.
2. Visit the web page, filter for "Installed", select them all, and press "Install".
3. You now have all metadata and hyperlinking available in your Sentinel (such as hunting rules, analytic rules, workbooks, and playbook templates).

Has anyone else managed to get around this, or is it GUI-gated? Greatly appreciated.

AMPLS Restrictions Preventing Outbound API Calls in Logic Apps – Any Workarounds?
Hi everyone, I'm encountering an issue where Azure Monitor Private Link Scope (AMPLS) restrictions prevent Azure Logic Apps from making any outbound API calls, even to Microsoft-owned outbound IP addresses. One specific problem: when running KQL queries inside a Logic App, the Azure Monitor connector fails because it attempts to reach Microsoft outbound IPs, which are blocked by the AMPLS restrictions. Since this happens within Logic Apps itself, I don't have direct control over these outbound calls. Has anyone found a workaround that allows Logic Apps to function correctly while keeping AMPLS in place? Would Private Endpoints, VNET integration, or any other configuration help resolve this? Any insights or solutions would be greatly appreciated!

Juniper SRX 340 logs not read by rsyslog
I have configured Juniper SRX 340 Junos logs to be forwarded to a centralized syslog server before reaching Microsoft Sentinel. I can see the Juniper logs arriving on the syslog server in a TCPDUMP, but the same logs are not read by rsyslog. The same syslog server also receives logs from Cisco ASA, and rsyslog reads the ASA logs with no issues and forwards them to Sentinel through the AMA agent. I don't have any filters applied in rsyslog.conf and I'm capturing everything (*.*, all syslog facilities and severities) to a log file, but the Juniper logs are still not recognized by rsyslog. Please help with resolving this issue.

Bug in stand-alone MS Sentinel MITRE tactics
I set up a new analytics rule with multiple tactic/technique combinations selected. When an incident is created from that rule, only one of the tactics/techniques actually shows up in the stand-alone MS Sentinel UI as well as in the SecurityIncident table. It isn't even the first one I selected; it is the last one. I double-checked the analytics rule, and all the tactics/techniques are selected. If I look at the incident using the MS Sentinel REST API, it shows that all the tactics/techniques are there, as does the M365 portal (I have my MS Sentinel instance linked). Heck, even the Graph query shows them all (after expanding the incident to show the alerts as well). Has anyone noticed this recently? Is it a bug, or another new "feature"?

Can we deploy Bicep through Sentinel repo
Hi there, I'm new here, but 😅... The problem statement: deploying and managing Sentinel infrastructure through a git repository. I looked into the Sentinel Repositories feature, which is still in preview, with the added limitations of not being able to deploy watchlists or custom Log Analytics functions (custom parsers). There is also the limitation of deploying only ARM content. My guess would be that the product folks at MSFT are working on this 😋

My hypothesized options (just started the R&D as of writing this) would be to:

1. Fully go above and beyond with Bicep: create Bicep deployment files for both the rules and their dependencies, like LAW functions, watchlists, and the whole nine yards. This needs pipelines written for the deployment, and the CI/CD would also need extra work to implement.
2. Hit that sweet spot: deploy the currently supported resources using the Sentinel repo and write a pipeline to deploy the watchlists using Bicep. But I'm not sure this will be a relevant solution for clients, when the whole shtick is that we are updating now so we don't have to later.
3. Go back to the dark ages: stick to the currently supported Sentinel content through ARM and the repo, and deploy the watchlists and dependencies using the GUI 🙃

I will soon confirm the first two methods, but it may take some time. As you know, I may or may not be new to Sentinel... or DevOps... but I wanted to kick off the conversation to see how close to being utterly wrong I am. 😎 Thanks, mal_sec

Why maximum supported DataFlow count is 10 in DCR?
Is there a technical reason why a DCR supports a maximum of 10 data flows? There are already 10 ASIM tables, so combining standard tables with ASIM tables in one DCR is currently not possible, which complicates the process. Is that also the reason why the designated ASIM table count is currently 10? :)

Using Playbook_ARM_Template_Generator
Hi, I'm trying to use the Playbook_ARM_Template_Generator on a playbook where a user-assigned managed identity is used for the connections. The generator doesn't seem to strip this out, and the deployment then fails. Has anyone had any success with this? Many thanks, Tim

Sentinel IP for WEST EUROPE
Hi. I have an issue where I have Sentinel and need the data connector set up to access GitHub. If my GitHub org has an IP allow list enabled, this does not work. So I need to find the IPs the connector calls out from in Azure/Sentinel when hitting the GitHub service, so I can whitelist them. The published IP scopes for Sentinel are quite extensive, and it can't be that I need to whitelist every single Azure Monitor/Sentinel IP just to cover the ones Sentinel uses to talk to one API. How can I find the IPs that are actually needed? Or is there another way to get audit logs from GitHub when IP restrictions are enabled on the GitHub organization (in a GitHub Enterprise Cloud setup)?

Is it possible to set up this playbook for a specific rule incident alarm?
I was wondering if a playbook can be set up specifically for the rule below.

Rule name: New Azure Sentinel incident - Authentication Attempt from New Country

The idea: when the alarm occurs, read the UserPrincipalName and IPAddress values, identify each user's email address from the UserPrincipalName, and automatically send a mail to that user, inserting the recipient and IP value into a specified mail template.

Microsoft Power BI connector for Microsoft Sentinel
Since the Microsoft Power BI connector for Microsoft Sentinel currently does not support data collection rules (DCRs), how can we transform or filter the data and monitor the logs? Is there any documentation available on this?

RestApiPoller Paging Question
Hi, a RestApiPoller paging question from setting up a new codeless connector (CCP) against an API. I'm currently polling this API with an Azure Function and would like to cut it over to CCP. The API supports iterating through pages via pageNumber and pageSize parameters; for example, I can query pageNumber=1, pageNumber=2, and so forth. The API returns a pageCount value as part of a successful response; there is no next page or next link in the response. I can't see anything in the NextPageToken section of the documentation on how to handle this. Any suggestions?

The API is called by sending a POST with the following body:

{
  "interval": "",
  "pageNumber": 0,
  "pageSize": 0
}

A successful response looks like:

{
  "data": [],
  "pageSize": 0,
  "pageNumber": 0,
  "total": 0,
  "pageCount": 0
}

Workspace Manager - Importing analytics to parent for children
Greetings, I have a central workspace manager Sentinel workspace (no data is ingested). We also have Sentinel workspaces that have data connectors, ingest data, and are monitored by a SOC. We would like to save analytics to the central workspace and deploy them to the child workspaces. However, we cannot save a rule in the central workspace when the table it queries does not exist there. For example, I have an Okta analytic in a child workspace that queries the Okta_CL table and some of its fields. I exported it from the child and want to import it into the parent workspace so I can distribute it to other children using Workspace Manager, but I get an error because the Okta_CL table and its fields do not exist in the parent. Does anyone have ideas for working around this, to "force" the analytic to be present in the parent tenant? The child tenants CANNOT be linked in Workspace Manager.

EDIT - Example error below.
Status Message: Error in EntityMappings: The given column 'column_name' does not exist. (Code:BadRequest)

Regards

Sentinel - Analytic template - MFA Rejected by User
Hi, we are having a few issues with the Sentinel analytics rule template "MFA Rejected by User" (version 2.0.3): https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Microsoft%20Entra%20ID/Analytic%20Rules/MFARejectedbyUser.yaml

Over the last 30 days this analytics rule has generated 98 incidents, all of which are false positives. The rule looks at Entra ID SigninLogs for result type 500121 with one or more of the following additional details: "MFA denied; user declined the authentication" or "fraud". It maps UEBA identity information, then joins the BehaviorAnalytics data summarized by IP address. It's the summarizing of the IP address data that has me questioning the code. When we get an event in the sign-in logs, it also generates an event in the UEBA BehaviorAnalytics table along with an IP investigation score. If there are multiple events within the rule's query period, the summarize effectively sums the IP investigation scores, which can produce a high value that breaches the threshold. The default threshold is 20, but I have seen summed IP investigation scores between 60 and 100+, while the individual MFA rejection event has a score of only 3 or 4.

Can anyone who is an expert with UEBA and KQL tell me if the original code looks OK? Would it be better served by the following code?
let riskScoreCutoff = 20; //Adjust this based on volume of results
SigninLogs
| where ResultType == 500121
| extend additionalDetails_ = tostring(Status.additionalDetails)
| extend UserPrincipalName = tolower(UserPrincipalName)
| where additionalDetails_ =~ "MFA denied; user declined the authentication" or additionalDetails_ has "fraud"
| summarize StartTime = min(TimeGenerated), EndTime = max(TimeGenerated), UserId = any(UserId), AADTenantId = any(AADTenantId), DeviceName = any(DeviceDetail.displayName), IsManaged = any(DeviceDetail.isManaged), OS = any(DeviceDetail.operatingSystem) by UserPrincipalName, IPAddress, AppDisplayName
| extend Name = tostring(split(UserPrincipalName, '@', 0)[0]), UPNSuffix = tostring(split(UserPrincipalName, '@', 1)[0])
| join kind=leftouter (
    IdentityInfo
    | summarize LatestReportTime = arg_max(TimeGenerated, *) by AccountUPN
    | project AccountUPN, Tags, JobTitle, GroupMembership, AssignedRoles, UserType, IsAccountEnabled
    | summarize Tags = make_set(Tags, 1000), GroupMembership = make_set(GroupMembership, 1000), AssignedRoles = make_set(AssignedRoles, 1000), UserType = make_set(UserType, 1000), UserAccountControl = make_set(UserType, 1000) by AccountUPN
    | extend UserPrincipalName = tolower(AccountUPN)
) on UserPrincipalName
| join kind=leftouter (
    BehaviorAnalytics
    | where ActivityType in ("FailedLogOn", "LogOn")
    | where isnotempty(SourceIPAddress)
    | project UsersInsights, DevicesInsights, ActivityInsights, InvestigationPriority, SourceIPAddress
    | project-rename IPAddress = SourceIPAddress
    | summarize UsersInsights = make_set(UsersInsights, 1000), DevicesInsights = make_set(DevicesInsights, 1000)
        //IPInvestigationPriority = tostring(InvestigationPriority)
        by IPAddress, IPInvestigationPriority = InvestigationPriority
) on IPAddress
| extend UEBARiskScore = IPInvestigationPriority
| where UEBARiskScore > riskScoreCutoff
| sort by UEBARiskScore desc

oproblemas
First of all, thanks for the help. I'm testing an attack/detection for an Azure Activity rule and I have several questions. The attack is registering a new ADFS server. According to the rule, when I register it, the event should appear in the AzureActivity table. Here is the rule:

AzureActivity
| where CategoryValue =~ 'Administrative'
| where ResourceProviderValue =~ 'Microsoft.ADHybridHealthService'
| where _ResourceId has 'AdFederationService'
| where OperationNameValue =~ 'Microsoft.ADHybridHealthService/services/servicemembers/action'
| extend claimsJson = parse_json(Claims)
| extend AppId = tostring(claimsJson.appid), AccountName = tostring(claimsJson.name), Name = tostring(split(Caller,'@',0)[0]), UPNSuffix = tostring(split(Caller,'@',1)[0])
| project-away claimsJson

But AzureActivity doesn't have any log of this type. (I originally attached an image where the event DOES appear; if I search via the MS Graph API, these logs do show up.) My conclusion is that the directory activity log is not being saved in AzureActivity, but I'm confused because the rule refers to that table. The rule is 88f453ff-7b9e-45bb-8c12-4058ca5e44ee (Microsoft Entra ID Hybrid Health AD FS New Server). Can you help me with this case? Thanks!!

Ingestion of AWS CloudWatch data to Microsoft Sentinel using S3 connector
Hello guys, I hope you are all doing well. I already posted this as a question, but I wanted to start a discussion, since perhaps some of you have had better experience with this.

I want to send CloudWatch logs to an S3 bucket using a Lambda function and then ingest those logs into Microsoft Sentinel, per the Microsoft documentation:

Ingest CloudWatch logs to Microsoft Sentinel - create a Lambda function to send CloudWatch events to S3 bucket | Microsoft Learn
Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data | Microsoft Learn

There is a way to do this, BUT: the first link is from last year, and when I try to ingest logs the way it describes, there is always an error: "Unable to import module 'lambda_function': No module named 'pandas'". Also, as I understand it, the Lambda Python script exports logs for a time range you specify; I want the logs exported every day, every few minutes, and synchronized into Microsoft Sentinel. (The Lambda function .py script was run on Python 3.9 as mentioned in the Microsoft documentation, and all of the resources used were from the GitHub solution mentioned there.)

When I ran the provided automation script, the S3 bucket, IAM role, and SQS queue were created in AWS, which is fine, but even then the connector stays grey without any changes. I even tried changing the IAM role in AWS by adding Lambda permissions and using it for Lambda functions I found on the internet, and I created a CloudWatch EventBridge rule for it. Although I can see some .gz data ingested into the S3 bucket, no data is sent to Microsoft Sentinel.

So, is there anyone here who can describe the full process needed to ingest logs from CloudWatch to Sentinel successfully, or anyone who has experience with this process? What are the things I need to take care of, e.g. log ingestion volume (to be cost effective), etc.?
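A side note on the pandas import error: the core transformation the Lambda performs (re-writing CloudWatch log events as gzipped, newline-delimited messages for the S3 bucket) doesn't strictly require pandas. Below is a minimal, pandas-free sketch of that step using only the Python standard library. The function name and the assumed input shape ({"timestamp": ..., "message": ...} records, as delivered by subscription filters) are my assumptions, not the code from the Microsoft repo; the upload itself would then be a boto3 put_object call:

```python
import gzip
import io

def events_to_gzip_lines(events):
    """Serialize CloudWatch-style log events as gzipped, newline-delimited text.

    `events` is assumed to be a list of dicts with a "message" key.
    Returns bytes suitable for s3.put_object(Body=...).
    """
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for event in events:
            # One message per line; normalize trailing newlines
            gz.write((event["message"].rstrip("\n") + "\n").encode("utf-8"))
    return buf.getvalue()

# Upload would then be (boto3 assumed available in the Lambda runtime):
# boto3.client("s3").put_object(Bucket=bucket, Key=key,
#                               Body=events_to_gzip_lines(events))
```

If the only reason the provided script pulls in pandas/numpy is this reshaping step, replacing it with something like the above sidesteps the Lambda packaging problem entirely.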
I want to mention that I am performing this in my testing environment. Since the PowerShell automation script can automatically create the necessary AWS resources, this is what I tried:

1. Downloaded the AWS CLI, ran aws configure, and provided the necessary keys with the default location for my resources.
2. Ran the automation script from PowerShell as the documentation describes and filled out all required fields.
   2.1 The automation script created:
   2.1.1 An S3 bucket with an access policy: allow the IAM role to read the S3 bucket and s3:GetObject from it; allow CloudWatch to upload objects to the bucket with s3:PutObject; allow the AWS CloudWatch ACL check against the bucket.
   2.1.2 A notification event for the S3 bucket to send all logs from the specified bucket to SQS for objects with the .gz suffix (I later edited this manually and added all event types, to make sure events are sent).
   2.1.3 An SQS queue with an access policy allowing the S3 bucket to SendMessage to the SQS service.
   2.1.4 An IAM user with the Sentinel workspace ID and Sentinel role ID.

Since this script does not create the Lambda function needed to send logs with CloudWatch, I created those resources manually:

1.1 Added IAM role assignments for the permission policies: S3FullAccess, AWSLambdaExecute, CloudWatchFullAccess, CloudWatchLogsFullAccess (and later also CloudWatchFullAccessV2 and S3ObjectLambdaExecutionRolePolicy, to try them out).
1.2 Added lambda.amazonaws.com to the trust relationship policy so I can use this role for Lambda execution.
2. Created a CloudWatch log group and log stream, with a log group subscription filter for the Lambda function.
3. Created the Lambda function per the Microsoft documentation, trying the newest article: https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enhance-the-ingestion-of-aws-cloudwatch-logs-into-microsoft/ba-p/4100565 (chose the Lambda Python 3.12 runtime and used the existing role created above). I took CloudWatchLambdaFunction_V2.py, and there is an issue with the pandas module. I managed to work around that using this write-up: https://medium.com/@shandilya90apoorva/aws-cloud-pipeline-step-by-step-guide-241aaf059918 but even then I get this error:

Response
{
  "errorMessage": "Unable to import module 'lambda_function': Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there.",
  "errorType": "Runtime.ImportModuleError",
  "requestId": "",
  "stackTrace": []
}

Anyway, this is what I tried, and I eventually end up at the same error with the Lambda function provided by Microsoft.

New Survey: Your Input for the Microsoft Sentinel Ecosystem
Survey link: https://forms.office.com/r/Yy7WWFGyeD

Solutions and integrations in the Microsoft Sentinel ecosystem, such as those available in Content hub, are pivotal in bolstering the security coverage of organizations. As our customers increasingly integrate Microsoft Sentinel with Microsoft Defender XDR by enabling our unified SOC platform, the importance of this ecosystem only increases. In this brief survey, we seek your suggestions for improving Microsoft Sentinel's ecosystem. Whether it's a feature request, an idea for a new solution, or an enhancement to an existing one, we welcome your feedback. Feel free to submit multiple responses if you have multiple suggestions. Your insights will help us prioritize the features that matter most to you. Thank you for your contributions!

The Microsoft SIEM & XDR Team
Microsoft respects your privacy. Review our online Privacy Statement here: https://privacy.microsoft.com/en-us/privacystatement

Working around the "Analytics rule start time" for mass deploying using Workspace Manager
Hi! What are your thoughts on, or experiences with, working around the "The Analytics rule start time must be between 10 minutes and 30 days from now" constraint when mass deploying with Workspace Manager? Say I have 100 analytics rules that I want to deploy to current customers using Workspace Manager. That goes fine, but when a new customer arrives a month later, I have to redo the start time of every analytics rule that hasn't been changed in the last month. Doing this manually is just not an option. One possible solution I can see is to use Repositories at the same time, with a script to mass-update the start time of all the rules I need, then sync back to the analytics rules and deploy again to the new customer.

Get entities for every alert that Sentinel Incident has with the REST API
Hi everyone, I want to follow up on this discussion: https://techcommunity.microsoft.com/t5/microsoft-sentinel/get-entities-for-a-sentinel-incidient-by-api/m-p/1422643

We are using the "expansionId" recommended in that post to fetch entities for specific alerts, since, per the documentation, the Sentinel Incidents API returns a "summed" list of entities for an incident (all entities from all alerts that are part of the same incident). This is the expansion ID we use for alert-related entities: "98b974fd-cc64-48b8-9bd0-3a209f5b944b". I wanted to check: are there any updates regarding this expansionId option? How safe is it to keep using expansion IDs, and the alert-entities one in particular? Also, is there now a better way to fetch entities per alert in an incident via the Sentinel REST API? Thanks in advance!

Slack slackbot messages using interactivity for Microsoft Sentinel incident actions
Hi, I'm wondering if anyone has managed to integrate Microsoft Sentinel incidents with Slack to send Slackbot messages using 'interactivity', similar to the Sentinel/MS Teams adaptive card feature, where you get an adaptive card in Teams with action buttons such as 'Change Severity', 'Change Status', 'Assign Owner', and so on. The closest I have found is this GitHub repo, which uses a webhook: https://github.com/Azure/Azure-Sentinel/blob/master/Playbooks/Send-Slack-Message-Webhook/incident-trigger/images/SlackMessage.png I have tried this, but to no avail. Any insights would be appreciated.

How to copy a built-in Data connector in Global region to China region?
A lot of Sentinel solutions and data connectors are not available in China, for example the Dynamics 365 connector. Is it possible to get the source code of a built-in data connector (like the Dynamics 365 connector) and create a custom data connector in China?
Events
Recent Blogs
- This guide will walk you through the steps required to integrate Fluent Bit with Microsoft Sentinel. Beware that in this article, we assume you already have a Sentinel workspace, a Data Collection En... (Feb 14, 2025)
- Microsoft Sentinel just rolled out a powerful new public preview feature: Ingestion Rules. This feature lets you fine-tune your threat intelligence (TI) feeds before they are ingested to Microsoft Se... (Feb 14, 2025)