alerts
Are you getting the most out of your Azure Log Analytics Workspace (LAW) investment?
Using a LAW is a great way to consolidate various types of data (performance, events, security, etc.) and signals from multiple sources. That's the easy part - mining this data for actionable insights is often the real challenge. One way we did this was by surfacing events related to disks across our physical server estate. We were already sending event data to our LAW; it was just a matter of parsing it with KQL and adding it to a Power BI dashboard for additional visibility. The snippet from the Power BI dashboard shows when the alert was first triggered and when the disk was eventually replaced. Here's the KQL query we came up with:

let start_time = ago(30d);
let end_time = now();
Event
| where TimeGenerated > start_time and TimeGenerated < end_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
| project Computer, Drive, Serial, Status, TimeGenerated, EventLevelName

You can of course also set up alerting on this with Azure Monitor alerts. I hope this example helps you get more value from your LAW.
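If you only want the most recent status per drive (for example, to feed a "current state" visual) rather than every status-change event, the same query can be extended slightly. This is just a minimal sketch, assuming the same Event schema as above:

let start_time = ago(30d);
Event
| where TimeGenerated > start_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
// keep only the latest reported status for each physical drive
| summarize arg_max(TimeGenerated, Status, EventLevelName) by Computer, Drive, Serial
| order by TimeGenerated desc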
Effective Cloud Governance: Leveraging Azure Activity Logs with Power BI

We all generally accept that governance in the cloud is a continuous journey, not a destination. There's no one-size-fits-all solution, and depending on the size of your Azure cloud estate, staying on top of things can be challenging even at the best of times. One way of keeping your finger on the pulse is to closely monitor your Azure Activity Log. This log contains a wealth of information, ranging from noise to interesting to actionable data. One could set up alerts for delete and update signals; however, that can result in a flood of notifications. To address this challenge, you could develop a Power BI report, similar to this one, that pulls in the Azure Activity Log and allows you to group and summarize data by various dimensions. You still need someone to review the report regularly; however, consuming the data this way makes it a whole lot easier. This by no means replaces the need for setting up alerts for key signals, but it does give you a great view of what's happened in your environment. If you're interested, this is the KQL query I'm using in Power BI:

let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated > start_time and TimeGenerated < end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
| project TimeGenerated, Properties_d.resource, ResourceGroup, OperationNameValue, Authorization_d.scope, Authorization_d.action, Caller, CallerIpAddress, ActivityStatusValue
| order by TimeGenerated asc
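If you prefer to do some of the grouping before the data reaches Power BI, a summarized variant of the same query can help. This is only a minimal sketch, assuming the same AzureActivity columns used above:

let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated > start_time and TimeGenerated < end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
// count write/delete operations per caller and resource group to highlight unusually busy identities
| summarize Operations = count(), DistinctResources = dcount(tostring(Properties_d.resource)) by Caller, ResourceGroup, OperationNameValue
| order by Operations desc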
Azure alert on multiple subscriptions

Hi all, I am not sure if this is the right place, but here goes. I am working on creating a monitoring solution and am trying to create some dynamic alert rules. I need them to look at a lot of subscriptions, but when you choose the scope, you can only choose one subscription. So I have exported the template and added another subscription in the scopes section, but will it work? This is what the properties section looks like in the template; it looks at CPU usage over time:

"properties": {
    "description": "Dynamic warning on CPU usage",
    "severity": 2,
    "enabled": true,
    "scopes": [
        "/subscriptions/Sub1",
        "/subscriptions/Sub2"
    ],
    "evaluationFrequency": "PT15M",
    "windowSize": "PT1H",
    "criteria": {
        "allOf": [
            {
                "alertSensitivity": "High",
                "failingPeriods": {
                    "numberOfEvaluationPeriods": 4,
                    "minFailingPeriodsToAlert": 4
                },
                "name": "Metric1",
                "metricNamespace": "microsoft.compute/virtualmachines",
                "metricName": "Percentage CPU",
                "operator": "GreaterOrLessThan",
                "timeAggregation": "Average",
                "criterionType": "DynamicThresholdCriterion"
            }
        ],
        "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria"
    },

Any input is more than welcome.

Regards, Jan L Dam
Best practice for monitoring Azure VMs using Monitor

Hey members, I am curious to know what approach you are following for setting up alerts in Azure Monitor, for Compute in particular. Once the VMs are connected to the LAW via a DCR, is it advisable to create alerts targeting the LA workspace, or should we target individual VMs? In other words, if heartbeat/Perf/Syslog data is all being ingested into the LAW, should alerts also be created targeting the LAW rather than individual resources? What is the best practice for setting up alerts? Thank you
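For reference, an alert targeting the workspace typically runs a log query against the LAW and fires per affected computer. A minimal sketch of such a query (just an illustration, assuming the Heartbeat table is being populated) might look like this:

// computers that have not sent a heartbeat in the last 10 minutes
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(10m)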
AKS Monitoring - Questions about Azure Monitor, Grafana and Prometheus

Hi, We have a Kubernetes platform on which we are developing our microservices. To start working on monitoring, we have some important questions. We want to monitor everything from the status of the AKS cluster nodes to the status of the pods (state, whether they failed to start, etc.). If we go to an AKS cluster under 'Monitoring - Insights' we see a lot of information, but we want to have dashboards and, most importantly, alerts. On the one hand, it is possible to create alerts and dashboards in the Azure portal itself. On the other hand, we see that the Microsoft documentation explains how to configure 'Azure Managed Grafana'. And finally, there is Prometheus, which in turn displays dashboards using Grafana. Our biggest question is: what do Grafana and Prometheus each contribute? Do we get more information with Prometheus than with Azure Monitor Insights? We see that Grafana already brings many pre-created dashboards for many parts of Azure, as well as pre-created alerts. Is it worth using 'Azure Managed Grafana', or, if you don't want to pay for the service, should we use Azure Monitor for everything? Thanks!!
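As a point of comparison, when Container insights is enabled, pod status already lands in the Log Analytics workspace and can be queried or alerted on with KQL. This is only a rough sketch, assuming the KubePodInventory table is being populated by Container insights:

// pods that are not currently Running or Succeeded, per cluster and namespace
KubePodInventory
| where TimeGenerated > ago(15m)
| summarize arg_max(TimeGenerated, PodStatus) by ClusterName, Namespace, Name
| where PodStatus !in ('Running', 'Succeeded')
| project ClusterName, Namespace, Name, PodStatus, TimeGenerated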
Log Analytics Alert Filtered Query

I made this query:

AzureDevOpsAuditing
| where ActorUPN != "Azure DevOps Service"
| where Area == "Release"
| summarize Count = count() by OperationName, bin(TimeGenerated, 1440min), ActorUPN, Details, IpAddress, ScopeDisplayName
| summarize sum(Count)
| where sum_Count > 0

And I made an alert that takes that query, evaluates the results over one day and sends an email. But when the filtered results come out, they don't show the other columns I'm looking for (ActorUPN, Details, ...). The query becomes:

AzureDevOpsAuditing
| where ActorUPN != "Azure DevOps Service"
| where Area == "Release"
| summarize Count = count() by OperationName, bin_at(TimeGenerated, 1440min, datetime(2022-08-09T20:09:14.0000000Z)), ActorUPN, Details, IpAddress, ScopeDisplayName
| summarize sum(Count)
| where sum_Count > 0
| extend TimeGenerated = column_ifexists('TimeGenerated', datetime(2022-08-08T20:09:14.0000000Z))
| summarize AggregatedValue = sum(sum_Count) by bin_at(TimeGenerated, 1440m, datetime(2022-08-09T20:09:14.0000000Z))

and it only shows TimeGenerated and AggregatedValue, nothing else.
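For what it's worth, the extra columns disappear because the second summarize (sum(Count)) collapses all rows into a single total, so only the aggregate survives. One way to keep the dimensions in the alert results is to carry them through the final summarize; this is only a sketch, assuming the same AzureDevOpsAuditing columns as above:

AzureDevOpsAuditing
| where ActorUPN != "Azure DevOps Service"
| where Area == "Release"
// keep the grouping columns so they survive into the alert results
| summarize AggregatedValue = count() by bin(TimeGenerated, 1d), OperationName, ActorUPN, Details, IpAddress, ScopeDisplayName
| where AggregatedValue > 0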
Activity Alerts (set up via https://security.microsoft.com/managealerts) are not being received

To test activity alerts, I modified 10 different files in my SharePoint tenant and, though I set the alerts to detect those specific modifications and email me for each one, I only received 2 out of 10 emails. The first two email notifications were received within 10 minutes but, after 8 hours, the other 8 alerts have not been received. I do not see the activity for any of the 10 modifications in the activity log either. The metadata in SharePoint confirms that I made the changes, but the alerts are not being triggered. What can be done to ensure that these activity alerts are triggered with consistency?
Alert Suppression

Hey there, I think what I'm looking for is alert suppression, but as it is available in Azure Monitor today it doesn't seem to do what I want. I have an event that shows up in the log and, once it gets started, it repeats a lot. What I want is to look for an event in the logs and send an alert when it first occurs. After that, I only want an alert every hour or every 4 hours or so. I've always thought of this as a form of suppression, but I don't see a way to do this. Thoughts? TIA ~DGM~
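One pattern that approximates this (purely a sketch, with a hypothetical filter standing in for the real event) is to set the alert rule's evaluation frequency and window to the desired suppression interval, for example 4 hours, so that at most one alert fires per window:

Event
| where TimeGenerated > ago(4h)
| where RenderedDescription contains 'the recurring event text'   // hypothetical filter for the event in question
| summarize FirstSeen = min(TimeGenerated), Occurrences = count() by Computer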
Filtering data from azure alerts in postman

I am using Postman to collect triggered alerts from Microsoft Azure, but I'm having trouble filtering the API results in Postman. Does anyone know how to filter the data so that I only keep the records where isSuppressed is false? 90% of all the data I am collecting from the API has isSuppressed set to true, but I am only interested in the cases where isSuppressed is false. I have added two example elements from the API below (without the keys). The URL I am using looks like this: https://management.azure.com/{scope}/providers/Microsoft.AlertsManagement/alerts?api-version=2019-03-01

{
    "properties": {
        "essentials": {
            "severity": "Sev0",
            "signalType": "Log",
            "alertState": "New",
            "monitorCondition": "Fired",
            "monitorService": "Log Analytics",
            "actionStatus": {
                "isSuppressed": true
            },
        }
    },
},
{
    "properties": {
        "essentials": {
            "severity": "Sev0",
            "signalType": "Log",
            "alertState": "New",
            "monitorCondition": "Fired",
            "monitorService": "Log Analytics",
            "actionStatus": {
                "isSuppressed": false
            },
        }
    },
},
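If the REST endpoint can't filter on that flag directly, one alternative (just a suggestion, not something taken from the API documentation above) is to query the same alerts through Azure Resource Graph, where the suppression flag can be filtered with KQL before anything reaches Postman:

// Azure Resource Graph query: only alerts that are not suppressed
alertsmanagementresources
| where type =~ 'microsoft.alertsmanagement/alerts'
| where properties.essentials.actionStatus.isSuppressed == false
| project name, severity = tostring(properties.essentials.severity), alertState = tostring(properties.essentials.alertState), monitorService = tostring(properties.essentials.monitorService)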