Workbooks
Azure Monitor AMA Migration helper workbook question for subscriptions with AKS clusters
Hi, in an ongoing project I've been helping a customer migrate their agents from the Microsoft Monitoring Agent (MMA) to the new Azure Monitor Agent (AMA), which consolidates the previous Log Analytics agent, Telegraf agent, and diagnostics extension (sending to Azure Event Hubs, Storage, etc.) into a single installation, and then configure Data Collection Rules (DCRs) to collect data with the new agent. One of the first steps is, of course, to identify which resources are affected and need to be migrated. There are multiple tools for this, such as this PowerShell script as well as the built-in AMA Migration workbook in Azure Monitor, which is what I used as the initial option at the start of the migration. When run, the workbook lists all VMs, VM scale sets, etc. in the subscription that do not yet have the AMA installed (e.g., through an Azure Policy or automatically via a configured DCR), or that still have the old MMA installed and thus need to be migrated.

Azure Kubernetes Service (AKS) is a rather specific hosting service, almost a mini-ecosystem of its own with regard to networking, storage, scaling, etc., and it exposes the underlying infrastructure that makes up the cluster behind its managed control plane, giving IT administrators, power users, and others fine-grained control over those resources. In most typical use cases, however, the underlying AKS infrastructure resources should not be modified, as that could break configured SLOs. The built-in AMA migration workbook includes by default every resource that does not already have the AMA installed, regardless of resource type, including the underlying cluster infrastructure resources AKS creates in the "MC_" resource group(s), such as the virtual machine scale sets that handle the creation and scaling of the cluster's nodes and node pools.

Perhaps the underlying AKS infrastructure resources could be excluded from the workbook's results by default, or, when non-migrated AKS infrastructure resources are found, the results could be accompanied by text describing remediation steps for AMA migration of AKS cluster infrastructure. Has anyone encountered the same issue, and if so, how did you work around it? It would be great to hear some input and whether there are readily available solutions/workarounds out there (if not, I've been thinking of proposing a PR with a filter and exclusion added to the default workbook, e.g. here: https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook). Thanks!
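The exclusion being proposed is straightforward to express in a query. Below is a minimal Azure Resource Graph sketch of the filter, assuming the clusters use the default "MC_<resourceGroup>_<clusterName>_<region>" naming for their node resource groups (a custom node resource group name would need a different filter, e.g. on the resource group's managedBy property):

// Sketch: list VMs and VM scale sets while skipping resources in
// AKS-managed node resource groups. Assumes the default "MC_" naming.
resources
| where type in~ ("microsoft.compute/virtualmachines", "microsoft.compute/virtualmachinescalesets")
| where resourceGroup !startswith "MC_"   // KQL startswith is case-insensitive
| project name, type, resourceGroup, subscriptionId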
Adding Autoscaling information into a Workbook

Hello, using az aks show --name .... --resource-group ... I can get various information about the agentPoolProfiles of a given AKS cluster. From this information I am especially interested in the maximum number of nodes configured for autoscaling. Is it possible to get this information via an Azure Resource Graph or Azure Resource Manager REST call? What would be the path to this information? Thank you for your help. Cheers, Markus
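One way this could be surfaced, sketched as an Azure Resource Graph query, under the assumption that the pool settings sit in properties.agentPoolProfiles of the managed cluster resource (the same path an ARM REST GET on the Microsoft.ContainerService/managedClusters resource returns):

// Sketch: per-node-pool autoscaler limits for AKS clusters.
resources
| where type =~ "microsoft.containerservice/managedclusters"
| mv-expand pool = properties.agentPoolProfiles
| project clusterName = name, resourceGroup,
          poolName = tostring(pool.name),
          autoscaleEnabled = tobool(pool.enableAutoScaling),
          minCount = toint(pool.minCount),
          maxCount = toint(pool.maxCount)   // the autoscaling maximum asked about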
Column as link adds timestamp to query

I have a workbook that returns a table with requests data. In the table, under Column settings, I have the "sessionId" column set as a link, and in the link settings I selected "Session timeline" (screen 1). But when I click the link, it automatically adds a timestamp of the last 24 hours to the query (screen 2). Can you please help me with how to remove that timestamp parameter? With it, I can only see requests/pageviews that happened in the last 24 hours. Thank you.
Azure Monitor log: How to add parameters to a function

I want to create my first function in Logs (Azure Monitor). The Microsoft tutorial is pretty simple, but I am having trouble adding parameters to my own function. You can see in the tutorial's screenshot how storing a function with parameters should look, but that parameters section is missing for me when I want to save my own function. I should mention that the scope I selected targets Application Insights (not classic, but using a Log Analytics workspace). If I target something else, I can create functions with parameters, but I need Application Insights. Is that not supported? Or... what am I doing wrong?
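While the save dialog is missing the parameters section, the same behavior can be reproduced inline with a let statement that declares a lambda with typed parameters. A minimal sketch against the Application Insights requests table (the function name and thresholds are invented for illustration):

// Sketch: a parameterized function declared inline; a saved function
// with parameters is invoked the same way.
let RequestsSlowerThan = (minDurationMs: real, lookback: timespan) {
    requests
    | where timestamp > ago(lookback)
    | where duration > minDurationMs
};
RequestsSlowerThan(1000.0, 1h)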
Unable to set default Time Range in Azure dashboard (All Users) created from Azure Monitor Workbooks

I have used Azure Monitor workbooks to create dashboards for alert history and for viewing active alerts. I also added a Time Picker parameter to set the time range from the dashboard, and it works without any issues. But it seems that users who have read-only rights to the dashboard are unable to set the time range, due to those read-only permissions. Has anyone had this experience? Can we achieve this without granting dashboard users write access? Or is it possible to set a default time range on the Azure dashboard (maybe from the workbook, not in the query, and then pin the tiles to the dashboard), so that it applies to all read-only users, while those with write access can still choose the time range? Your input on this is really helpful. Thanks in advance.
Need to join the Azure diagnostics and Resource specific tables

Dear Team, I am looking to get the results of both tables (Azure diagnostics and resource specific) in a single query. We have configured both options on the Log Analytics workspace. The queries I have today are against AzureDiagnostics:

1. To get failed backup jobs:

let Events = AzureDiagnostics
| where Category == "AzureBackupReport";
Events
| extend JobOperationSubType_s = columnifexists("JobOperationSubType_s", "")
| where OperationName == "Job" and JobOperation_s == "Backup" and JobStatus_s == "Failed" and JobOperationSubType_s != "Log" and JobOperationSubType_s != "Recovery point_Log"
| distinct JobUniqueId_g, BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s
| project BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s
| join kind=leftouter (
    Events
    | where OperationName == "BackupItem"
    | distinct BackupItemUniqueId_s, BackupItemFriendlyName_s
    | project BackupItemUniqueId_s, BackupItemFriendlyName_s
) on BackupItemUniqueId_s
| project BackupItemFriendlyName_s, BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s
| extend Vault = Resource
| extend dt = todatetime(JobStartDateTime_s)
| summarize count() by BackupItemFriendlyName_s, JobStatus_s, JobFailureCode_s, Vault, BackupItemUniqueId_s, NewDateTime=dt, JobStartDateTime_s

2. Backup history for selected VMs:

let Events = AzureDiagnostics
| where TimeGenerated > ago(30d)
| where Category == "AzureBackupReport";
Events
| extend JobOperationSubType_s = columnifexists("JobOperationSubType_s", "")
| where OperationName == "Job" and JobOperation_s == "Backup" and JobOperationSubType_s != "Log" and JobOperationSubType_s != "Recovery point_Log"
| distinct JobUniqueId_g, BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s
| project BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s
| join kind=leftouter (
    Events
    | where OperationName == "BackupItem"
    | distinct BackupItemUniqueId_s, BackupItemFriendlyName_s
    | project BackupItemUniqueId_s, BackupItemFriendlyName_s
) on BackupItemUniqueId_s
| project BackupItemFriendlyName_s, BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s
| extend Vault = Resource
| extend dt = todatetime(JobStartDateTime_s)
| where BackupItemFriendlyName_s in ("GZ-xxxxxxx", "GX-xxxxxxxxx")
| summarize count() by BackupItemFriendlyName_s, JobStatus_s, JobFailureCode_s, Vault, NewDateTime=dt, JobStartDateTime_s

3. Restore history for selected VMs:

let Events = AzureDiagnostics
| where TimeGenerated > ago(300d)
| where Category == "AzureBackupReport";
Events
| extend JobOperationSubType_s = columnifexists("JobOperationSubType_s", "")
| where OperationName == "Job"
| where JobOperation_s == "Restore" or JobOperation_s == "Recovery"
| distinct JobUniqueId_g, BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s, JobOperation_s
| project BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s, JobOperation_s
| join kind=leftouter (
    Events
    | where OperationName == "BackupItem"
    | distinct BackupItemUniqueId_s, BackupItemFriendlyName_s
    | project BackupItemUniqueId_s, BackupItemFriendlyName_s
) on BackupItemUniqueId_s
| project BackupItemFriendlyName_s, BackupItemUniqueId_s, JobStatus_s, Resource, JobFailureCode_s, JobStartDateTime_s, JobOperation_s
| extend Vault = Resource
| extend dt = todatetime(JobStartDateTime_s)
| where BackupItemFriendlyName_s in ("GZ-xxxxxxxxx", "GX-xxxxxxxxxx")
| summarize count() by BackupItemFriendlyName_s, JobStatus_s, JobFailureCode_s, Vault, NewDateTime=dt, JobStartDateTime_s, JobOperation_s
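One pattern for combining the two sources is to normalize each side into a shared column set and union them. A sketch, assuming the resource-specific Azure Backup jobs table is AddonAzureBackupJobs and that its columns include JobStatus, JobFailureCode, and JobStartDateTime (resource-specific columns are typed and drop the _s/_g suffixes, so verify the names against your workspace schema):

// Sketch: union legacy (AzureDiagnostics) and resource-specific
// (AddonAzureBackupJobs, assumed schema) backup job rows.
let LegacyJobs = AzureDiagnostics
    | where Category == "AzureBackupReport" and OperationName == "Job"
    | project JobStatus = JobStatus_s,
              FailureCode = JobFailureCode_s,
              Vault = Resource,
              StartTime = todatetime(JobStartDateTime_s);
let ResourceSpecificJobs = AddonAzureBackupJobs
    | project JobStatus,
              FailureCode = JobFailureCode,
              Vault = _ResourceId,
              StartTime = JobStartDateTime;
union LegacyJobs, ResourceSpecificJobs
| summarize Jobs = count() by JobStatus, FailureCode, Vault, bin(StartTime, 1d)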
Log Analytics or Resource Graph Explorer Queries

Hello all. I am looking to create some queries so that I can build a couple of workbooks that might be useful to me and my team, and I'm having some trouble finding the right path, so I'm curious if anyone has set up something similar. I would like a query in Log Analytics or Resource Graph Explorer that shows me A) disabled alert rules and B) current action rules that are set up for alert suppression. So far none of my searching has led me to anything useful. Thanks in advance!
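Since alert rules and action rules are ARM resources, Resource Graph Explorer can surface both. A sketch under assumed resource types and property names (worth verifying in your tenant, as the action-rules schema has changed over time):

// Sketch A: metric alerts and scheduled-query (log) alerts that are disabled.
resources
| where type in~ ("microsoft.insights/metricalerts", "microsoft.insights/scheduledqueryrules")
| where tostring(properties.enabled) =~ "false"
| project name, type, resourceGroup, subscriptionId

// Sketch B: action rules (alert processing rules) configured for suppression.
resources
| where type =~ "microsoft.alertsmanagement/actionrules"
| where tostring(properties.type) =~ "Suppression" or tostring(properties.actions) has "RemoveAllActionGroups"
| project name, resourceGroup, subscriptionId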
Hello, I have recently just finished setting up and sharing a dashboard I have created based on several workbooks I have created for some of my team members to quickly look at see the status on specific resources within Azure. These include the status of specific services that are not running, heartbeats missed within the last 5 minutes, free disk space, failed backups and missing critical updates. The dashboard itself is I would say a bit clunky and not ideal to look at and I would in no means say user-friendly but it does get the job done. I was curious if anyone has had any experience with setting something like this up and had anything they could share, if similar, to what I have done. I assume I am not the first and only to have done something like this. I am faced with a few challenges in this effort. I would prefer to do this natively within Azure and not use any third-party tools. My experience with KQL is limited, however I have no problem learning more. I also am working with several different subscriptions across lighthouse which means I am faced with only being able to query across 100 different LA workspaces at one time. I look forward to any recommendations or knowledge sharing the community might be able to provide! Thank you!2KViews0likes4Comments
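For the missed-heartbeat tile, a minimal single-workspace sketch (for the Lighthouse setup, the same query can be run from a resource scope or fanned out with explicit workspace() references, subject to the 100-workspace limit mentioned above):

// Sketch: computers whose last heartbeat is older than 5 minutes.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(5m)
| project Computer, LastHeartbeat, MinutesSilent = datetime_diff("minute", now(), LastHeartbeat)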