Azure Monitor managed service for Prometheus
Ingestion of Managed Prometheus metrics from a private AKS cluster using private link
This article provides end-to-end instructions for configuring Managed Prometheus to ingest data from your private Azure Kubernetes Service (AKS) cluster into an Azure Monitor workspace. Azure Private Link enables you to access Azure platform as a service (PaaS) resources from your virtual network by using private endpoints. An Azure Monitor Private Link Scope (AMPLS) connects a private endpoint to a set of Azure Monitor resources, defining the boundaries of your monitoring network. By using private endpoints for Managed Prometheus and your Azure Monitor workspace, you can allow clients on a virtual network (VNet) to securely ingest Prometheus metrics over a private link.

Conceptual overview

A private endpoint is a special network interface for an Azure service in your virtual network (VNet). When you create a private endpoint for your Azure Monitor workspace, it provides secure connectivity between clients on your VNet and your workspace. For more details, see Private Endpoint.

Azure Private Link enables you to securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. Azure Monitor uses a single private link connection called an Azure Monitor Private Link Scope (AMPLS), which enables each client in the virtual network to connect with all Azure Monitor resources (such as Log Analytics workspaces and Azure Monitor workspaces) instead of creating multiple private links. For more details, see Azure Monitor Private Link Scope (AMPLS).

To set up ingestion of Managed Prometheus metrics from a virtual network into an Azure Monitor workspace using private endpoints, follow these high-level steps:

1. Create an Azure Monitor Private Link Scope (AMPLS) and connect it to the Data Collection Endpoint of the Azure Monitor workspace.
2. Connect the AMPLS to a private endpoint that is set up in the virtual network of your private AKS cluster.

Prerequisites

A private AKS cluster with Managed Prometheus enabled. As part of enabling Managed Prometheus, an Azure Monitor workspace is also set up. For more information, see Enable Managed Prometheus in AKS.

1. Create an AMPLS for the Azure Monitor workspace

Metrics collected with Azure Managed Prometheus are ingested and stored in an Azure Monitor workspace, so you must make the workspace accessible over a private link. To do this, create an Azure Monitor Private Link Scope (AMPLS):

1. In the Azure portal, search for "Azure Monitor Private Link Scopes", and then click Create.
2. Enter the resource group and name, and select Private Only for Ingestion Access Mode.
3. Click Review + Create to create the AMPLS.

For more details on setting up an AMPLS, see Configure private link for Azure Monitor.

2. Connect the AMPLS to the Data Collection Endpoint of the Azure Monitor workspace

Private links for Managed Prometheus data ingestion are configured on the Data Collection Endpoints (DCE) of the Azure Monitor workspace that stores the data. To identify the DCEs associated with your Azure Monitor workspace, select Data Collection Endpoints from your Azure Monitor workspace in the Azure portal.

1. In the Azure portal, search for the Azure Monitor workspace that was created when you enabled Managed Prometheus for your private AKS cluster. Note the Data Collection Endpoint name.
2. Still in the Azure portal, search for the AMPLS that you created in the previous step. On the AMPLS overview page, click Azure Monitor Resources, click Add, and then connect the DCE of the Azure Monitor workspace that you noted in the previous step.
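If you prefer scripting these two steps rather than clicking through the portal, a rough Azure CLI equivalent is sketched below. Resource and workspace names are placeholders, and the ingestion access mode flag may not exist in older CLI versions (in which case set Private Only in the portal as described above); treat this as a starting point, not the article's exact procedure.

```bash
# Create the AMPLS, locking ingestion to private access
# (flag assumed available in recent CLI versions; otherwise set it in the portal).
az monitor private-link-scope create \
  --resource-group my-rg \
  --name my-ampls \
  --ingestion-access-mode PrivateOnly

# Look up the Data Collection Endpoint associated with the Azure Monitor workspace.
DCE_ID=$(az monitor data-collection endpoint show \
  --resource-group my-rg \
  --name my-amw-dce \
  --query id -o tsv)

# Connect the DCE to the AMPLS as a scoped resource.
az monitor private-link-scope scoped-resource create \
  --resource-group my-rg \
  --scope-name my-ampls \
  --name amw-dce-connection \
  --linked-resource "$DCE_ID"
```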
2a. Configure DCEs

Note: Follow the steps in this section only if your AKS cluster is not in the same region as your Azure Monitor workspace. In that case, you need to configure the data collection rule of the Azure Monitor workspace to use a Data Collection Endpoint in the cluster's region. If your cluster is in the same region as your Azure Monitor workspace, skip this section and move to step 3.

1. Create a Data Collection Endpoint in the same region as the AKS cluster.
2. Go to your Azure Monitor workspace and click the data collection rule (DCR) on the Overview page. This DCR has the same name as your Azure Monitor workspace.
3. From the DCR overview page, click Resources -> + Add, and then select the AKS cluster.
4. Once the AKS cluster is added (you might need to refresh the page), select the AKS cluster and then Edit Data Collection Endpoint.
5. On the blade that opens, select the Data Collection Endpoint that you created in step 1 of this section. This DCE should be in the same region as the AKS cluster.

3. Connect the AMPLS to a private endpoint for the AKS cluster

A private endpoint is a special network interface for an Azure service in your virtual network (VNet). We will now create a private endpoint in the VNet of your private AKS cluster and connect it to the AMPLS for secure ingestion of metrics (a CLI sketch of this step appears at the end of this article).

1. In the Azure portal, search for the AMPLS that you created in the previous steps. On the AMPLS overview page, click Configure -> Private Endpoint connections, and then select + Private Endpoint.
2. Select the resource group, enter a name for the private endpoint, and then click Next.
3. In the Resource section, select Microsoft.Monitor/accounts as the Resource type, select the Azure Monitor workspace as the Resource, and then select prometheusMetrics. Click Next.
4. In the Virtual Network section, select the virtual network of your AKS cluster. You can find it in the portal under the AKS overview -> Settings -> Networking -> Virtual network integration.

4. Verify that metrics are ingested into the Azure Monitor workspace

To verify that Prometheus metrics from your private AKS cluster are being ingested into the Azure Monitor workspace:

1. In the Azure portal, search for the Azure Monitor workspace and go to Monitoring -> Metrics.
2. In Metrics Explorer, run a few queries and verify that they return data.

Next steps

See Use private endpoints for Managed Prometheus and Azure Monitor workspace for details on how to configure private link to query data from your Azure Monitor workspace using workbooks.
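For teams that script their infrastructure, a rough Azure CLI equivalent of step 3 is sketched below. The names are placeholders, and the sketch omits the private DNS zone configuration that the portal flow sets up for you, so treat it as an outline rather than the article's exact procedure.

```bash
# Resource ID of the Azure Monitor workspace (Microsoft.Monitor/accounts).
AMW_ID=$(az resource show \
  --resource-group my-rg \
  --name my-azure-monitor-workspace \
  --resource-type Microsoft.Monitor/accounts \
  --query id -o tsv)

# Create the private endpoint in the AKS cluster's VNet, targeting the
# prometheusMetrics sub-resource used for metrics ingestion.
az network private-endpoint create \
  --resource-group my-rg \
  --name amw-prometheus-pe \
  --vnet-name my-aks-vnet \
  --subnet my-aks-subnet \
  --private-connection-resource-id "$AMW_ID" \
  --group-id prometheusMetrics \
  --connection-name amw-prometheus-pe-conn
```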
Public Preview: The New AKS Monitoring Experience

We're excited to announce the public preview of our enhanced Monitoring experience for Azure Kubernetes Service (AKS). This redesign of the existing Insights experience brings comprehensive monitoring capabilities into a single, streamlined view, addressing some of the most common challenges users face when managing their AKS clusters. The new Monitoring experience provides both basic (free) insights and detailed insights (with Prometheus metrics and logging enabled), offering a unified, single-pane-of-glass experience. The basic experience is available to all AKS users with no configuration required.

A significant benefit of this new experience is in diagnosing pod deployment failures. In the past, identifying pending or failed pods could be a cumbersome process. With the new KPI Card for Pod Status, you can now quickly pinpoint and address these issues before they escalate, ensuring smoother deployments and reduced downtime.

Another key scenario where this enhanced view shines is investigating node resource issues. Understanding node readiness and capacity is crucial for efficient cluster management. The Node Readiness Status card, along with detailed CPU and memory usage metrics, provides clear insight into whether your nodes are fully prepared to host pods. This helps prevent resource bottlenecks and optimizes the overall performance of your cluster.

Ensuring cluster health during a scaling operation has never been easier. The new Summary Card for Events helps you monitor Kubernetes warning events and pending pod states, making it simple to track and respond to spikes. This ensures your cluster scales smoothly and efficiently, without unexpected hitches that could disrupt your services.

Additionally, troubleshooting latency and connectivity issues in AKS is now more straightforward. With enhanced insights into node saturation metrics, including VMSS OS disk bandwidth and IOPS consumption, you can quickly identify and resolve issues causing latency. Detailed ETCD monitoring and load balancer metrics, such as % SNAT Port Usage, provide critical data to maintain optimal cluster performance, keeping your applications running smoothly.

The following comparison table highlights which data comes out of the box for free for all AKS users. When you upgrade (see the CLI sketch at the end of this article), you get the same data collected in the newer Prometheus format, as well as access to richer metrics and logs for your core troubleshooting scenarios.

Basic tier metrics | Additional metrics in upgraded experience
Alert summary card | Historical Kubernetes events (30 days)
Events summary card | Warning events by reason
Pod status KPI card | Namespace CPU and memory %
Node status KPI card | Container logs by volume
Node CPU and memory % | Top five controllers by logs volume
VMSS OS disk bandwidth consumed % (max) | Packets dropped I/O
VMSS OS disk IOPS consumed % (max) | Load balancer SNAT port usage

We're committed to providing you with the tools you need to manage and optimize your AKS clusters effectively. Explore the new Monitoring experience in the Azure portal today and experience the future of AKS monitoring!
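Moving from the basic tier to the upgraded experience requires Managed Prometheus metrics and Container insights logging on the cluster. Below is a hedged CLI sketch; the resource names and workspace IDs are placeholders, and the same configuration can be done from the cluster's Monitor settings in the portal.

```bash
# Enable Managed Prometheus metrics collection on an existing AKS cluster.
az aks update \
  --resource-group my-rg \
  --name my-aks-cluster \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id <azure-monitor-workspace-id>

# Enable Container insights (log collection) backed by a Log Analytics workspace.
az aks enable-addons \
  --resource-group my-rg \
  --name my-aks-cluster \
  --addons monitoring \
  --workspace-resource-id <log-analytics-workspace-id>
```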
Azure Managed Grafana Brings Grafana 11 and More

We're thrilled to announce the public preview of Grafana 11 and several feature enhancements in Azure Managed Grafana, based on your feedback. We continue to evolve our service to deliver what matters most to our customers.

Grafana 11

This annual major update to Grafana includes new functionality and improvements across dashboards, panels, queries, and alerts. The current preview in Managed Grafana offers Grafana v11.2. It includes the following key features:

- Explore Metrics
- Scenes-powered dashboards
- Subfolders
- Numerous improvements to canvas visualization and alerting

For more information on Grafana 11, please refer to What's new in Grafana v11.0, v11.1, and v11.2, and consider how the breaking changes may impact your specific use cases.

You'll need to create a new Managed Grafana instance to use the Grafana 11 preview (see the CLI sketch at the end of this article); upgrading directly from Grafana 10 isn't supported yet. You can copy dashboards over from your current Managed Grafana instance by following the steps in Migrate to Azure Managed Grafana. Please note that not all Grafana 11 features are available in Managed Grafana at present; where applicable, more features will be added over time.

Azure Monitor Updates for Grafana 11

Improved Azure Monitor Logs visualizations

This update extends Azure Monitor Logs visualizations to support Basic Logs, enabling you to view Azure Monitor Logs tables configured with the lower-cost Basic Logs tier in Explore and in dashboard panels. Additionally, Azure Monitor Logs details can now be viewed in Grafana Explore and Logs panels. You can filter query results by column values, run ad-hoc statistics, and choose which columns to display using simple point-and-click interaction, without needing to modify the query text. Explore views also include options to view JSON data in dynamic columns. Azure Kubernetes Service users can leverage these views in a new Container Log dashboard.

Prometheus Exemplars support for Azure Monitor Application Insights traces

You can now drill down from Prometheus exemplars to Application Insights traces in Grafana. Using exemplars in your troubleshooting workflow improves triage and analysis response times by letting you navigate from metrics to sample traces related to errors and exceptions, and easily compare the performance of transactions. To take advantage of this capability, the application needs to be instrumented to emit Prometheus metrics with exemplars and traces to Azure Monitor Application Insights. Sign up for the private preview of exemplars support in your Azure Monitor workspace.

User-Assigned Managed Identity

Since its inception, Managed Grafana has set up a system-assigned managed identity for a new Grafana workspace by default. You can use this managed identity as the security principal to access backend data sources connected to your workspace. While it's convenient to use, a system-assigned managed identity isn't always suitable. Enterprise customers with stricter identity management policies typically create and manage all Entra ID identities themselves. Managed Grafana now allows these customers to use identities defined in their Entra ID tenants instead. With the user-assigned managed identity feature, you can select an existing Entra ID identity to be used for authentication and authorization with your data sources. Please note that you can choose only one type of managed identity for each workspace; you can't enable both system-assigned and user-assigned managed identities simultaneously.
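Because Grafana 11 requires a new workspace, creating one can be scripted. The sketch below assumes the Azure CLI amg extension exposes a major-version parameter for this; the flag name and accepted values are assumptions on my part, so check az grafana create --help for your extension version or create the workspace in the portal instead.

```bash
# Create a new Azure Managed Grafana workspace on the Grafana 11 preview.
# --grafana-major-version is assumed here; verify it exists in your
# 'amg' CLI extension before relying on it.
az grafana create \
  --resource-group my-rg \
  --name my-grafana-11 \
  --grafana-major-version 11
```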
Grafana Settings

Grafana server settings let you customize specific server behaviors. Managed Grafana configures and manages these settings automatically, so you don't have to deal with them. For some settings, however, usage varies from user to user, and Managed Grafana now gives you the option to change their default values. The currently supported settings are:

- viewers_can_edit – determines whether users with the Grafana Viewer role can edit dashboards
- external_enabled – controls the public sharing of snapshots

Grafana Migration Tool

If you have a self-hosted Grafana server, on-premises or in the cloud, that you'd like to migrate to Managed Grafana, you can perform this operation with one command in the Azure CLI. The new az grafana migrate command automates the process of copying your existing dashboards from any Grafana server to your Managed Grafana workspace (a sketch of the command follows below). It supports several options that control how the content migration is conducted, as well as a dry-run option that lets you preview the migration results before committing to the operation.

Let Us Know How We're Doing

If you're a current user of Managed Grafana, we'd love to hear from you. Please take a moment to fill out this online survey. It will help us further improve our service to better serve you. Thank you!
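As a rough illustration of the migration workflow: the endpoint, token, and resource names below are placeholders, and the exact option names may differ by CLI extension version, so confirm them with az grafana migrate --help.

```bash
# Dry run: show what would be copied from a self-hosted Grafana server
# into the Managed Grafana workspace, without changing anything.
az grafana migrate \
  --resource-group my-rg \
  --name my-managed-grafana \
  --src-endpoint https://my-selfhosted-grafana.example.com \
  --src-token-or-key <service-account-token> \
  --dry-run

# Re-run without --dry-run (optionally with --overwrite) to perform the migration.
```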
Operator/CRD support with Azure Monitor managed service for Prometheus is now Generally Available

We are excited to announce that custom resource definition (CRD) support with Azure Monitor managed service for Prometheus is now generally available. Azure Monitor managed service for Prometheus is a component of Azure Monitor Metrics that allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the Prometheus project from the Cloud Native Computing Foundation. This fully managed service lets you use the Prometheus query language (PromQL) to analyze and alert on the performance of monitored infrastructure and workloads.

What's new?

With this update, customers can customize scraping targets using custom resources (pod monitors and service monitors), similar to the OSS Prometheus Operator. Enabling the Managed Prometheus add-on in an AKS cluster deploys the PodMonitor and ServiceMonitor custom resource definitions, allowing you to create your own custom resources. If you are already using Prometheus ServiceMonitors and PodMonitors to collect metrics from your workloads, you can simply change the apiVersion in the ServiceMonitor/PodMonitor definitions to use them with Azure Managed Prometheus, as illustrated in the sketch at the end of this article.

Previously, customers who did not have access to the kube-system namespace were not able to customize metrics collection. With this update, customers can create custom resources to enable custom configuration of scrape jobs in any namespace. This is especially useful in multitenancy scenarios where customers run workloads in different namespaces.

Note: Support for custom resources (pod monitors and service monitors) is currently not available with Azure Monitor managed service for Prometheus for Azure Arc-enabled Kubernetes.

Here is how a leading public sector Banking, Financial Services and Insurance (BFSI) company in India has used ServiceMonitor and PodMonitor custom resources to enable monitoring of GPU metrics with Azure Managed Prometheus, the DCGM Exporter, and Azure Managed Grafana:

"Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, we can focus on extracting insights from your metrics and logs rather than managing the underlying infrastructure. The integration of essential GPU metrics—such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies—into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation."
- A leading public sector BFSI company in India

Get started today!

To use CRD support with Azure Managed Prometheus, enable the Managed Prometheus add-on on your AKS cluster. This automatically deploys the custom resource definitions (CRDs) for service and pod monitors. To add Prometheus exporters to collect metrics from third-party workloads or other applications, and to see a list of workloads with curated configurations and instructions, see Integrate common workloads with Azure Managed Prometheus - Azure Monitor | Microsoft Learn. For more details, refer to this article or our documentation. We would love to hear from you - please share your feedback and suggestions in Azure Monitor · Community.
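As a minimal illustration of the apiVersion change described above (the app name, namespace, labels, and port are placeholders for your own workload, not values from the article):

```bash
# Confirm the Managed Prometheus CRDs exist on the cluster.
kubectl get crd podmonitors.azmonitoring.coreos.com servicemonitors.azmonitoring.coreos.com

# Apply a minimal ServiceMonitor in the workload's own namespace. Note the
# azmonitoring.coreos.com/v1 apiVersion instead of monitoring.coreos.com/v1.
kubectl apply -f - <<'EOF'
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-servicemonitor
  namespace: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      interval: 30s
EOF
```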
Monitoring GPU Metrics in AKS with Azure Managed Prometheus, DCGM Exporter and Managed Grafana

Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, we can focus on extracting insights from metrics and logs rather than managing the underlying infrastructure. The integration of essential GPU metrics—such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies—into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation.

Azure Managed Prometheus recently announced general availability of Operator and CRD support, which enables customers to customize metrics collection and add scraping of metrics from workloads and applications using ServiceMonitors and PodMonitors, similar to the OSS Prometheus Operator. This blog demonstrates how we leveraged the CRD/Operator support in Azure Managed Prometheus, together with the NVIDIA DCGM Exporter and Grafana, to enable GPU monitoring.

GPU monitoring

As the use of GPUs has skyrocketed for deploying large language models (LLMs) for both inference and fine-tuning, monitoring these resources becomes critical to ensure optimal performance and utilization. Prometheus, an open-source monitoring and alerting toolkit, coupled with Grafana, a powerful dashboarding and visualization tool, provides an excellent solution for collecting, visualizing, and acting on these metrics. Essential metrics such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies serve as fundamental indicators of GPU consumption, offering invaluable insights into the performance and efficiency of graphics processing units, thereby helping reduce COGS and improve operations.

Using NVIDIA's DCGM Exporter with Azure Managed Prometheus

The DCGM Exporter is a tool developed by NVIDIA to collect and export GPU metrics. It runs as a pod on Kubernetes clusters and gathers various metrics from NVIDIA GPUs, such as utilization, memory usage, temperature, and power consumption. These metrics are crucial for monitoring and managing the performance of GPUs. You can integrate this exporter with Azure Managed Prometheus. The sections below describe the steps and changes needed to deploy the DCGM Exporter successfully.

Prerequisites

Before jumping straight to the installation, ensure your AKS cluster meets the following requirements:

- GPU node pool: add a node pool with a VM SKU that includes GPU support.
- GPU driver: ensure the NVIDIA Kubernetes device plugin is running as a DaemonSet on your GPU nodes.
- Enable Azure Managed Prometheus and Azure Managed Grafana on your AKS cluster.

Refactoring the NVIDIA DCGM Exporter for AKS: Code Changes and Deployment Guide

Updating API versions and configurations for seamless integration

As per the official documentation, the best way to get started with the DCGM Exporter is to install it using Helm. When installing on AKS with Managed Prometheus, you might encounter the following error:

```
Error: Installation Failed: Unable to build Kubernetes objects from release manifest: resource mapping not found for name: "dcgm-exporter-xxxxx" namespace: "default" from "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1". Ensure CRDs are installed first.
```
To resolve this, make the following changes to the DCGM Exporter code:

1. Clone the project: go to the GitHub repository of the DCGM Exporter and clone the project or download it to your local machine.

2. Navigate to the template folder: the code used to deploy the DCGM Exporter is located in the template folder within the deployment folder.

3. Modify the service-monitor.yaml file: update the apiVersion key in this file from monitoring.coreos.com/v1 to azmonitoring.coreos.com/v1. This change allows the DCGM Exporter to use the Azure Managed Prometheus CRD.

```yaml
apiVersion: azmonitoring.coreos.com/v1
```

4. Handle node selectors and tolerations: GPU node pools often carry tolerations and node selector tags. Modify the values.yaml file in the deployment folder to handle these configurations:

```yaml
nodeSelector:
  accelerator: nvidia

tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```

Helm: Packaging, Pushing, and Installation on Azure Container Registry

We followed the MS Learn documentation for pushing and installing the package through Helm on Azure Container Registry. For a comprehensive understanding, you can refer to that documentation. Here are the quick steps for installation. After making all the necessary changes in the deployment folder of the source code, stay in that directory to package the code, and log in to your registry to proceed.

1. Package the Helm chart and log in to your container registry:

```bash
helm package .
helm registry login <container-registry-url> --username $USER_NAME --password $PASSWORD
```

2. Push the Helm chart to the registry:

```bash
helm push dcgm-exporter-3.4.2.tgz oci://<container-registry-url>/helm
```

3. Verify in the Azure portal that the package has been pushed to the registry.

4. Install the chart and verify the installation:

```bash
helm install dcgm-nvidia oci://<container-registry-url>/helm/dcgm-exporter -n gpu-resources

# Check the installation on your AKS cluster by running:
helm list -n gpu-resources

# Verify the DCGM Exporter:
kubectl get po -n gpu-resources
kubectl get ds -n gpu-resources
```

You can now confirm that the DCGM Exporter is running as a DaemonSet on the GPU nodes.

Exporting GPU Metrics and Configuring the Azure Managed Grafana Dashboard

Once the DCGM Exporter DaemonSet is running across all GPU node pools, you need to export the GPU metrics generated by this workload to Azure Managed Prometheus. This is accomplished by deploying a PodMonitor resource. Follow these steps:

1. Deploy the PodMonitor by applying the following YAML configuration:

```yaml
apiVersion: azmonitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: nvidia-dcgm-exporter
  labels:
    app.kubernetes.io/name: nvidia-dcgm-exporter
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nvidia-dcgm-exporter
  podMetricsEndpoints:
    - port: metrics
      interval: 30s
  podTargetLabels:
```

2. Check that the PodMonitor is deployed and running by executing:

```bash
kubectl get podmonitor -n <namespace>
```

3. Verify the metrics export: ensure that the metrics are being exported to Azure Managed Prometheus by navigating to the Metrics page of your Azure Monitor workspace in the portal.

Create the DCGM Dashboard on Azure Managed Grafana

The GitHub repository for the DCGM Exporter includes a JSON file for the Grafana dashboard. Follow the MS Learn documentation to import this JSON into your Managed Grafana instance.
After importing the JSON, the dashboard displaying GPU metrics will be visible on Grafana.
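If the dashboard comes up empty, one way to sanity-check that the exporter itself is emitting metrics is to hit its endpoint directly. This is a rough sketch: the namespace and label follow the Helm install and PodMonitor above, the pod name is looked up rather than assumed, and port 9400 is the dcgm-exporter default.

```bash
# Grab one dcgm-exporter pod (label matches the PodMonitor selector above).
POD=$(kubectl -n gpu-resources get pod \
  -l app.kubernetes.io/name=nvidia-dcgm-exporter -o name | head -n1)

# Forward the exporter's default port and scrape it locally.
kubectl -n gpu-resources port-forward "$POD" 9400:9400 &
sleep 2

# Look for a couple of well-known DCGM metrics (GPU utilization, framebuffer used).
curl -s localhost:9400/metrics | grep -E 'DCGM_FI_DEV_GPU_UTIL|DCGM_FI_DEV_FB_USED'
```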
Introducing Query editor: Empowering Users with PromQL in Azure Monitor Metrics!

We're thrilled to announce the public preview launch of the Query editor in Azure Monitor Metrics, a feature that allows customers to write and run PromQL queries directly within their Azure Monitor workspace (AMW). This long-awaited addition comes as a direct response to growing demand from our customers, and we're excited to finally deliver this capability to you.

What's new?

Unlocking the power of PromQL: Prometheus Query Language (PromQL) has emerged as a standard in the realm of monitoring and observability, offering users flexibility and expressiveness in querying metric data. With the Query editor in Azure Monitor Metrics, users can now harness the full potential of PromQL to derive actionable insights for their resources. Previously, users could not query the Prometheus metrics that their AKS or Arc-enabled clusters sent to an Azure Monitor workspace via Azure Managed Prometheus from within the Azure portal. With this new capability, users can query Prometheus metrics for their AKS resources or Arc-enabled clusters directly in the Query editor in the portal.

Seamless querying experience: With the Query editor, users can compose and execute PromQL queries directly within the Azure Monitor workspace they are emitting metrics to. This streamlines the monitoring workflow, letting users stay focused and productive without the context switching previously required to query different types of metric data.

Benefits of the Query editor with PromQL:

- Rich query language: PromQL offers a rich set of functions and operators for querying metric data, allowing users to perform complex aggregations, transformations, and calculations with ease.
- Familiarity and interoperability: for users familiar with Prometheus-based monitoring solutions, the Query editor provides a familiar environment for querying Azure metrics, facilitating a smoother transition and interoperability between platforms.

How it works?

Using the Query editor is simple. Just navigate to your Azure Monitor workspace (AMW), select the Azure Monitor Metrics Query editor, and start writing your PromQL queries.

Get Started Today

The public preview of the Query editor in Azure Monitor Metrics is now available, and we invite you to try it out and share your feedback with us. Your input is invaluable as we continue to refine and improve this feature to better serve your monitoring and analytics needs. Please note that the Query editor currently supports only querying metrics stored in an Azure Monitor workspace; support for platform metrics is planned for the future.

https://aka.ms/queryEditorPreview
https://learn.microsoft.com/en-Us/azure/azure-monitor/essentials/azure-monitor-workspace-overview?tabs=azure-portal

Stay tuned for more updates and enhancements as we work towards delivering even more value to our valued Azure customers.