Azure Monitor
Azure Monitor Network Security Perimeter - Features available in 56 Public Cloud Regions
What is Network Security Perimeter?
Network Security Perimeter (NSP) is a feature that strengthens the security of Azure PaaS resources by creating a logical network isolation boundary. Azure PaaS resources can communicate within this explicitly trusted boundary, while external access is limited by network controls defined across all Private Link resources within the perimeter.

Azure Monitor - Network Security Perimeter - Public Cloud Regions Update
We are pleased to announce the expansion of Network Security Perimeter support in Azure Monitor services from 6 to 56 Azure regions. This milestone lets us serve a much larger customer base and underscores our commitment to meeting the security needs of our global customers. NSP is currently in public preview for Azure Global customers, and the broader region rollout helps customers meet the network isolation and monitoring requirements of the Secure Future Initiative (SFI) security waves.

Key Benefits to Azure Customers
The Network Security Perimeter (NSP) provides several key benefits for securing and managing Azure PaaS resources:
- Enhances security by allowing communication within a trusted boundary and limiting external access based on network controls.
- Provides centralized management, enabling administrators to define network boundaries and configure access controls through a uniform API in the Azure core network.
- Offers granular access control through NSP rules based on IP addresses or subscriptions.
- Includes logging and monitoring capabilities for visibility into traffic patterns, aiding auditing, compliance, and threat identification.
- Integrates with other Azure services and supports complex network setups by associating multiple Private Link resources with a single perimeter.

These characteristics make NSP an excellent tool for enhancing network security and ensuring data integrity based on your network isolation configuration. For detailed guidance on configuring Azure Monitor with a Network Security Perimeter, see Configure Azure Monitor with Network Security Perimeter (Preview).

Reference documentation:
- Network Security Perimeter - Concepts
- Transition to a network security perimeter in Azure

Have a question or feedback? Reach us at AzMon-NSP-Scrum@microsoft.com.
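To make the perimeter model described above more concrete, here is a minimal sketch using the preview az network perimeter command group, which ships as an Azure CLI extension. The command and parameter names below are assumptions drawn from the general NSP preview tooling and may differ in your environment; the portal flow in the linked configuration article remains the documented path.

```bash
RG=my-rg
NSP=my-network-perimeter

# Create the perimeter that defines the trusted boundary.
az network perimeter create --name "$NSP" --resource-group "$RG" --location eastus

# Create a profile to hold inbound access rules for resources in the perimeter.
az network perimeter profile create --name default-profile \
  --perimeter-name "$NSP" --resource-group "$RG"

# Example of granular access control: allow inbound traffic from a public IP range.
az network perimeter profile access-rule create --name allow-office-range \
  --profile-name default-profile --perimeter-name "$NSP" --resource-group "$RG" \
  --address-prefixes "203.0.113.0/24"
```

Associating an Azure Monitor resource (for example, a Log Analytics workspace or data collection endpoint) with the perimeter and profile is expected to use az network perimeter association create, or the portal steps in the article linked above.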
Ingestion of Managed Prometheus metrics from a private AKS cluster using private link
This article provides end-to-end instructions for configuring Managed Prometheus data ingestion from your private Azure Kubernetes Service (AKS) cluster into an Azure Monitor workspace. Azure Private Link lets you bring Azure platform as a service (PaaS) resources into your virtual network by using private endpoints. An Azure Monitor Private Link Scope (AMPLS) connects a private endpoint to a set of Azure Monitor resources, defining the boundaries of your monitoring network. By using private endpoints for Managed Prometheus and your Azure Monitor workspace, clients on a virtual network (VNet) can securely ingest Prometheus metrics over a private link.

Conceptual overview
A private endpoint is a special network interface for an Azure service in your virtual network (VNet). When you create a private endpoint for your Azure Monitor workspace, it provides secure connectivity between clients on your VNet and your workspace. For more details, see Private Endpoint. Azure Private Link lets you securely link a PaaS resource to your virtual network through private endpoints. Azure Monitor uses a single private link connection, the Azure Monitor Private Link Scope (AMPLS), which lets each client in the virtual network connect to all Azure Monitor resources (such as Log Analytics workspaces and Azure Monitor workspaces) instead of creating multiple private links. For more details, see Azure Monitor Private Link Scope (AMPLS).

To set up ingestion of Managed Prometheus metrics from a virtual network into an Azure Monitor workspace using private endpoints, follow these high-level steps:
1. Create an Azure Monitor Private Link Scope (AMPLS) and connect it to the Data Collection Endpoint of the Azure Monitor workspace.
2. Connect the AMPLS to a private endpoint set up in the virtual network of your private AKS cluster.

Prerequisites
A private AKS cluster with Managed Prometheus enabled. Enabling Managed Prometheus also sets up an Azure Monitor workspace. For more information, see Enable Managed Prometheus in AKS.

1. Create an AMPLS for the Azure Monitor workspace
Metrics collected with Azure Managed Prometheus are ingested into and stored in an Azure Monitor workspace, so you must make the workspace accessible over a private link by creating an Azure Monitor Private Link Scope (AMPLS):
- In the Azure portal, search for "Azure Monitor Private Link Scopes", and then select Create.
- Enter the resource group and name, and select Private Only for Ingestion Access Mode.
- Select Review + Create to create the AMPLS.
For more details on setting up an AMPLS, see Configure private link for Azure Monitor.

2. Connect the AMPLS to the Data Collection Endpoint of the Azure Monitor workspace
Private links for Managed Prometheus data ingestion are configured on the Data Collection Endpoints (DCEs) of the Azure Monitor workspace that stores the data. To identify the DCEs associated with your workspace, select Data Collection Endpoints from your Azure Monitor workspace in the Azure portal.
- In the Azure portal, find the Azure Monitor workspace created when you enabled Managed Prometheus for your private AKS cluster, and note the Data Collection Endpoint name.
- Open the AMPLS that you created in the previous step. On the AMPLS overview page, select Azure Monitor Resources, select Add, and connect the DCE that you noted above.
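If you prefer to script the two portal steps above, a minimal Azure CLI sketch follows. The names are placeholders, and the ingestion access mode (Private Only) is assumed to be set in the portal afterwards; check az monitor private-link-scope --help for the options available in your CLI version.

```bash
RG=my-rg
AMPLS=my-ampls
# Resource ID of the Data Collection Endpoint used by the Azure Monitor workspace.
DCE_ID="/subscriptions/<sub-id>/resourceGroups/$RG/providers/Microsoft.Insights/dataCollectionEndpoints/<dce-name>"

# Step 1: create the Azure Monitor Private Link Scope.
az monitor private-link-scope create --name "$AMPLS" --resource-group "$RG"

# Step 2: connect the workspace's DCE to the AMPLS as a scoped resource.
az monitor private-link-scope scoped-resource create \
  --name dce-connection \
  --resource-group "$RG" \
  --scope-name "$AMPLS" \
  --linked-resource "$DCE_ID"
```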
2a. Configure DCEs
Note: This step is required only if your AKS cluster is not in the same region as your Azure Monitor workspace. If the cluster and workspace are in the same region, skip this step and move to step 3.
- Create a Data Collection Endpoint in the same region as the AKS cluster.
- Go to your Azure Monitor workspace and select the Data collection rule (DCR) on the Overview page. This DCR has the same name as your Azure Monitor workspace.
- From the DCR overview page, select Resources -> + Add, and then select the AKS cluster.
- Once the AKS cluster is added (you might need to refresh the page), select the cluster and then Edit Data Collection Endpoint. On the blade that opens, select the Data Collection Endpoint that you created in the first step of this section; it should be in the same region as the AKS cluster.

3. Connect the AMPLS to a private endpoint for the AKS cluster
A private endpoint is a special network interface for an Azure service in your virtual network (VNet). You will now create a private endpoint in the VNet of your private AKS cluster and connect it to the AMPLS for secure ingestion of metrics.
- In the Azure portal, open the AMPLS that you created in the previous steps. On the AMPLS overview page, select Configure -> Private Endpoint connections, and then select + Private Endpoint.
- Select the resource group, enter a name for the private endpoint, and select Next.
- In the Resource section, select Microsoft.Monitor/accounts as the resource type, your Azure Monitor workspace as the resource, and prometheusMetrics as the target sub-resource. Select Next.
- In the Virtual Network section, select the virtual network of your AKS cluster. You can find it in the portal under AKS overview -> Settings -> Networking -> Virtual network integration.

4. Verify that metrics are ingested into the Azure Monitor workspace
To verify that Prometheus metrics from your private AKS cluster are ingested into the Azure Monitor workspace:
- In the Azure portal, open the Azure Monitor workspace and go to Monitoring -> Metrics.
- In Metrics Explorer, query for metrics and confirm that results are returned.

Next steps
See Use private endpoints for Managed Prometheus and Azure Monitor workspace for details on configuring private link to query data from your Azure Monitor workspace using workbooks.
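For teams that script their networking, the private endpoint from step 3 can also be created with the Azure CLI. This is a sketch under assumptions: the VNet, subnet, and resource names are placeholders, and azuremonitor is the group ID typically used for AMPLS private endpoints (verify it with az network private-link-resource list against your scope).

```bash
RG=my-rg
AMPLS_ID="/subscriptions/<sub-id>/resourceGroups/$RG/providers/microsoft.insights/privateLinkScopes/my-ampls"

# Create a private endpoint in the AKS cluster's VNet and connect it to the AMPLS.
az network private-endpoint create \
  --name ampls-private-endpoint \
  --resource-group "$RG" \
  --vnet-name aks-vnet \
  --subnet private-endpoint-subnet \
  --private-connection-resource-id "$AMPLS_ID" \
  --group-id azuremonitor \
  --connection-name ampls-connection
```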
Public Preview: The New AKS Monitoring Experience
We're excited to announce the public preview of our enhanced Monitoring experience for Azure Kubernetes Service (AKS). This redesign of the existing Insights experience brings comprehensive monitoring capabilities into a single, streamlined view, addressing some of the most common challenges users face when managing their AKS clusters. The new Monitoring experience provides both basic (free) insights and detailed insights (with Prometheus metrics and logging enabled), offering a unified, single-pane-of-glass experience. The basic experience is available to all AKS users with no configuration required.

A significant benefit of the new experience is in diagnosing pod deployment failures. In the past, identifying pending or failed pods could be a cumbersome process. With the new KPI Card for Pod Status, you can quickly pinpoint and address these issues before they escalate, ensuring smoother deployments and reduced downtime.

Another key scenario where the enhanced view shines is investigating node resource issues. Understanding node readiness and capacity is crucial for efficient cluster management. The Node Readiness Status card, along with detailed CPU and memory usage metrics, provides clear insight into whether your nodes are ready to host pods. This helps prevent resource bottlenecks and optimizes the overall performance of your cluster.

Ensuring cluster health during a scaling operation has never been easier. The new Summary Card for Events helps you monitor Kubernetes warning events and pending pod states, making it simple to track and respond to spikes so your cluster scales smoothly, without unexpected hitches that could disrupt your services.

Additionally, troubleshooting latency and connectivity issues in AKS is now more straightforward. With enhanced insights into node saturation metrics, including VMSS OS disk bandwidth and IOPS consumption, you can quickly identify and resolve issues causing latency. Detailed etcd monitoring and load balancer metrics, such as % SNAT port usage, provide the data you need to maintain optimal cluster performance and keep your applications running smoothly.

The following comparison shows what data comes out of the box, for free, for all AKS users. When you upgrade, you get the same data collected in the newer Prometheus format, plus richer metrics and logs for your core troubleshooting scenarios.

Basic tier metrics                        | Additional metrics in upgraded experience
------------------------------------------|------------------------------------------
Alert summary card                        | Historical Kubernetes events (30 days)
Events summary card                       | Warning events by reason
Pod status KPI card                       | Namespace CPU and memory %
Node status KPI card                      | Container logs by volume
Node CPU and memory %                     | Top five controllers by logs volume
VMSS OS disk bandwidth consumed % (max)   | Packets dropped I/O
VMSS OS disk IOPS consumed % (max)        | Load balancer SNAT port usage

We're committed to providing the tools you need to manage and optimize your AKS clusters effectively. Explore the new Monitoring experience in the Azure portal today and experience the future of AKS monitoring!
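The upgraded experience relies on Managed Prometheus metrics and Container Insights logging being enabled on the cluster. As a hedged sketch (cluster and resource group names are placeholders; default workspaces are used when none are specified), enabling both from the Azure CLI might look like this:

```bash
RG=my-rg
CLUSTER=my-aks-cluster

# Enable Managed Prometheus metrics collection for the cluster
# (metrics go to an Azure Monitor workspace).
az aks update --name "$CLUSTER" --resource-group "$RG" \
  --enable-azure-monitor-metrics

# Enable Container Insights logging for the cluster
# (logs go to a Log Analytics workspace).
az aks enable-addons --name "$CLUSTER" --resource-group "$RG" \
  --addons monitoring
```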
Log Analytics Simple Mode is Now Generally Available
Over the past few months, we gradually rolled out the new Log Analytics experience to our users. The feedback has been positive, and telemetry shows that users are more successful at working with their data. Today, we're excited to announce that the new Log Analytics experience, including Simple Mode and other improvements, is fully available and enabled by default.

How simple is it? Here are two quick examples:

Investigate Workspace Usage:
- Double-click the Usage table to load the latest data.
- Add an Aggregate operation to sum the Quantity column by DataType.
- Add a Sort operation by Quantity, and instantly see the results organized.
- At the top right, click the three dots and create a New Alert Rule.

Troubleshoot Kubernetes Pods:
- Select the KubePodInventory table and click Run to view the latest data.
- Filter the PodStatus column to Pending.
- Add an Aggregate operation to count the pending pods by Name.
- Click Share and export the results to CSV.

That's it - just a few clicks, and you've gained meaningful insights!

Seamless Transition for Advanced Users
If you're comfortable with Kusto Query Language (KQL), you can switch to KQL Mode, edit the auto-generated query, and dive deeper. When you're done, you can switch back to Simple Mode and continue exploring with the updated results. You can also set your preferred default mode through the Settings menu for a customized experience.

Improved Usability
The interface includes organized menus for key actions like Save, Share, and Export, and a collapsible pane for quick access to tables, saved queries, examples, and more. To dive deeper into Simple Mode and other recent updates, visit our official documentation.

Your Feedback Matters
We're committed to continuously improving Log Analytics to meet our users' needs. Your input is invaluable in shaping its capabilities and user experience. For questions or feedback, reach out to Noyablanga@microsoft.com or use the Give Feedback form directly in Logs.
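The same two investigations can also be run outside the portal. As a hedged sketch, the KQL below approximates what Simple Mode generates for those steps (the workspace GUID is a placeholder), executed here through the Azure CLI:

```bash
WORKSPACE_ID="<log-analytics-workspace-guid>"   # placeholder

# Workspace usage: total ingested quantity per data type, sorted descending.
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query "Usage | summarize TotalQuantity = sum(Quantity) by DataType | sort by TotalQuantity desc" \
  --output table

# Kubernetes pods: count of pending pods by name.
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query "KubePodInventory | where PodStatus == 'Pending' | summarize PendingCount = count() by Name" \
  --output table
```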
Azure Managed Grafana Brings Grafana 11 and More
We're thrilled to announce the public preview of Grafana 11 and several feature enhancements in Azure Managed Grafana based on your feedback. We continue to evolve our service to deliver what matters most to our customers.

Grafana 11
This annual major update to Grafana includes new functionality and improvements across dashboards, panels, queries, and alerts. The current preview in Managed Grafana offers Grafana v11.2 and includes the following key features:
- Explore Metrics
- Scenes-powered dashboards
- Subfolders
- Numerous improvements to canvas visualization and alerting
For more information on Grafana 11, please refer to What's new in Grafana v11.0, v11.1, and v11.2, and consider how the breaking changes may affect your specific use cases. You'll need to create a new Managed Grafana instance to use the Grafana 11 preview; upgrading directly from Grafana 10 isn't supported yet. You can copy dashboards over from your current Managed Grafana instance by following the steps in Migrate to Azure Managed Grafana. Note that not all Grafana 11 features are available in Managed Grafana at present; where applicable, more features will be added over time.

Azure Monitor Updates for Grafana 11
Improved Azure Monitor Logs visualizations
This update extends the Azure Monitor Logs visualizations to support Basic Logs, so you can view Azure Monitor Logs tables configured with the lower-cost Basic Logs tier in Explore and in dashboard panels. Additionally, Azure Monitor Logs details can now be viewed in Grafana Explore and Logs panels. You can filter query results by column values, run ad-hoc statistics, and choose which columns to display using simple point-and-click interaction, without modifying the query text. Explore views also include options to view JSON data in dynamic columns. Azure Kubernetes Service users can leverage these views in a new Container Log dashboard.

Prometheus exemplars support for Azure Monitor Application Insights traces
You can now drill down from Prometheus exemplars to Application Insights traces in Grafana. Using exemplars in your troubleshooting workflow improves triage and analysis response times by letting you navigate from metrics to sample traces related to errors and exceptions, and easily compare the performance of transactions. To take advantage of this capability, the application needs to be instrumented to emit Prometheus metrics with exemplars and traces to Azure Monitor Application Insights. Sign up for the private preview of exemplars support in your Azure Monitor workspace.

User-Assigned Managed Identity
Since its inception, Managed Grafana has set up a system-assigned managed identity for each new Grafana workspace by default. You can use this managed identity as the security principal to access the backend data sources connected to your workspace. While convenient, a system-assigned managed identity isn't always suitable: enterprise customers with stricter identity management policies typically create and manage all Entra ID identities themselves. Managed Grafana now allows these customers to use identities defined in their Entra ID tenants instead. With the user-assigned managed identity feature, you can select an existing Entra ID identity to be used for authentication and authorization with your data sources. Note that you can choose only one type of managed identity for each workspace; you can't enable both system-assigned and user-assigned managed identities simultaneously.
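Because the Grafana 11 preview requires a new workspace rather than an in-place upgrade, a minimal Azure CLI sketch for creating one follows. The --grafana-major-version parameter is an assumption based on how earlier major versions were selected; confirm the flag and accepted values with az grafana create --help before relying on it.

```bash
RG=my-rg

# Create a new Managed Grafana workspace targeting the Grafana 11 preview.
# (--grafana-major-version is an assumption; verify it in your amg extension.)
az grafana create \
  --name my-grafana11-preview \
  --resource-group "$RG" \
  --grafana-major-version 11
```

Dashboards from an existing workspace can then be copied over with the migration tool described below.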
Grafana Settings
Grafana server settings let you customize specific server behaviors. Managed Grafana configures and manages these settings automatically, so you normally don't have to deal with them, but for a few settings whose usage varies from user to user, Managed Grafana now gives you the option to change the default values. The currently supported settings are:
- viewers_can_edit - determines whether users with the Grafana Viewer role can edit dashboards
- external_enabled - controls the public sharing of snapshots

Grafana Migration Tool
If you have a self-hosted Grafana server, on-premises or in the cloud, that you'd like to migrate to Managed Grafana, you can perform the operation with one command in the Azure CLI. The new az grafana migrate command automates copying your existing dashboards from any Grafana server to your Managed Grafana workspace. It supports several options that control how the content migration is conducted, as well as a dry-run option that lets you preview the migration results before committing to the operation (see the sketch after this section).

Let Us Know How We're Doing
If you're a current user of Managed Grafana, we'd love to hear from you. Please take a moment to fill out this online survey. It will help us further improve our service to better serve you. Thank you!
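Here is a hedged sketch of the migration tool mentioned above, run first as a dry run. The command name and the dry-run option come from the announcement; the source endpoint and token parameter names are assumptions, so check az grafana migrate --help for the exact flags.

```bash
RG=my-rg
TARGET=my-managed-grafana

# Dry run: show what would be copied from the self-hosted server without
# making changes. (The --src-endpoint and --src-token-or-key names are
# assumptions; verify them with `az grafana migrate --help`.)
az grafana migrate \
  --name "$TARGET" \
  --resource-group "$RG" \
  --src-endpoint "https://grafana.example.com" \
  --src-token-or-key "<source-grafana-api-token>" \
  --dry-run

# Re-run without --dry-run to perform the actual migration.
```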
Accelerate your observability journey with Azure Monitor pipeline (preview)
In the ever-evolving landscape of digital infrastructure, transparency into resource and application performance is imperative. Success hinges on visibility, whether you're operating on Azure, on-premises, or at the edge. As organizations scale their infrastructure and applications, the volume of observability data naturally increases. This surge can complicate networking, data storage, and ingestion, often forcing a trade-off between cost management and observability. The complexity doesn't end there: the very tools designed to ingest, process, and route this data can be both costly and complex, adding layers of operational challenges. Moreover, edge infrastructure is deployed near IoT devices for optimal data processing, high availability, and reduced latency, which adds its own challenges for collecting telemetry from such constrained environments. Recognizing these challenges, our team has focused on providing a robust, highly scalable, and secure data ingestion solution through Azure Monitor. We are thrilled to announce the preview of the Azure Monitor pipeline at edge.

What is Azure Monitor pipeline?
Azure Monitor pipeline, similar to an ETL (extract, transform, load) process, enhances traditional data collection methods. It streamlines data collection from various sources through a unified ingestion pipeline and uses a standardized configuration approach that is more efficient and scalable. This is particularly beneficial for cloud-based monitoring in Azure. We are now extending Azure Monitor pipeline capabilities from the cloud to the edge, enabling high-scale data ingestion with centralized configuration management.

What is Azure Monitor pipeline at edge?
Azure Monitor pipeline at edge is a solution designed for high-scale data ingestion and routing from edge environments to Azure Monitor for observability. It builds on the vendor-agnostic OpenTelemetry Collector, which enterprises worldwide use to manage high volumes of telemetry every month. With the Azure Monitor pipeline at edge, organizations can tap into the same highly scalable platform with standardized configuration and reliability. Whether dealing with petabytes of data or seeking a consistent observability experience across Azure, edge, and multi-cloud, this solution empowers organizations to reliably collect telemetry and drive operational excellence.

The Azure Monitor pipeline at edge is equipped with out-of-the-box capabilities to receive telemetry from a diverse range of resources and route it to Azure Monitor. Here are some key features:

High-scale data ingestion: Customers have various devices and resources at the edge emitting high volumes of data. With Azure Monitor pipeline at edge, you can seamlessly scale ingestion of that data into the cloud. The pipeline can be deployed on your on-premises Kubernetes cluster as an Arc-enabled Kubernetes cluster extension. This allows it to adapt to your data scaling needs by running multiple replica sets, and it gives you full control to define workflows and route high-volume data to Azure Monitor.

Observing resources in isolated environments: In the manufacturing sector, resources are often located in isolated network zones without direct cloud connectivity, posing challenges for telemetry collection.
With the Azure Monitor pipeline at edge, combined with Azure IoT Layered Network Management, you can establish a connection between Azure and Kubernetes clusters in isolated networks, deploy the Azure Monitor pipeline at edge, collect data from resources in segmented networks, and route it to Azure Monitor for comprehensive observability.

Reliable data ingestion and data loss prevention: Edge environments frequently encounter intermittent connectivity, which can lead to data loss and disrupt data continuity. The Azure Monitor pipeline at edge lets you cache logs during periods of intermittent connectivity; when connectivity is re-established, the data is synchronized with Azure Monitor, preventing data loss.

Getting started
It's easy to get started. You deploy the Azure Monitor pipeline on a single Arc-enabled Kubernetes cluster in your environment; once that is done, you configure your resources to emit telemetry to the Azure Monitor pipeline at edge, which ingests it into Azure Monitor for observability.
- Once you Arc-enable your on-premises Kubernetes cluster and the prerequisites are met, go to the Extensions section, select Azure Monitor pipeline extension (preview), and create the instance. Alternatively, from the search bar in the Azure portal, select Azure Monitor pipeline and then click Create.
- Enter the information for the pipeline instance. The Dataflow tab allows you to create and edit dataflows for the pipeline instance.
- Configure your resources to emit telemetry to the Azure Monitor pipeline. Learn more in our documentation.

Pricing
There is no additional cost to use Azure Monitor pipeline to send data to Azure Monitor. You are only charged for data ingestion at the current pricing.

FAQ
What telemetry can be collected using Azure Monitor pipeline?
Currently, in public preview, you can collect syslog and OTLP logs using Azure Monitor pipeline at edge. We will keep expanding the data collection capabilities based on your feedback and requirements.

How can I perform transformations on the telemetry that is collected?
You can certainly transform your telemetry. Since this is an extension of Azure Monitor pipeline, you can perform data collection transformations in the Azure Monitor pipeline in the cloud.

Is this another agent for data collection?
Azure Monitor pipeline at edge is engineered to function in environments where installing agents on resources is not feasible, whether due to technical limitations or warranty concerns. It gets telemetry from these resources and acts as a central forwarding component to ingest high-volume data.

I have 100 Linux servers in my on-premises environment. Do I need to deploy Azure Monitor pipeline at edge on all of them?
No. You deploy the Azure Monitor pipeline at edge on a single Arc-enabled Kubernetes cluster and configure it to ingest data into Azure Monitor. Once that is completed, you configure your Linux servers to emit telemetry to the Azure Monitor pipeline at edge instance.
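The sketch below illustrates the single-cluster deployment and syslog forwarding pattern from the FAQ above. It is illustrative only: the extension type, pipeline endpoint, and port are placeholders (assumptions) that come from the Azure Monitor pipeline documentation and your own instance, while az k8s-extension create and the rsyslog forwarding syntax are standard.

```bash
RG=my-rg
ARC_CLUSTER=my-arc-cluster

# Deploy the pipeline as a cluster extension on a single Arc-enabled
# Kubernetes cluster. <pipeline-extension-type> is a placeholder; use the
# extension type documented for Azure Monitor pipeline (preview).
az k8s-extension create \
  --name azure-monitor-pipeline \
  --cluster-name "$ARC_CLUSTER" \
  --resource-group "$RG" \
  --cluster-type connectedClusters \
  --extension-type "<pipeline-extension-type>"

# On each Linux server, forward syslog to the pipeline's syslog receiver.
# <pipeline-endpoint> and <port> are placeholders exposed by your instance.
echo '*.* @@<pipeline-endpoint>:<port>' | sudo tee /etc/rsyslog.d/95-azmon-pipeline.conf
sudo systemctl restart rsyslog
```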
Announcing the Public Preview of Azure Monitor – Network Security Perimeter Features
Azure Monitor services now support Network Security Perimeter (NSP) features, enabling Azure PaaS resources to communicate securely within a trusted boundary. The integration of NSP features in Azure Monitor services enhances security and monitoring capabilities across six Azure cloud regions: East US, East US 2, North Central US, South Central US, West US, and West US 2.