This article provides a comprehensive guide to using the .NET Profiler Trace feature in Microsoft Azure App Service. Training videos are linked at the bottom.
Summary
The article provides guidance on using the .NET Profiler Trace feature in Microsoft Azure App Service to diagnose performance issues in ASP.NET applications. It explains how to configure and collect the trace by accessing the Azure Portal, navigating to the Azure App Service, and selecting the "Collect .NET Profiler Trace" feature. Users can choose between "Collect and Analyze Data" and "Collect Data only" and must select the instance on which to perform the trace. The trace stops after 60 seconds but can be extended up to 15 minutes. After analysis, users can view the report online or download the trace file for local analysis, which includes information like slow requests and CPU stacks. The article also details how to analyze the trace using PerfView, a tool available on GitHub, to identify performance issues. Additionally, it provides a table outlining scenarios for using a .NET Profiler Trace or a memory dump based on factors like issue type and symptom code. The tool is particularly useful for diagnosing slow or hung ASP.NET applications and is available only in Standard or higher SKUs with the Always On setting enabled.
In this article
- How to configure and collect the .NET Profiler Trace
- How to download the .NET Profiler Trace
- How to analyze a .NET Profiler Trace
- When to use .NET Profiler tracing vs. a memory dump
The tool is exceptionally well suited for scenarios where an ASP.NET application is performing slower than expected or hangs. As shown in Figure 1, this feature is available only in the Standard or higher Stock Keeping Unit (SKU) tiers with Always On enabled. If you try to configure a .NET Profiler Trace without both of these configurations, the following messages are rendered.
Azure App Service Diagnose and solve problems blade in the Azure Portal error messages
Error – This tool is supported only on Standard, Premium, and Isolated Stock Keeping Unit (SKU) only with AlwaysOn setting enabled to TRUE.
Error – We determined that the web app is not "Always-On" enabled and diagnostic does not work reliably with Auto Heal. Turn on the Always-On setting by going to the Application Settings for the web app and then run these tools.
How to configure and collect the .NET Profiler Trace
To configure a .NET Profiler Trace, access the Azure Portal and navigate to the Azure App Service that is experiencing a performance issue. Select Diagnose and solve problems and then the Diagnostic Tools tile.
Azure App Service Diagnose and solve problems blade in the Azure Portal
Select the "Collect .NET Profiler Trace" feature on the Diagnostic Tools blade and the following blade is rendered. Notice that you can only select Collect and Analyze Data or Collect Data only. Choose the one you prefer but do consider having the feature perform the analysis. You can download the trace for offline analysis if necessary. Also notice that you need to **select the instance** on which you want to perform the trace. In the scenario, there is only one, so the selection is simple. However, if your app runs on multiple instances, either select them all or if you identify a specific instance which is behaving slowly, select only that one. You realize the best results if you can isolate a single instance enough so that the request you sent is the only one received on that instance. However, in a scenario where the request or instance is not known, the trace adds value and insights. Adding a thread report provides list of all the threads in the process is also collected at the end of the profiler trace. The thread report is useful especially if you are troubleshooting hung processes, deadlocks, or requests taking more than 60 seconds. This pauses your process for a few seconds until the thread dump is generated. CAUTION: a thread report is NOT recommended if you are experiencing High CPU in your application, you may experience issues during trace analysis if CPU consumption is high.
Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace blade in the Azure Portal
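To picture what a thread report can surface, consider a request path that serializes all callers behind a single lock. The following is an illustrative sketch only; the class, method, and delay are invented for the example and are not part of the diagnostic tooling:

```csharp
using System.Threading;

public static class ReportCache
{
    private static readonly object _sync = new object();

    // Hypothetical cache refresh: the first request in holds the lock for a
    // long time, and every other request queues up behind it. A thread report
    // taken during the hang shows those waiting threads blocked on the lock
    // (Monitor.Enter) at this line.
    public static string GetReport()
    {
        lock (_sync)
        {
            Thread.Sleep(30000); // simulates a slow, serialized refresh
            return "cached-report";
        }
    }
}
```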
There are a few points called out in the previous image that are important to read and consider. Specifically, the .NET Profiler Trace stops 60 seconds after it is started. Therefore, if you can reproduce the issue, have the reproduction steps ready before you start profiling. If you cannot reproduce the issue, you may need to run the trace a few times until the slowness or hang occurs. The collection time can be increased up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900. After selecting the instance to perform the trace on, press the Collect Profiler Trace button, wait for the profiler to start as seen here, and then reproduce the issue or wait for it to occur.
Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status starting window
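If you want a controlled way to exercise the profiler while it runs, a deliberately slow test endpoint makes a reliable reproduction. The following ASP.NET Core controller is a minimal sketch; the namespace, controller name, route, and two-second delay are hypothetical, not part of the feature itself:

```csharp
using System.Threading;
using Microsoft.AspNetCore.Mvc;

namespace ProfilerDemo.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SlowController : ControllerBase
    {
        // GET /api/slow blocks the request thread for two seconds, so the
        // profiler report flags the request as slow and attributes the time
        // to Application Code.
        [HttpGet]
        public IActionResult Get()
        {
            Thread.Sleep(2000); // deliberate blocking delay for the repro
            return Ok("done");
        }
    }
}
```

Hitting this endpoint a few times while the trace is running gives the analysis step a predictable slow request to report on.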
After the issue is reproduced, the .NET Profiler Trace continues to the next step, stopping, as seen here.
Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status stopping window
Once stopped, the process continues to the analysis phase if you selected the Collect and Analyze Data option, as seen in the following image; otherwise, you are provided a link to download the file for analysis on your local machine. The analysis can take some time, so be patient.
Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status analyzing window
After the analysis is complete, you can either view the analysis online or download the trace file for local analysis.
How to download the .NET Profiler Trace
Once the analysis is complete, you can view the report by selecting the link in the Reports column, as seen here.
Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status complete window
Clicking on the report renders the following. There is some useful information in this report, such as a list of slow requests, failed requests, thread call stacks, and CPU stacks. Also shown is a breakdown of where the time was spent during response generation, in categories like Application Code, Platform, and Network. In this case, all the time is spent in the application code.
Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace review the Report
To find out specifically where in the application code this request spends its time, analyze the trace locally.
How to analyze a .NET Profiler Trace
After downloading the trace by selecting the link in the Data column, you can use a tool named PerfView, which is downloadable on GitHub here. Begin by opening PerfView and double-clicking the ".DIAGSESSION" file; after some moments, expand it to render the Event Trace Log (ETL) file, as shown here.
Analyze Azure App Service .NET Profiler Trace with PerfView
Double-click on the Thread Time (with StartStop Activities) Stacks node, which opens a new window similar to the one shown next. If your App Service is configured as out-of-process, select the dotnet process associated with your app code. If your App Service is in-process, select the w3wp process.
Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process
Double-click on dotnet and another window is rendered, as shown here. From the report reviewed earlier, it is already clear where the slowness is coming from; find that page in the Name column, or search for it by entering the page name into the Find text box.
Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, method, and class discovery
Once found, right-click on the row and select Drill Into from the pop-up menu, as shown here. Select the Call Tree tab, and the reason for the issue is rendered, showing which method was performing slowly.
Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, root cause
This example is relatively simple. As you analyze more performance issues using PerfView to analyze .NET Profiler Traces, your ability to find the root cause of more complicated performance issues will grow.
When to use .NET Profiler tracing vs. a memory dump
The same issue can be seen in a memory dump; however, there are some scenarios where a .NET Profiler Trace is the better option. Table 1 describes the scenarios for when to capture a .NET Profiler Trace versus when to capture a memory dump.
Issue Type | Symptom Code | Symptom | Stack | Startup Issue | Intermittent | Scenario |
--- | --- | --- | --- | --- | --- | --- |
Performance | 200 | Requests take 500 ms to 2.5 seconds, or up to 60 seconds | ASP.NET/ASP.NET Core | No | No | Profiler |
Performance | 200 | Requests take > 60 and < 230 seconds | ASP.NET/ASP.NET Core | No | No | Dump |
Performance | 502.3/500.121/503 | Requests take >= 120 and <= 230 seconds | ASP.NET | No | No | Dump, Profiler |
Performance | 502.3/500.121/503 | Requests time out (>= 230 seconds) | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
Performance | 502.3/500.121/503 | App hangs or deadlocks (ex: due to an async anti-pattern) | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
Performance | 502.3/500.121/503 | App hangs on startup (ex: caused by a non-async deadlock issue) | ASP.NET/ASP.NET Core | Yes | Yes/No | Dump |
Performance | 502.3/500.121 | Requests time out (>= 230 seconds) | ASP.NET/ASP.NET Core | No | No | Dump |
Availability | 502.3/500.121/503 | High CPU causing app downtime | ASP.NET | No | No | Profiler, Dump |
Availability | 502.3/500.121/503 | High memory causing app downtime | ASP.NET/ASP.NET Core | No | No | Dump |
Availability | 500.0[121]/503 | SQLException or some other exception causes app downtime | ASP.NET | No | No | Dump, Profiler |
Availability | 500.0[121]/503 | App crashing due to a fatal exception at the native layer | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
Availability | 500.0[121]/503 | App crashing due to an exit code (ex: 0xC0000374) | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
Availability | 500.0 | App throwing non-fatal exceptions (in the context of a request) | ASP.NET | No | No | Profiler, Dump |
Availability | 500.0 | App throwing non-fatal exceptions (in the context of a request) | ASP.NET/ASP.NET Core | No | Yes/No | Dump |
Table 1, when to capture a .NET Profiler Trace or a Memory Dump on Azure App Service, Diagnose and solve problems
Use this table as a guide to help decide how to approach solving performance and availability problems occurring in your application source code. Here are descriptions of the column headings.
- Issue Type – Performance means that a request to the app is responding or processing the response, but not at the expected speed. Availability means that the request is failing or consuming more resources than expected.
- Symptom Code – the HTTP status and/or substatus code returned by the request.
- Symptom – a description of the behavior experienced while engaging with the application.
- Stack – this table targets .NET, specifically ASP.NET and ASP.NET Core applications.
- Startup Issue – "No" means the issue is not at startup; "Yes/No" means the Scenario is also useful for troubleshooting startup issues.
- Intermittent – "No" means the issue is not intermittent, or that it can be reproduced; "Yes/No" means the Scenario is also useful when the issue happens randomly or cannot be reproduced, meaning the tool can be set to trigger on a specific event or left running for a set amount of time until the exception happens.
- Scenario – "Profiler" means that collecting a .NET Profiler Trace is recommended; "Dump" means that a memory dump is your best option. If both are listed, then both can be useful when the given symptoms and symptom codes are present.
You might find the videos in Table 2 useful; they instruct you how to collect and analyze a memory dump or .NET Profiler Trace.
Product | Stack | Hosting | Symptom | Capture | Analyze | Scenario |
--- | --- | --- | --- | --- | --- | --- |
App Service | Windows | in-process | High CPU | link | link | Dump |
App Service | Windows | in-process | High Memory | link | link | Dump |
App Service | Windows | in-process | Terminate | link | link | Dump |
App Service | Windows | in-process | Hang | link | link | Dump |
App Service | Windows | out-of-process | High CPU | link | link | Dump |
App Service | Windows | out-of-process | High Memory | link | link | Dump |
App Service | Windows | out-of-process | Terminate | link | link | Dump |
App Service | Windows | out-of-process | Hang | link | link | Dump |
Function App | Windows | in-process | High CPU | link | link | Dump |
Function App | Windows | in-process | High Memory | link | link | Dump |
Function App | Windows | in-process | Terminate | link | link | Dump |
Function App | Windows | in-process | Hang | link | link | Dump |
Function App | Windows | out-of-process | High CPU | link | link | Dump |
Function App | Windows | out-of-process | High Memory | link | link | Dump |
Function App | Windows | out-of-process | Terminate | link | link | Dump |
Function App | Windows | out-of-process | Hang | link | link | Dump |
Azure WebJob | Windows | in-process | High CPU | link | link | Dump |
App Service | Windows | in-process | High CPU | link | link | .NET Profiler |
App Service | Windows | in-process | Hang | link | link | .NET Profiler |
App Service | Windows | in-process | Exception | link | link | .NET Profiler |
App Service | Windows | out-of-process | High CPU | link | link | .NET Profiler |
App Service | Windows | out-of-process | Hang | link | link | .NET Profiler |
App Service | Windows | out-of-process | Exception | link | link | .NET Profiler |
Table 2, short video instructions on capturing and analyzing dumps and profiler traces
Here are a few other helpful videos for troubleshooting Azure App Service Availability and Performance issues:
Prior to capturing and analyzing memory dumps, consider viewing this short video: Setting up WinDbg to analyze Managed code memory dumps, and this blog post titled Capture memory dumps on the Azure App Service platform.
Questions & Answers
- Q: What are the prerequisites for using the .NET Profiler Trace feature in Azure App Service?
A: To use the .NET Profiler Trace feature in Azure App Service, the application must be running on a Standard or higher Stock Keeping Unit (SKU) with the Always On setting enabled. If these conditions are not met, the tool will not function, and error messages will be displayed indicating the need for these configurations.
- Q: How can you extend the default collection time for a .NET Profiler Trace beyond 60 seconds?
A: The default collection time for a .NET Profiler Trace is 60 seconds, but it can be extended up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900. This allows for a longer duration to capture the necessary data for analysis.
- Q: When should you use a .NET Profiler Trace instead of a memory dump for diagnosing performance issues in an ASP.NET application?
A: A .NET Profiler Trace is recommended for diagnosing performance issues where requests take between 500 milliseconds and 2.5 seconds, or up to 60 seconds. It is also useful for identifying high CPU usage causing app downtime. In contrast, a memory dump is more suitable for scenarios where requests take longer than 60 seconds, the application hangs or deadlocks, or there are issues related to high memory usage or app crashes due to fatal exceptions.
Keywords
Microsoft Azure, Azure App Service, .NET Profiler Trace, ASP.NET performance, Azure debugging tools, .NET performance issues, Azure diagnostic tools, Collect .NET Profiler Trace, Analyze .NET Profiler Trace, Azure portal, Performance troubleshooting, ASP.NET application, Slow ASP.NET app, Azure Standard SKU, Always On setting, Memory dump vs profiler trace, Perf View analysis, Azure performance diagnostics, .NET application profiling, Diagnose ASP.NET slowness, Azure app performance, High CPU usage ASP.NET, Azure app diagnostics, .NET Profiler configuration, Azure app service performance
Updated Mar 05, 2025