Azure App Service Auto-Heal: Capturing Relevant Data During Performance Issues
Introduction

Azure App Service is a powerful platform that simplifies the deployment and management of web applications. However, maintaining application performance and availability is crucial, and when performance issues arise, identifying the root cause can be challenging. This is where Auto-Heal in Azure App Service becomes a game-changer. Auto-Heal is a diagnostic and recovery feature that allows you to proactively detect and mitigate issues affecting your application's performance. It enables automatic corrective actions and helps capture vital diagnostic data to troubleshoot problems efficiently. In this blog, we'll explore how Auto-Heal works, its configuration, and how it assists in diagnosing performance bottlenecks.

What is Auto-Heal in Azure App Service?

Auto-Heal is a self-healing mechanism that allows you to define custom rules to detect and respond to problematic conditions in your application. When an issue meets the defined conditions, Auto-Heal can take actions such as:

- Recycling the application process
- Collecting diagnostic dumps
- Logging additional telemetry for analysis
- Triggering a custom action

By leveraging Auto-Heal, you can minimize downtime, improve reliability, and reduce manual intervention for troubleshooting.

Configuring Auto-Heal in Azure App Service

To set up Auto-Heal, follow these steps.

Access Auto-Heal settings:
1. Navigate to the Azure Portal.
2. Go to your App Service.
3. Select Diagnose and Solve Problems.
4. Search for Auto-Heal, or go to the Diagnostic Tools tile and select Auto-Heal.

Define Auto-Heal rules. Auto-Heal allows you to define rules based on:

- Request Duration: If a request takes too long, trigger an action.
- Memory Usage: If memory consumption exceeds a certain threshold.
- HTTP Status Codes: If multiple requests return specific status codes (e.g., 500 errors).
- Request Count: If excessive requests occur within a defined time frame.
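Behind the portal UI, these conditions are persisted into the site's autoHealRules configuration. Purely as an illustration, here is the general shape of that settings block expressed as a Python dict — the property names follow the Microsoft.Web ARM schema as I understand it, and every value is an example threshold taken from this article, not a recommendation:

```python
# Illustrative shape of the autoHealEnabled/autoHealRules site-config block.
# Property names follow the Microsoft.Web ARM schema; all thresholds are
# examples only -- adjust them to your own scenario.
auto_heal_settings = {
    "autoHealEnabled": True,
    "autoHealRules": {
        "triggers": {
            # 1 request taking >= 30 seconds within a 5-minute window
            "slowRequests": {
                "timeTaken": "00:00:30",
                "count": 1,
                "timeInterval": "00:05:00",
            },
            # 1 HTTP 500 response within a 60-second window
            "statusCodes": [
                {
                    "status": 500,
                    "subStatus": 0,
                    "win32Status": 0,
                    "count": 1,
                    "timeInterval": "00:01:00",
                }
            ],
            # process private bytes threshold, in KB (~2 GB here)
            "privateBytesInKB": 2000000,
        },
        "actions": {
            # "Recycle", "LogEvent", or "CustomAction" map to the portal choices
            "actionType": "Recycle",
            # don't act during the first 10 minutes after process start
            "minProcessExecutionTime": "00:10:00",
        },
    },
}
```

You normally never hand-write this block — the portal steps below do it for you — but seeing the structure makes it clearer what each portal prompt maps to.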
Configure Auto-Heal actions. Once conditions are set, you can configure one or more of the following actions:

- Recycle Process: Restart the worker process to restore the application.
- Log Events: Capture logs for further analysis.
- Custom Action: You can do the following:
  - Run Diagnostics: Gather diagnostic data (Memory Dump, CLR Profiler, CLR Profiler with Thread Stacks, Java Memory Dump, Java Thread Dump) for troubleshooting.
  - Run any Executable: Run scripts to automate corrective measures.

Capturing Relevant Data During Performance Issues

One of the most powerful aspects of Auto-Heal is its ability to capture valuable diagnostic data when an issue occurs. Here's how:

- Collecting memory dumps: Memory dumps provide insights into application crashes, high CPU, or high memory usage. These can be analyzed using WinDbg or DebugDiag.
- Enabling logs for deeper insights: Auto-Heal logs detailed events in the Kudu Console, Application Insights, and Azure Monitor Logs. This helps identify patterns and root causes.
- Collecting CLR Profiler traces: CLR Profiler traces capture call stacks and exceptions, providing a user-friendly report for diagnosing slow responses and HTTP issues at the application code level.

In this article, we will cover the steps to configure an Auto-Heal rule for the following performance issues:

- Capture a .NET Profiler/CLR Profiler trace for slow responses.
- Capture a .NET Profiler/CLR Profiler trace for HTTP 5XX status codes.
- Capture a memory dump for high memory usage.

Auto-Heal rule to capture a .NET Profiler trace for slow responses:

1. Navigate to your App Service in the Azure Portal, and click Diagnose and Solve Problems.
2. Search for Auto-Heal, or go to the Diagnostic Tools tile and select Auto-Heal.
3. Click 'On'.
4. Select Request Duration and click Add Slow Request rule.
5. Add the following information with respect to how much slowness you are facing:

After how many slow requests you want this condition to kick in?
- After how many slow requests should this Auto-Heal rule start capturing relevant data.

What should be the minimum duration (in seconds) for these slow requests? - How many seconds should a request take to be considered a slow request.

What is the time interval (in seconds) in which the above condition should be met? - Within how many seconds should the defined slow requests occur.

What is the request path (leave blank for all requests)? - If a specific URL is slow, you can add it in this section, or leave it blank.

In the screenshot below, the rule is set for this example: "1 request taking 30 seconds within 5 minutes (300 seconds) should trigger this rule". Add the values in the available text boxes and click "Ok".

6. Select Custom Action and select the CLR Profiler with Thread Stacks option.
7. The tool options provide three choices:

- CollectKillAnalyze: The tool will collect the data, analyze it, generate the report, and recycle the process.
- CollectLogs: The tool will collect the data only. It will not analyze the data, generate the report, or recycle the process.
- Troubleshoot: The tool will collect the data, analyze it, and generate the report, but it will not recycle the process.

Select the option according to your scenario and click "Save".

8. Review the new settings of the rule. Clicking "Save" will cause a restart, as this is a configuration-level change that requires a restart to take effect. It is therefore advised to make such changes during non-business hours.
9. Click "Save". Once you do, the app will restart, the rule will become active, and it will monitor for slow requests.

Auto-Heal rule to capture a .NET Profiler trace for HTTP 5XX status codes:

For this scenario, steps 1, 2, and 3 remain the same as in the slow requests scenario above. The following changes:

1. Select Status Code and click Add Status Code rule.
Add the following values with respect to the status code or range of status codes you want this rule to be triggered by:

Do you want to set this rule for a specific status code or a range of status codes? - Whether this rule applies to a single status code or a range of status codes.

After how many requests do you want this condition to kick in? - After how many requests returning the concerned status code should this Auto-Heal rule start capturing relevant data.

What should be the status code for these requests? - Mention the status code here.

What should be the sub-status code for these requests? - Mention the sub-status code here, if any; otherwise leave it blank.

What should be the win32-status code for these requests? - Mention the win32 status code here, if any; otherwise leave it blank.

What is the time interval (in seconds) in which the above condition should be met? - Within how many seconds should the defined status code occur.

What is the request path (leave blank for all requests)? - If a specific URL is returning that status code, you can add it in this section, or leave it blank.

Add the values according to your scenario and click "Ok". In the screenshot below, the rule is set for this example: "1 request returning HTTP 500 within 60 seconds should trigger this rule".

After adding the above information, follow steps 6, 7, 8, and 9 from the first scenario (slow requests), and the Auto-Heal rule for the status code will become active and monitor for this performance issue.

Auto-Heal rule to capture a memory dump for high memory usage:

For this scenario, steps 1, 2, and 3 remain the same as in the slow requests scenario above. The following changes:

1. Select Memory Limit and click Configure Private Bytes rule.
According to your application's memory usage, add the private bytes value (in KB) at which this rule should be triggered. In the screenshot below, the rule is set for this example: "The application process using 2,000,000 KB (~2 GB) should trigger this rule". Click "Ok".

3. In Configure Actions, select Custom Action and click Memory Dump.
4. The tool options provide three choices:

- CollectKillAnalyze: The tool will collect the data, analyze it, generate the report, and recycle the process.
- CollectLogs: The tool will collect the data only. It will not analyze the data, generate the report, or recycle the process.
- Troubleshoot: The tool will collect the data, analyze it, and generate the report, but it will not recycle the process.

Select the option according to your scenario.

5. For the memory dumps/reports to be saved, you must select an existing storage account or create a new one: click Select, then create a new account or choose an existing one.
6. Once the storage account is set, click "Save". Review the rule settings and click "Save" again. Clicking "Save" will cause a restart, as this is a configuration-level change that requires a restart to take effect. It is therefore advised to make such changes during non-business hours.

Best Practices for Using Auto-Heal

- Start with conservative rules: avoid overly aggressive auto-restarts to prevent unnecessary disruptions.
- Monitor performance trends: use Azure Monitor to correlate Auto-Heal events with performance metrics.
- Regularly review logs: periodically analyze collected logs and dumps to fine-tune your Auto-Heal strategy.
- Combine with Application Insights: leverage Application Insights for end-to-end monitoring and deeper diagnostics.

Conclusion

Auto-Heal in Azure App Service is a powerful tool that not only helps maintain application stability but also provides critical diagnostic data when performance issues arise.
By proactively setting up Auto-Heal rules and leveraging its diagnostic capabilities, you can minimize downtime and streamline troubleshooting efforts. Have you used Auto-Heal in your application? Share your experiences and insights in the comments! Stay tuned for more Azure tips and best practices!

Capture .NET Profiler Trace on the Azure App Service platform
Summary

The article provides guidance on using the .NET Profiler Trace feature in Microsoft Azure App Service to diagnose performance issues in ASP.NET applications. It explains how to configure and collect the trace by accessing the Azure Portal, navigating to the Azure App Service, and selecting the "Collect .NET Profiler Trace" feature. Users can choose between "Collect and Analyze Data" or "Collect Data only" and must select the instance to perform the trace on. The trace stops after 60 seconds but can be extended up to 15 minutes. After analysis, users can view the report online or download the trace file for local analysis, which includes information like slow requests and CPU stacks. The article also details how to analyze the trace using PerfView, a tool available on GitHub, to identify performance issues. Additionally, it provides a table outlining scenarios for using a .NET Profiler Trace or a memory dump based on factors like issue type and symptom code. This tool is particularly useful for diagnosing slow or hung ASP.NET applications and is available only in Standard or higher SKUs with the Always On setting enabled.

In this article:
- How to configure and collect the .NET Profiler Trace
- How to download the .NET Profiler Trace
- How to analyze a .NET Profiler Trace
- When to use .NET Profiler tracing vs. a memory dump

The tool is exceptionally suited for scenarios where an ASP.NET application is performing slower than expected or gets hung. As shown in Figure 1, this feature is available only in a Standard or higher Stock Keeping Unit (SKU) with Always On enabled. If you try to configure a .NET Profiler Trace without both configurations, the following messages are rendered.

Azure App Service Diagnose and solve problems blade in the Azure Portal error messages:

Error – This tool is supported only on Standard, Premium, and Isolated Stock Keeping Unit (SKU) only with AlwaysOn setting enabled to TRUE.
Error – We determined that the web app is not "Always-On" enabled and diagnostic does not work reliably with Auto Heal. Turn on the Always-On setting by going to the Application Settings for the web app and then run these tools.

How to configure and collect the .NET Profiler Trace

To configure a .NET Profiler Trace, access the Azure Portal and navigate to the Azure App Service that is experiencing a performance issue. Select Diagnose and solve problems and then the Diagnostic Tools tile.

Azure App Service Diagnose and solve problems blade in the Azure Portal

Select the "Collect .NET Profiler Trace" feature on the Diagnostic Tools blade and the following blade is rendered. Notice that you can only select Collect and Analyze Data or Collect Data only. Choose the one you prefer, but do consider having the feature perform the analysis; you can download the trace for offline analysis if necessary. Also notice that you need to select the instance on which you want to perform the trace. In this scenario there is only one, so the selection is simple. However, if your app runs on multiple instances, either select them all or, if you have identified a specific instance that is behaving slowly, select only that one. You realize the best results if you can isolate a single instance enough that the request you send is the only one received on that instance. However, in a scenario where the request or instance is not known, the trace still adds value and insights.

Adding a thread report means a list of all the threads in the process is also collected at the end of the profiler trace. The thread report is especially useful if you are troubleshooting hung processes, deadlocks, or requests taking more than 60 seconds. This pauses your process for a few seconds until the thread dump is generated. CAUTION: a thread report is NOT recommended if you are experiencing high CPU in your application; you may experience issues during trace analysis if CPU consumption is high.
Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace blade in the Azure Portal

There are a few points called out in the previous image which are important to read and consider. Specifically, the .NET Profiler Trace will stop 60 seconds after it is started. Therefore, if you can reproduce the issue, have the reproduction steps ready before you start the profiling. If you are not able to reproduce the issue, then you may need to run the trace a few times until the slowness or hang occurs. The collection time can be increased up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900.

After selecting the instance to perform the trace on, press the Collect Profiler Trace button, wait for the profiler to start as seen here, then reproduce the issue or wait for it to occur.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status starting window

After the issue is reproduced, the .NET Profiler Trace continues to the next step of stopping, as seen here.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status stopping window

Once stopped, the process continues to the analysis phase if you selected the Collect and Analyze Data option, as seen in the following image; otherwise you are provided a link to download the file for analysis on your local machine. The analysis can take some time, so be patient.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status analyzing window

After the analysis is complete, you can either view the analysis online or download the trace file for local analysis.

How to download the .NET Profiler Trace

Once the analysis is complete, you can view the report by selecting the link in the Reports column, as seen here.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status complete window

Clicking on the report, you see the following.
There is some useful information in this report, like a list of slow requests, failed requests, thread call stacks, and CPU stacks. Also shown is a breakdown of where the time was spent during the response generation, into categories like Application Code, Platform, and Network. In this case, all the time is spent in the application code.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace review the Report

To find out specifically where in the application code this request spends its time, perform the analysis of the trace locally.

How to analyze a .NET Profiler Trace

After downloading the profiler trace by selecting the link in the Data column, you can use a tool named PerfView, which is downloadable on GitHub. Begin by opening PerfView and double-clicking on the ".DIAGSESSION" file; after some moments, expand it to render the Event Trace Log (ETL) file, as shown here.

Analyze Azure App Service .NET Profiler Trace with PerfView

Double-click on the Thread Time (with StartStop Activities) Stacks node, which opens a new window similar to the one shown next. If your App Service is configured as out-of-process, select the dotnet process associated with your app code. If your App Service is in-process, select the w3wp process.

Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process

Double-click on dotnet and another window is rendered, as shown here. From the report reviewed earlier, it is clear where the slowness is coming from; find that in the Name column or search for it by entering the page name into the Find text box.

Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, method, and class discovery

Once found, right-click on the row and select Drill Into from the pop-up menu, shown here. Select the Call Tree tab, and the reason for the issue renders, showing which request was performing slowly.
Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, root cause

This example is relatively simple. As you analyze more performance issues using PerfView to analyze .NET Profiler Traces, your ability to find the root cause of more complicated performance issues improves.

When to use .NET Profiler tracing vs. a memory dump

The same issue would be visible in a memory dump; however, there are some scenarios where a .NET Profiler trace would be the better choice. Table 1 describes scenarios for when to capture a .NET Profiler trace or a memory dump.

Issue Type | Symptom Code | Symptom | Stack | Startup Issue | Intermittent | Scenario
Performance | 200 | Requests take 500 ms to 2.5 seconds, or <= 60 seconds | ASP.NET/ASP.NET Core | No | No | Profiler
Performance | 200 | Requests take > 60 seconds & < 230 seconds | ASP.NET/ASP.NET Core | No | No | Dump
Performance | 502.3/500.121/503 | Requests take >= 120 to <= 230 seconds | ASP.NET | No | No | Dump, Profiler
Performance | 502.3/500.121/503 | Requests timing out >= 230 seconds | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Performance | 502.3/500.121/503 | App hangs or deadlocks (e.g., due to async anti-pattern) | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Performance | 502.3/500.121/503 | App hangs on startup (e.g., caused by non-async deadlock issue) | ASP.NET/ASP.NET Core | No | Yes/No | Dump
Performance | 502.3/500.121 | Request timing out >= 230 seconds (timeout) | ASP.NET/ASP.NET Core | No | No | Dump
Availability | 502.3/500.121/503 | High CPU causing app downtime | ASP.NET | No | No | Profiler, Dump
Availability | 502.3/500.121/503 | High memory causing app downtime | ASP.NET/ASP.NET Core | No | No | Dump
Availability | 500.0[121]/503 | SQLException or some exception causes app downtime | ASP.NET | No | No | Dump, Profiler
Availability | 500.0[121]/503 | App crashing due to fatal exception at native layer | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Availability | 500.0[121]/503 | App crashing due to exit code (e.g., 0xC0000374) | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Availability | 500.0 | App throwing nonfatal exceptions (during the context of a request) | ASP.NET | No | No | Profiler, Dump
Availability | 500.0 | App throwing nonfatal exceptions (during the context of a request) | ASP.NET/ASP.NET Core | No | Yes/No | Dump

Table 1, when to capture a .NET Profiler Trace or a Memory Dump on Azure App Service, Diagnose and solve problems

Use this list as a guide to help decide how to approach solving performance and availability problems occurring in your application source code. Here are some descriptions of the column headings:

- Issue Type – "Performance" means a request to the app is responding or processing the response, but not at the expected speed. "Availability" means the request is failing or consuming more resources than expected.
- Symptom Code – the HTTP status and/or sub-status code returned by the request.
- Symptom – a description of the behavior experienced while engaging with the application.
- Stack – this table targets .NET, specifically ASP.NET and ASP.NET Core applications.
- Startup Issue – "No" means the issue is not at startup. "Yes/No" means the scenario is also useful for troubleshooting startup issues.
- Intermittent – "No" means the issue is not intermittent, or it can be reproduced. "Yes/No" means the scenario is useful if the issue happens randomly or cannot be reproduced, meaning the tool can be set to trigger on a specific event or left running for a specific amount of time until the exception happens.
- Scenario – "Profiler" means collecting a .NET Profiler trace is recommended. "Dump" means a memory dump would be your best option. If both are listed, then both can be useful when the given symptoms and status codes are present.

You might find the videos in Table 2 useful, which instruct you how to collect and analyze a memory dump or .NET Profiler trace.
Product | Stack | Hosting | Symptom | Capture | Analyze | Scenario
App Service | Windows | in | High CPU | link | link | Dump
App Service | Windows | in | High Memory | link | link | Dump
App Service | Windows | in | Terminate | link | link | Dump
App Service | Windows | in | Hang | link | link | Dump
App Service | Windows | out | High CPU | link | link | Dump
App Service | Windows | out | High Memory | link | link | Dump
App Service | Windows | out | Terminate | link | link | Dump
App Service | Windows | out | Hang | link | link | Dump
App Service | Windows | in | High CPU | link | link | Dump
Function App | Windows | in | High Memory | link | link | Dump
Function App | Windows | in | Terminate | link | link | Dump
Function App | Windows | in | Hang | link | link | Dump
Function App | Windows | out | High CPU | link | link | Dump
Function App | Windows | out | High Memory | link | link | Dump
Function App | Windows | out | Terminate | link | link | Dump
Function App | Windows | out | Hang | link | link | Dump
Azure WebJob | Windows | in | High CPU | link | link | Dump
App Service | Windows | in | High CPU | link | link | .NET Profiler
App Service | Windows | in | Hang | link | link | .NET Profiler
App Service | Windows | in | Exception | link | link | .NET Profiler
App Service | Windows | out | High CPU | link | link | .NET Profiler
App Service | Windows | out | Hang | link | link | .NET Profiler
App Service | Windows | out | Exception | link | link | .NET Profiler

Table 2, short video instructions on capturing and analyzing dumps and profiler traces

Here are a few other helpful videos for troubleshooting Azure App Service availability and performance issues:
- View Application EventLogs Azure App Service
- Add Application Insights To Azure App Service

Prior to capturing and analyzing memory dumps, consider viewing this short video: Setting up WinDbg to analyze Managed code memory dumps, and this blog post titled: Capture memory dumps on the Azure App Service platform.

Question & Answers

Q: What are the prerequisites for using the .NET Profiler Trace feature in Azure App Service?
A: To use the .NET Profiler Trace feature in Azure App Service, the application must be running on a Standard or higher Stock Keeping Unit (SKU) with the Always On setting enabled. If these conditions are not met, the tool will not function, and error messages will be displayed indicating the need for these configurations.

Q: How can you extend the default collection time for a .NET Profiler Trace beyond 60 seconds?

A: The default collection time for a .NET Profiler Trace is 60 seconds, but it can be extended up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900. This allows a longer window to capture the necessary data for analysis.

Q: When should you use a .NET Profiler Trace instead of a memory dump for diagnosing performance issues in an ASP.NET application?

A: A .NET Profiler Trace is recommended for diagnosing performance issues where requests take between 500 milliseconds and 2.5 seconds, or less than 60 seconds overall. It is also useful for identifying high CPU usage causing app downtime. In contrast, a memory dump is more suitable for scenarios where requests take longer than 60 seconds, the application hangs or deadlocks, or there are issues related to high memory usage or app crashes due to fatal exceptions.
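As a rough rule of thumb, the scenario guidance in these answers (and in Table 1 above) can be condensed into a tiny triage helper. This is an illustrative Python sketch with deliberately simplified thresholds — not an official decision procedure, and it ignores the startup/intermittent columns:

```python
def suggest_capture(symptom_seconds=None, hang=False, high_cpu=False, high_memory=False):
    """Rough triage condensing Table 1: which artifact(s) to capture.

    symptom_seconds: approximate request duration; <= 60s favors a profiler
    trace, longer favors a memory dump. Hangs/deadlocks and high memory call
    for a dump; high CPU can benefit from both.
    """
    tools = set()
    if hang or high_memory:
        tools.add("dump")
    if high_cpu:
        tools.update({"profiler", "dump"})
    if symptom_seconds is not None:
        tools.add("profiler" if symptom_seconds <= 60 else "dump")
    return sorted(tools)

# e.g. a request taking ~2 seconds -> profiler trace;
# requests timing out after 230 seconds -> memory dump.
```

Treat this only as a mnemonic for the table; the table rows remain the authoritative guidance.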
Keywords: Microsoft Azure, Azure App Service, .NET Profiler Trace, ASP.NET performance, Azure debugging tools, .NET performance issues, Azure diagnostic tools, Collect .NET Profiler Trace, Analyze .NET Profiler Trace, Azure portal, Performance troubleshooting, ASP.NET application, Slow ASP.NET app, Azure Standard SKU, Always On setting, Memory dump vs profiler trace, PerfView analysis, Azure performance diagnostics, .NET application profiling, Diagnose ASP.NET slowness, Azure app performance, High CPU usage ASP.NET, Azure app diagnostics, .NET Profiler configuration, Azure app service performance

Dotnet core Oracle 23c SSL connection not working on Linux environment and works on Windows
Our DataDirect ADO.NET Oracle driver is having an issue with SSL connections on the Linux platform, while the same connection works in a Windows environment with an Oracle 23c server. Are there any known limitations with .NET Core on Linux with SSL/TLS connections?

Trace:

InnerException: System.IO.IOException
Message: Unable to read data from the transport connection: Connection reset by peer.
Source: System.Net.Sockets
Stack Trace:
at System.Net.Sockets.NetworkStream.Read(Span`1 buffer)
at System.Net.Security.SslStream.EnsureFullTlsFrameAsync[TIOAdapter](TIOAdapter adapter)
at System.Net.Security.SslStream.ReadAsyncInternal[TIOAdapter](TIOAdapter adapter, Memory`1 buffer)
at System.Net.Security.SslStream.Read(Byte[] buffer, Int32 offset, Int32 count)

The same application works on the Windows environment.

🚀 Kickstart Your .NET Journey with Microsoft Learn!
Hey .NET enthusiasts! Are you new to .NET or looking to refresh your skills? Microsoft Learn has an Introduction to .NET module that walks you through the fundamentals of .NET, .NET Core, and the .NET Framework. This is a free, hands-on learning experience designed to get you started quickly.

What you'll learn:
- The basics of .NET and why it's powerful
- How .NET supports multiple platforms
- Running your first .NET application

Start learning here: Microsoft Learn Introduction to .NET. Have you used .NET in your projects? Share your experiences and let's discuss below!

How to find out the type of a generic class?
Example: create a new List<string> instance:

$a = [System.Collections.Generic.List[string]]::new()

With:

$a.GetType()

I only get:

IsPublic IsSerial Name   BaseType
-------- -------- ----   --------
True     True     List`1 System.Object

How can I tell what type the list is (in my case, string)?

Giving Granular Controls to Azure EasyAuth
When working with web applications, implementing authentication and authorization is crucial for security. Traditionally, there have been various approaches, from simple username/password inputs to OAuth-based flows. However, implementing authentication and authorization logic can be quite complex and challenging. Fortunately, Azure offers a feature called EasyAuth, which simplifies this process. EasyAuth is built into Azure PaaS services like App Service, Functions, Container Apps, and Static Web Apps. The best part of using EasyAuth is that you don't have to modify existing code for the integration. Having said that, since EasyAuth follows one of the cloud architecture patterns – the sidecar pattern – it secures the entire web application. Fine-tuned control – such as protecting only specific areas or excluding certain parts – can therefore be tricky to achieve. To address this limitation, the developer community has explored various solutions:

- EasyAuth for App Service by Maxime Rouiller
- EasyAuth for Azure Container Apps by John Reilly
- EasyAuth for Azure Static Web Apps by Anthony Chu

These approaches are still valid today but require updates to align with the latest .NET features. Throughout this post, I'm going to explore how to transform the authentication token generated by EasyAuth in Azure App Service and Azure Container Apps into a format usable by Blazor and other ASP.NET Core web applications. Please refer to my previous post about EasyAuth on Azure Static Web Apps.

Download NuGet Package

A few NuGet packages are available for ASP.NET Core web applications to integrate the Client Principal token generated by EasyAuth. Currently, the packages support Microsoft Entra ID and GitHub OAuth.

NOTE: These packages are not official Microsoft packages. Therefore, use these packages with your own care.
The packages are:

- Aliencube.Azure.Extensions.EasyAuth
- Aliencube.Azure.Extensions.EasyAuth.EntraID
- Aliencube.Azure.Extensions.EasyAuth.GitHub

You can find the source code of these NuGet packages on this GitHub repository.

Getting Started with Azure Developer CLI

The NuGet package repository has sample apps for you to deploy ASP.NET Core web apps to both Azure App Service and Azure Container Apps.

1. Clone the repository.
2. Make sure you have the latest version of Azure CLI, Azure Bicep CLI and Azure Developer CLI.
3. Login to Azure by running the azd auth login command.
4. Deploy the sample apps by running the azd up command. You might be asked to provide both Client ID and Client Secret values of a GitHub OAuth app. If you don't have one, leave them blank.

Apply NuGet Package to Web App

The example uses a Blazor web app, but the same approach applies to any ASP.NET Core web application.

Microsoft Entra ID

Add the NuGet package to your Blazor application.

dotnet add package Aliencube.Azure.Extensions.EasyAuth.EntraID

NOTE: At the time of writing, it's still in preview. Therefore, use the --prerelease tag at the end of the command, like dotnet add package Aliencube.Azure.Extensions.EasyAuth.EntraID --prerelease.

Open the Program.cs file and find the code line var app = builder.Build();. Then, add dependencies for authentication and authorisation.

// 👇👇👇 Add dependencies for authentication/authorisation
builder.Services.AddAuthentication(EasyAuthAuthenticationScheme.Name)
    .AddAzureEasyAuthHandler<EntraIDEasyAuthAuthenticationHandler>();
builder.Services.AddAuthorization();
// 👆👆👆 Add dependencies for authentication/authorisation

var app = builder.Build();

In the same Program.cs file, find the code line app.Run();. Then activate authentication and authorisation.

// 👇👇👇 Activate authentication/authorisation
app.UseAuthentication();
app.UseAuthorization();
// 👆👆👆 Activate authentication/authorisation

app.Run();

Apply the Authorize attribute to a specific page component.
@page "/random-page-url"
@* 👇👇👇 Apply authorisation *@
@using Aliencube.Azure.Extensions.EasyAuth
@using Microsoft.AspNetCore.Authorization
@attribute [Authorize(AuthenticationSchemes = EasyAuthAuthenticationScheme.Name)]
@* 👆👆👆 Apply authorisation *@

Deploy the app to either Azure App Service or Azure Container Apps. After deployment, configure the authentication settings. Open your web browser, navigate to the page component you applied authorisation to, and see the 401 (Unauthorized) error. Sign in to the web app through /.auth/login/aad, visit the page again, and verify you're able to see the content on that page.

GitHub OAuth

It's mostly the same as using Microsoft Entra ID, except it uses a different NuGet package. Install the GitHub-specific NuGet package.

dotnet add package Aliencube.Azure.Extensions.EasyAuth.GitHub

NOTE: At the time of writing, it's still in preview. Therefore, use the --prerelease tag at the end of the command, like dotnet add package Aliencube.Azure.Extensions.EasyAuth.GitHub --prerelease.

Open the Program.cs file and find the code line var app = builder.Build();. Then, add dependencies for authentication and authorisation. Make sure this time you use GitHubEasyAuthAuthenticationHandler instead of EntraIDEasyAuthAuthenticationHandler.

// 👇👇👇 Add dependencies for authentication/authorisation
builder.Services.AddAuthentication(EasyAuthAuthenticationScheme.Name)
    .AddAzureEasyAuthHandler<GitHubEasyAuthAuthenticationHandler>();
builder.Services.AddAuthorization();
// 👆👆👆 Add dependencies for authentication/authorisation

var app = builder.Build();

Follow the same steps as above.

How Does It Work?

When authenticated through Azure EasyAuth, a request header X-MS-CLIENT-PRINCIPAL is created. It contains a base64-encoded JSON object, which needs to be decoded and transformed into a ClaimsPrincipal instance for role-based access control. For brevity, most claims are omitted from the JSON examples below, except two.
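The decoding step such a handler performs can be sketched roughly as follows. This is an illustrative Python sketch of the base64 + JSON unpacking, not the packages' actual C# implementation; the header name and claim keys come from this article, while the sample payload and function name are invented for the example:

```python
import base64
import json

def parse_client_principal(header_value: str):
    """Decode an X-MS-CLIENT-PRINCIPAL header value into (name, roles).

    Looks up the claim keys named by name_typ/role_typ. Note that, as the
    article explains, real tokens may carry roles under other keys (e.g.
    "roles" or "urn:github:type"), which is exactly the mapping the NuGet
    packages take care of.
    """
    principal = json.loads(base64.b64decode(header_value))
    name_key = principal["name_typ"]
    role_key = principal["role_typ"]
    name = next((c["val"] for c in principal["claims"] if c["typ"] == name_key), None)
    roles = [c["val"] for c in principal["claims"] if c["typ"] == role_key]
    return name, roles

# Invented sample payload mimicking the Entra ID shape shown below.
sample = {
    "auth_typ": "aad",
    "name_typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
    "role_typ": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
    "claims": [
        {"typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
         "val": "john.doe@contoso.com"},
        {"typ": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
         "val": "User"},
    ],
}
encoded = base64.b64encode(json.dumps(sample).encode()).decode()
name, roles = parse_client_principal(encoded)
# name == "john.doe@contoso.com", roles == ["User"]
```

With the decoding mechanics in mind, the actual payloads are easier to read.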
Here's the JSON object generated from Microsoft Entra ID authentication:

```json
{
  "auth_typ": "aad",
  "name_typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
  "role_typ": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
  "claims": [
    {
      "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
      "val": "john.doe@contoso.com"
    },
    {
      "typ": "roles",
      "val": "User"
    }
  ]
}
```

And here's the JSON object generated from GitHub authentication:

```json
{
  "auth_typ": "github",
  "name_typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
  "role_typ": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
  "claims": [
    {
      "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
      "val": "John Doe"
    },
    {
      "typ": "urn:github:type",
      "val": "User"
    }
  ]
}
```

This JSON object should be converted into a `ClaimsPrincipal` instance for granular control on each page. Let's break down the JSON object:

- `auth_typ`: Indicates the authentication method used. For example, Microsoft Entra ID says `aad` and GitHub says `github`.
- `name_typ`: After authentication, the claim with this key is bound to `HttpContext.User.Identity.Name`.
- `role_typ`: The claim key used for authorisation by roles.
- `claims`: A collection of claims generated by either Microsoft Entra ID or GitHub.

The challenge with the `X-MS-CLIENT-PRINCIPAL` token is that it has no role claims matching the `role_typ` key. The Entra ID one contains the `roles` claim and the GitHub one contains the `urn:github:type` claim, neither of which follows the OpenID Connect standard, while the `ClaimsPrincipal` object in ASP.NET Core web apps does follow the standard. The NuGet packages above convert those role values. By adding and activating the conversion handler, users can now log in to the web app with EasyAuth and access certain pages with the proper permissions. Here's an example that uses the `AuthenticationSchemes` property to let users access pages as long as they sign in with EasyAuth.
```razor
@page "/random-page-url"

@attribute [Authorize(AuthenticationSchemes = EasyAuthAuthenticationScheme.Name)]
```

Here's another example that uses the `Roles` property to let only users with the `User` role access the given page:

```razor
@page "/random-page-url"

@attribute [Authorize(Roles = "User")]
```

NOTE: Be aware that the `X-MS-CLIENT-PRINCIPAL` token from Microsoft Entra ID doesn't contain the `roles` claims out-of-the-box, because that depends on each tenant's policy.

Roadmap

Azure EasyAuth supports multiple identity providers, but the NuGet packages currently support Microsoft Entra ID and GitHub OAuth:

✅ Microsoft Entra ID
✅ GitHub
⏹️ OpenID Connect
⏹️ Google
⏹️ Facebook
⏹️ X
⏹️ Apple

Contributions are always welcome!

So far, we've walked through how to transform the Azure EasyAuth authentication token into a `ClaimsPrincipal` object to be used within ASP.NET Core web apps. With these packages, you can control access to individual pages based on users' roles and increase your apps' security with less effort.

Learn More

For further details on Azure EasyAuth, check out:

- Azure App Service EasyAuth
- Azure Functions EasyAuth
- Azure Container Apps EasyAuth
- Azure Static Web Apps EasyAuth
- Azure EasyAuth Extensions (Code)
- Azure EasyAuth Samples (Code)
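To make the conversion described in "How Does It Work?" more concrete, here's a rough, hypothetical sketch of decoding the `X-MS-CLIENT-PRINCIPAL` header into a `ClaimsPrincipal`. This is a minimal illustration of the idea, not the packages' actual implementation; the record names and the snake-case serializer option (available in .NET 8+) are assumptions:

```csharp
using System;
using System.Linq;
using System.Security.Claims;
using System.Text;
using System.Text.Json;

// Hypothetical model of the header payload shown above.
public record ClientPrincipalClaim(string Typ, string Val);
public record ClientPrincipal(string AuthTyp, string NameTyp, string RoleTyp, ClientPrincipalClaim[] Claims);

public static class EasyAuthPrincipalDecoder
{
    public static ClaimsPrincipal Decode(string headerValue)
    {
        // The header is a base-64 encoded JSON object.
        var json = Encoding.UTF8.GetString(Convert.FromBase64String(headerValue));

        // Map "auth_typ" -> AuthTyp, "claims" -> Claims, etc.
        var options = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower };
        var principal = JsonSerializer.Deserialize<ClientPrincipal>(json, options)!;

        // Build the identity, telling ASP.NET Core which claim keys carry name and role.
        var claims = principal.Claims.Select(c => new Claim(c.Typ, c.Val));
        var identity = new ClaimsIdentity(claims, principal.AuthTyp, principal.NameTyp, principal.RoleTyp);
        return new ClaimsPrincipal(identity);
    }
}
```

The key detail is the `ClaimsIdentity` constructor overload that accepts `nameType` and `roleType`: it is what binds `name_typ` to `HttpContext.User.Identity.Name` and makes role-based `[Authorize]` checks work, which is why the role claim key conversion matters.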
New Features in Azure Container Apps VS Code extension

👆 Install VS Code extension

Summary of Major Changes

- Managed Identity Support: New support for connecting container apps to container registries. This is now the preferred method for securing these resources, provided you have sufficient privileges.
- New Container View: Introduced with several commands for easier editing of container images and environment variables.
- One-Click Deployment: Deploy to Container App... added to the top-level container app node. This supports deployments from a workspace project or container registry. To manage multiple applications in a workspace project or enable faster deployments with saved settings, use Deploy Project from Workspace..., which can be accessed via the workspace view.
- Improved Activity Log Output: All major commands now include improved activity log outputs, making it easier to track and manage your activities.
- Quickstart Image for Container App Creation: The Create Container App... command now initializes with a quickstart image, simplifying the setup process.

New Commands and Enhancements

- Managed identity support for new connections to container registries.
- New command Deploy to Container App... found on the container app item. This one-click deploy command allows deploying from a workspace project or container registry while in single revision mode.
- New Container view under the container app item allows direct access to the container's image and environment variables.
- New command Edit Container Image... allows editing of container images without prompting to update environment variables.
- Environment Variable CRUD Commands: Multiple new commands for creating, reading, updating, and deleting environment variables.
- Convert Environment Variable to Secret: Quickly turn an environment variable into a container app secret with this new command.

Changes and Improvements

- Command Create Container App... now always starts with a quickstart image.
- Renamed the Update Container Image... command to Edit Container....
This command is now found on the container item.

- When running Deploy Project from Workspace..., if remote environment variables conflict with saved settings, the extension prompts for an update. Adds a new envPath option, useRemoteConfiguration.
- Deploying an image with the Docker extension now allows targeting specific revisions and containers.
- When deploying a new image to a container app, the ingress prompt is only shown when more than the image tag is changed.
- Improved ACR selection dropdowns, providing better pick recommendations and sorting by resource group.
- Improved activity log outputs for major commands.
- Changed the draft deploy prompt to be a quick pick instead of a pop-up window.

We hope these new features and improvements will simplify deployments and make your Azure Container Apps experience even better. Stay tuned for more updates, and as always, we appreciate your feedback!

Try out these new features today and let us know what you think! Your feedback is invaluable in helping us continue to improve and innovate.

Azure Container Apps VS Code Extension

Full changelog:
[Suggestion 1][Allow Language Selection During .NET Installation]

Hi,

1. Context: Currently, when installing the .NET SDK or runtime, multiple language resource folders are automatically included in the installation directory.

2. Problem: Many developers only require a single language, typically English, and the additional localization files take up unnecessary disk space and clutter the installation directory. Currently, the installer does not provide an option to select which languages should be installed. Removing these manually is time-consuming and could potentially break application dependencies if done incorrectly.

3. Proposed Solution: Introduce an option in the .NET installer that allows users to select or deselect language packs during installation. This feature could be similar to the "Individual components" selection available in the Visual Studio Installer, where users can choose exactly what they need, ensuring a more streamlined installation. This would help:
- Reduce disk space usage.
- Improve installation customization.
- Provide a cleaner development environment.

4. What do you think about this suggestion? Thank you for considering this request, and I'm happy to provide additional details or insights if needed.
positioning a set of powershell windows

Greetings, I'm completely new to this area. I have a requirement to display 3 separate PowerShell windows that are created from Node.js on my server, in essentially this format:

```
-------   ----------------------------
|  1  |   |            2             |
-------   ----------------------------
--------------------------------------
|                 3                  |
--------------------------------------
```

1. is the monitoring server app
2. is the adminController app
3. is the mainServerHub for all client connections to the main app

1. is started manually on the server and automatically positions itself on the screen
2. is created by the monitoring server app (1) and should be positioned to the right of window 1
3. is also created by the monitoring server app (1) and should be positioned under both 1 and 2

I have searched many descriptions and links to resolve this challenge. I'm successful in creating all of the PowerShell windows from Node.js, but it has been quite a challenge trying to manipulate their positions, and I'm looking for the appropriate cmdlets to accomplish this task. Can anyone point me in the right direction?

Thanks, Scott.
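One possible direction for the question above (a sketch, not a tested solution): PowerShell has no built-in cmdlet for moving windows, but you can P/Invoke the Win32 `MoveWindow` function from `user32.dll` via `Add-Type`. The function name `Set-WindowPosition` below is hypothetical, and the coordinates would need to be chosen to match the desired layout:

```powershell
# Expose the Win32 MoveWindow API to PowerShell.
Add-Type @"
using System;
using System.Runtime.InteropServices;
public static class Win32Window {
    [DllImport("user32.dll", SetLastError = true)]
    public static extern bool MoveWindow(IntPtr hWnd, int x, int y, int width, int height, bool repaint);
}
"@

# Hypothetical helper: position the main window of a process by its PID.
function Set-WindowPosition {
    param([int]$ProcessId, [int]$X, [int]$Y, [int]$Width, [int]$Height)

    $hwnd = (Get-Process -Id $ProcessId).MainWindowHandle
    if ($hwnd -ne [IntPtr]::Zero) {
        [Win32Window]::MoveWindow($hwnd, $X, $Y, $Width, $Height, $true) | Out-Null
    }
}

# Example layout (coordinates are illustrative only):
# Set-WindowPosition -ProcessId $monitorPid -X 0   -Y 0   -Width 400  -Height 300   # window 1
# Set-WindowPosition -ProcessId $adminPid   -X 410 -Y 0   -Width 800  -Height 300   # window 2
# Set-WindowPosition -ProcessId $hubPid     -X 0   -Y 310 -Width 1210 -Height 400   # window 3
```

Since the windows are spawned from Node.js, the spawning code knows each child's PID and could invoke a helper like this after each window is created.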
Connection between .NET and AUTOCAD ELECTRICAL

I want to connect .NET with AutoCAD Electrical. I am using:

- AutoCAD Electrical 2025.0.2, Product Version: 22.0.81.0, Built on: V.154.0.0
- AutoCAD 2025.1.1
- Visual Studio Community 2022 - 17.12.3
- My system is x64

I am creating a console application targeting the .NET 8.0 runtime in C#. I added the DLLs accoremgd, acdbmgd, acmgd and Autodesk.AutoCAD.Interop. Also, in Solution Explorer under the project references, I have set the "Copy Local" property to False.

```csharp
using System;
using Autodesk.AutoCAD.Interop;
using Autodesk.AutoCAD.Runtime;
using System.Runtime.InteropServices;
using Autodesk.AutoCAD.ApplicationServices;

namespace AutoCADElecDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            AcadApplication acadApp = null;
            const string progId = "AutoCAD.Application.25"; // Adjust for your AutoCAD version

            try
            {
                // Get a running instance of AutoCAD
                acadApp = GetActiveAutoCAD(progId);
            }
            catch (System.Exception ex)
            {
                Console.WriteLine($"Error initializing AutoCAD: {ex.Message}");
                return;
            }

            try
            {
                // Ensure AutoCAD is visible
                acadApp.Visible = true;
                Console.WriteLine("AutoCAD is now running.");

                // Register for the BeginQuit event
                Application.BeginQuit += OnBeginQuit;

                // Keep the application running to monitor AutoCAD's state
                Console.WriteLine("Press Enter to exit...");
                Console.ReadLine();
            }
            catch (System.Exception ex)
            {
                Console.WriteLine($"An error occurred: {ex.Message}");
            }
            finally
            {
                // Unregister the event handler before exiting
                Application.BeginQuit -= OnBeginQuit;
            }
        }

        static AcadApplication GetActiveAutoCAD(string progId)
        {
            try
            {
                var comObject = Marshal.GetActiveObject(progId);
                Console.WriteLine($"Connected to AutoCAD: {progId}");
                return (AcadApplication)comObject;
            }
            catch (System.Exception ex)
            {
                Console.WriteLine($"Error accessing AutoCAD: {ex.Message}");
                throw;
            }
        }

        static void OnBeginQuit(object sender, EventArgs e)
        {
            Console.WriteLine("AutoCAD is being closed.");
        }
    }
}
```

With the above code, my application enters break mode ("Your app has entered a break state, but no code is currently executing that is supported by the selected debug engine (e.g. only native runtime code is executing).") and I get the error below:

```
System.IO.FileNotFoundException: 'Could not load file or assembly 'accoremgd, Version=25.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.'
```

The DLL path is also correct. I also tried the same code with the .NET Framework 4.7.2 and 4.8 targeting packs, but I get the same error. How do I load accoremgd.dll properly so that this application can use AutoCAD accoremgd functions?
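One pattern sometimes used for this kind of FileNotFoundException is an AssemblyResolve handler that points the loader at the AutoCAD installation folder before any AutoCAD type is touched. This is only a sketch: the install path below is an assumption to adjust for your machine, and note that accoremgd/acdbmgd/acmgd are generally designed to run in-process inside AutoCAD, so a standalone console app may ultimately need to stick to the COM interop (Autodesk.AutoCAD.Interop) alone:

```csharp
using System;
using System.IO;
using System.Reflection;

static class AcadAssemblyResolver
{
    // Assumed install location; verify against your machine and version.
    private const string AcadDir = @"C:\Program Files\Autodesk\AutoCAD 2025";

    // Call this first thing in Main, before any method that references AutoCAD types.
    public static void Register()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // Try to load the requested assembly from the AutoCAD folder.
            var fileName = new AssemblyName(args.Name).Name + ".dll";
            var candidate = Path.Combine(AcadDir, fileName);
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };
    }
}
```

Because the JIT resolves assemblies when a method is first compiled, the handler must be registered before entering any method that uses `AcadApplication` or other AutoCAD types, which is one reason the break state appears immediately on startup.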