enterprise integration
Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard
Hybrid Logic Apps offer a unique blend of on-premises and cloud capabilities, making them a versatile solution for various integration scenarios. A key feature of hybrid deployment models is their ability to scale efficiently to manage different workloads. This capability enables customers to optimize their compute costs during peak usage by scaling up to handle temporary spikes in demand and then scaling down to reduce costs when the demand decreases. This blog will explore the scaling mechanism in hybrid deployment models, focusing on the role of the KEDA operator and its integration with other components.

Introducing GenAI Gateway Capabilities in Azure API Management
We are thrilled to announce GenAI Gateway capabilities in Azure API Management: a set of features designed specifically for GenAI use cases. Azure OpenAI Service offers a diverse set of tools, providing access to advanced models like GPT-3.5 Turbo, GPT-4, and GPT-4 Vision, enabling developers to build intelligent applications that can understand, interpret, and generate human-like text and images.

One of the main resources you have in Azure OpenAI is tokens. Azure OpenAI assigns quota for your model deployments expressed in tokens per minute (TPM), which is then distributed across your model consumers: different applications, developer teams, departments within the company, and so on. Starting with a single application integration, Azure makes it easy to connect your app to Azure OpenAI: your intelligent application connects to Azure OpenAI directly using an API key, with a TPM limit configured at the model deployment level. However, as your application portfolio grows, you end up with multiple apps calling one or more Azure OpenAI endpoints deployed as pay-as-you-go or Provisioned Throughput Units (PTUs) instances. That comes with certain challenges:

- How can we track token usage across multiple applications?
- How can we cross-charge multiple applications/teams that use Azure OpenAI models?
- How can we make sure that a single app does not consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI models?
- How can we make sure that the API key is securely distributed across multiple applications?
- How can we distribute load across multiple Azure OpenAI endpoints?
- How can we make sure that PTUs are used first before falling back to pay-as-you-go instances?

To tackle these operational and scalability challenges, Azure API Management has built a set of GenAI Gateway capabilities:

- Azure OpenAI Token Limit Policy
- Azure OpenAI Emit Token Metric Policy
- Load Balancer and Circuit Breaker
- Import Azure OpenAI as an API
- Azure OpenAI Semantic Caching Policy (in public preview)

Azure OpenAI Token Limit Policy

The Azure OpenAI Token Limit policy allows you to manage and enforce limits per API consumer based on the usage of Azure OpenAI tokens. With this policy you can set limits, expressed in tokens per minute (TPM). It provides the flexibility to assign token-based limits on any counter key, such as subscription key, IP address, or any other arbitrary key defined through a policy expression. The policy also enables pre-calculation of prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI backend if the prompt already exceeds the limit. Learn more about this policy here.
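As a reference, here is a minimal sketch of the policy in an API's inbound section. The TPM value and counter key below are illustrative, not recommendations:

<policies>
    <inbound>
        <base />
        <!-- Limit each subscription to an illustrative 5000 tokens per minute,
             estimating prompt tokens before the request reaches the backend. -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="true" />
    </inbound>
</policies>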
Azure OpenAI Emit Token Metric Policy

The Azure OpenAI Emit Token Metric policy enables you to send token usage metrics to Azure Application Insights, providing an overview of the utilization of Azure OpenAI models across multiple applications or API consumers. This policy captures prompt, completion, and total token usage metrics and sends them to the Application Insights namespace of your choice. Moreover, you can configure or select from pre-defined dimensions to split token usage metrics, enabling granular analysis by subscription ID, IP address, or any custom dimension of your choice. Learn more about this policy here.

Load Balancer and Circuit Breaker

The Load Balancer and Circuit Breaker features allow you to spread the load across multiple Azure OpenAI endpoints. With support for round-robin, weighted (new), and priority-based (new) load balancing, you can now define your own load distribution strategy according to your specific requirements. Define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs. In the event of any disruption, a circuit breaker mechanism kicks in, seamlessly transitioning to lower-priority instances based on predefined rules. Our updated circuit breaker now features dynamic trip duration, leveraging values from the retry-after header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends. Learn more about load balancer and circuit breaker here.

Import Azure OpenAI as an API

The new Import Azure OpenAI as an API experience in Azure API Management provides a single-click way to import your existing Azure OpenAI endpoints as APIs. We streamline the onboarding process by automatically importing the OpenAPI schema for Azure OpenAI and setting up authentication to the Azure OpenAI endpoint using managed identity, removing the need for manual configuration. Additionally, within the same user-friendly experience, you can pre-configure Azure OpenAI policies, such as token limit and emit token metric, enabling swift and convenient setup. Learn more about Import Azure OpenAI as an API here.

Azure OpenAI Semantic Caching Policy

The Azure OpenAI Semantic Caching policy empowers you to optimize token usage by leveraging semantic caching, which stores completions for prompts with similar meaning. Our semantic caching mechanism leverages Azure Redis Enterprise or any other external cache compatible with RediSearch and onboarded to Azure API Management. By leveraging the Azure OpenAI Embeddings model, this policy identifies semantically similar prompts and stores their respective completions in the cache. This approach enables the reuse of completions, resulting in reduced token consumption and improved response performance. Learn more about semantic caching policy here.
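For reference, a minimal sketch of the caching policy pair follows. The score threshold, backend ID, and cache duration are illustrative, and the embeddings backend is assumed to point at an Azure OpenAI Embeddings deployment you have already configured:

<policies>
    <inbound>
        <base />
        <!-- Look up semantically similar prompts in the external cache. -->
        <azure-openai-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-backend"
            embeddings-backend-auth="system-assigned" />
    </inbound>
    <outbound>
        <!-- Store the completion for 60 seconds (illustrative). -->
        <azure-openai-semantic-cache-store duration="60" />
        <base />
    </outbound>
</policies>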
Get Started with GenAI Gateway Capabilities in Azure API Management

We're excited to introduce these GenAI Gateway capabilities in Azure API Management, designed to empower developers to efficiently manage and scale their applications leveraging Azure OpenAI services. Get started today and bring your intelligent application development to the next level with Azure API Management.

Inbound private endpoint for Standard v2 tier of Azure API Management

Standard v2 was announced in general availability on April 1st, 2024. Customers can now configure an inbound private endpoint (preview) for their API Management Standard v2 instance to allow clients in a private network to securely access the API Management gateway over Azure Private Link.

The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

Inbound private endpoint

With a private endpoint and Private Link, you can:

- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Use policy to distinguish traffic that comes from the private endpoint.
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
- Combine with outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.
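As a reference, here is a hedged Bicep sketch of the private endpoint itself, assuming an existing Standard v2 instance and a subnet for private endpoints. The resource names are hypothetical and the API versions are assumptions; the Gateway group ID reflects the preview limitation described below:

param location string = resourceGroup().location
param peSubnetId string // resource ID of the subnet hosting private endpoints (hypothetical)

// Reference to an existing Standard v2 API Management instance (hypothetical name).
resource apim 'Microsoft.ApiManagement/service@2024-05-01' existing = {
  name: 'my-apim-standard-v2'
}

resource apimPrivateEndpoint 'Microsoft.Network/privateEndpoints@2023-11-01' = {
  name: 'pe-apim-gateway'
  location: location
  properties: {
    subnet: {
      id: peSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'apim-gateway-connection'
        properties: {
          privateLinkServiceId: apim.id
          groupIds: [
            'Gateway' // the Gateway endpoint is the only sub-resource supported today
          ]
        }
      }
    ]
  }
}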
Preview limitations

Today, only the API Management instance's Gateway endpoint supports inbound private link connections. In addition, each API Management instance can support at most 100 private link connections. To participate in the preview and add an inbound private endpoint to your Standard v2 instance, you must complete a request form. The Azure API Management team will review your request and respond via email within five business days.

Learn more

- API Management v2 tiers FAQ
- API Management v2 tiers documentation
- API Management overview documentation

Automating Logic Apps connections to Dynamics 365 using Bicep

I recently worked with a customer to show the ease of integration between Logic Apps and the Dataverse as part of Dynamics 365 (D365). The integration flows we looked at included:

- Inbound: D365 updates pushed in near real-time into a Logic Apps HTTP trigger.
- Outbound: A Logic App sending HTTP requests to retrieve data from D365.

The focus of this short post will be on the outbound use case, showing how to use the Microsoft Dataverse connector with Bicep automation.

A simple use case

The app shown here couldn't be much simpler: it's a Timer recurrence which uses the List Rows action to retrieve data from D365; here's a snippet from an execution. Impressed? 🤣

Getting this set up by clicking through the Azure portal is fairly simple. The connector example uses a Service Principal to authenticate the Logic App to D365 (OAuth being an alternative), so several parameters are needed. Additionally, you'll be required to configure an Environment parameter for D365, which is a URL for the target environment, e.g. https://meaningful-url-for-your-org.crm.dynamics.com. Configuring the Service Principal may be the most troublesome part; it is outside the scope of this Bicep automation and would be considered a separate task per environment. This page may help you complete the required identity creation.

So... what about the Bicep?

You can see the Bicep files in the GitHub repository here. We have to deploy 2 resources:

resource laworkflow 'Microsoft.Logic/workflows@2019-05-01' = { } ...
resource commondataserviceApiConnection 'Microsoft.Web/connections@2016-06-01' = { } ...

The first Microsoft.Logic/workflows resource deploys the app configuration, and the second Microsoft.Web/connections resource deploys the Dataverse connection used by the app.

The Bicep for such a simple example took some trial and error to get right, and the documentation is far from clear, something I will try to get improved. In hindsight it seems straightforward; these snippets outline where I struggled. A snip from the connections resource:

resource commondataserviceApiConnection 'Microsoft.Web/connections@2016-06-01' = {
  name: 'commondataservice'
  ...
  properties: {
    displayName: 'la-to-d365-commondataservice'
    api: {
      id: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Web/locations/${location}/managedApis/commondataservice'
    }
    ...

The property at path properties.api.id is all-important here. Now looking at the workflows resource:

resource laworkflow 'Microsoft.Logic/workflows@2019-05-01' = {
  name: logicAppName
  ...
  parameters: {
    '$connections': {
      value: {
        commondataservice: {
          connectionName: 'commondataservice'
          connectionId: resourceId('Microsoft.Web/connections', 'commondataservice')
          id: commondataserviceApiConnection.properties.api.id
        }
      }
    }
  }
  ...

Here we see the important parameters for the connection configuration, creating the relationship between the resources:

- connectionName: references the name of the connection as specified in the resource.
- connectionId: uses the Bicep resourceId function to obtain the deployed Azure resource ID.
- id: references the properties.api.id value specified earlier.

So fairly simple, but understanding which value is required where isn't straightforward, and that's where documentation improvement is needed.
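For completeness, here is a hedged sketch of the connection resource with the Service Principal parameters filled in. The 'token:*' parameter names follow the pattern commonly seen in ARM/Bicep deployments of this connector; treat them as assumptions and verify them against the managed API's metadata for your region:

param location string = resourceGroup().location
param clientId string
@secure()
param clientSecret string
param tenantId string

resource commondataserviceApiConnection 'Microsoft.Web/connections@2016-06-01' = {
  name: 'commondataservice'
  location: location
  properties: {
    displayName: 'la-to-d365-commondataservice'
    api: {
      id: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Web/locations/${location}/managedApis/commondataservice'
    }
    parameterValues: {
      // Assumed parameter names for Service Principal authentication:
      'token:clientId': clientId
      'token:clientSecret': clientSecret
      'token:TenantId': tenantId
      'token:grantType': 'client_credentials'
    }
  }
}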
Secret Management

An extra area I looked at was improved secret management in Bicep. Values required for the Service Principal must be handled securely, so how do you achieve this? The approach I took was to use the az.getSecret Bicep function within the .bicepparam file, allowing a secret to be read from an Azure Key Vault at deployment time. This has the advantage of separating the main template file from the parameters it uses. The Key Vault, which stores the Service Principal secrets, is pre-provisioned and not deployed as part of this Bicep code.

using './logicapps.bicep'
...
param commondataserviceClientSecret = getSecret(
  readEnvironmentVariable('AZURE_KV_SUBSCRIPTION_ID'),
  readEnvironmentVariable('AZURE_KV_RESOURCE_GROUP'),
  readEnvironmentVariable('AZURE_KV_NAME'),
  'commondataserviceClientSecret')

This example obtains the commondataserviceClientSecret parameter value from the Key Vault at the given subscription ID, resource group, Key Vault name, and secret name. You must grant Azure Resource Manager access to the Key Vault, enabled by the setting shown below. The subscription ID, resource group name, and Key Vault name are read from environment variables using the readEnvironmentVariable function, showing another possibility for configuration alongside an individual .bicepparam file per environment.

In Summary

While this was a very simple Logic Apps use case, I hope it ties together the areas of connector automation, configuration, and security, helping you accelerate the time to a working solution. Happy integrating!
Cross-Tenant Secure Integration of Azure Resources Based on Logic App Standard and Virtual WAN

In today's interconnected world, enterprise-level systems often need to integrate resources across different Azure tenants securely. This blog will explore how to achieve cross-tenant secure integration of Azure resources using Logic App Standard and Azure Virtual WAN.

Introduction

Cross-tenant integration is essential for organizations that operate in multiple Azure tenants or need to collaborate with partners and customers. By leveraging Azure Logic Apps and Virtual WAN, you can create secure, scalable, and efficient integrations across tenant boundaries.

Why Cross-Tenant Integration?

Cross-tenant integration allows organizations to:

- Collaborate with partners and customers securely through an Azure private network
- Centralize management and monitoring of resources
- Enhance security by using role-based access control (RBAC) and cross-tenant access control in Microsoft Entra ID

Architecture Overview

In this article, we'll demonstrate the private integration that moves a file from a provider-tenant storage account to a consumer-tenant storage account as an example. The architecture for cross-tenant integration involves several key components:

- Azure Logic App Standard: a PaaS service that provides automated workflows which can integrate services across tenants. Its VNet integration networking feature and in-app connectors keep traffic private and secure.
- Virtual WAN: provides a unified and secure network architecture. You can connect cross-tenant VNets to a Virtual WAN hub.
- Private Endpoints: a network interface that connects you privately and securely to a service powered by Azure Private Link. By enabling a private endpoint, you bring the service (such as the storage account) into your virtual network.

Setting Up Cross-Tenant Integration

Step 1: Configure Virtual WAN

Set up a Virtual WAN in the provider tenant. Specifically, create a virtual hub and build the VNet connection to the provider tenant's virtual network. Please refer to this document for more details: https://learn.microsoft.com/en-us/training/modules/design-implement-hybrid-networking/6-connect-remote-resources-by-using-azure-virtual-wans

Step 2: Allow Tenant Access

Next, allow cross-tenant resource access from the consumer tenant's Entra ID settings. This eliminates the need to manage credentials and enhances security. Follow this configuration in the consumer's tenant:

1. Go to Microsoft Entra ID -> Manage -> External Identities -> Cross-tenant access settings -> Organizational Settings -> Add Organization, and add the provider tenant ID.
2. Configure the inbound access and outbound access to allow B2B direct connect.

Step 3: Implement RBAC

In the subscription of the virtual network in the consumer tenant, add the Contributor role assignment to the administrator (the user who administers the provider tenant's virtual hub). Contributor permissions will enable the administrator to modify and access the virtual networks in the consumer tenant. You can use either the Azure portal or Azure CLI to assign this role, for example:
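A hedged Azure CLI example of that role assignment (the object ID and subscription ID are placeholders):

az role assignment create \
  --assignee "<provider-admin-object-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<consumer-vnet-subscription-id>"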
See the following articles for detailed steps:

- https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-cli
- https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal

Step 4: Connect the consumer tenant's VNet to the provider tenant's hub

In the following steps, you'll use commands to switch between the context of the two subscriptions as you link the consumer's virtual network to the provider's virtual WAN hub. Replace the example values to reflect your own environment. (We use Azure CLI commands as the example; please install the Azure CLI on your local machine: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli. In our testing, we found some bugs when running these commands in Azure Cloud Shell.)

1. Run the following command to add the remote tenant subscription (the consumer) and the parent tenant subscription (the provider) to the current session of the console. If you're signed in to the parent, you need to run the command for only the remote tenant:

az login --tenant "[tenant ID]"

2. Verify that the role assignment is successful. Sign in to Azure CLI (if not already) using the parent credentials and run the following command:

az account list -o table

If the permissions have successfully propagated to the parent and have been added to the session, the subscriptions owned by the parent and the remote tenant will both appear in the output of the command.

3. Make sure you're in the context of your virtual hub account:

az account set --subscription "[virtual hub subscription]"

4. Connect the virtual network to the hub (this example uses my personal environment; replace the details with your own information):

az network vhub connection create --resource-group "SerenaGroup" --name "test1225" --vhub-name "SerenaVirtualHub" --remote-vnet "/subscriptions/8ce89da3-601d-4349-9c84-c374bcfbf3ed/resourceGroups/NetworkingDirection/providers/Microsoft.Network/virtualNetworks/Cross-Tenant-Network"

A successful connection can be verified from both the provider-tenant Virtual WAN side and the consumer-tenant VNet side.

5. Check whether the routes have been propagated successfully from this connection across both tenants:

- Provider-tenant VNet range: 10.0.0.0/16
- Provider-tenant VNet range: 172.0.0.0/16
- Consumer-tenant VNet range: 192.168.0.0/16

We can check the effective routes on the Azure VM hosted in the consumer tenant's VNet.

Step 5: Connect Both Storage Accounts into Virtual Networks by Private Endpoints

In this use case, we enabled private endpoints for the storage accounts' blob service in both the provider and consumer tenants. Specifically, each private endpoint is integrated with a private DNS zone that ensures correct DNS resolution. For more details, please refer to https://learn.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-storage-portal?tabs=dynamic-ip

Step 6: Add the DNS Record for the Consumer Tenant's Private Endpoint in the Provider Tenant's Private DNS Zone

Since we can't add a VNet link for a private DNS zone to an Azure Virtual WAN or virtual network in another tenant, it's suggested to manually add the DNS record for the consumer tenant's private endpoint in the provider's private DNS zone, or to configure custom DNS.

Step 7: Create the Logic App and Enable VNet Integration

1. Start by creating a Logic App (Standard) in the provider tenant and integrate it with the provider tenant's VNet.

2. Try to resolve the consumer tenant's blob private endpoint from the Logic App's Kudu site, for example:
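The App Service Kudu debug console includes the nameresolver tool, which can be used for this check (the storage account name is a placeholder):

nameresolver <consumer-storage-account>.blob.core.windows.net

The name should resolve through the privatelink.blob.core.windows.net zone to the private IP of the consumer tenant's private endpoint.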
Ensure the private endpoint resolves to the correct private IP.

3. Use the built-in blob connector, which runs in the Logic App's runtime. This Logic App moves the blob file from the provider tenant's storage to the consumer tenant's storage through the private network.

Conclusion

Cross-tenant secure integration of Azure resources using Logic App Standard and Virtual WAN provides a robust and scalable solution for organizations. By following best practices and leveraging Azure's capabilities, you can achieve seamless and secure integrations across tenant boundaries.

References

- https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-overview
- https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview
- https://learn.microsoft.com/en-us/azure/virtual-wan/virtual-wan-site-to-site-portal#vnet
- https://learn.microsoft.com/en-us/entra/external-id/cross-tenant-access-overview
- https://learn.microsoft.com/en-us/azure/virtual-wan/cross-tenant-vnet-az-cli
Introducing Azure API Management Policy Toolkit

We're excited to announce the early release of the Azure API Management Policy Toolkit, a set of libraries and tools designed to change how developers work with API Management policies, making policy management more approachable, testable, and efficient for developers.

Empowering developers with the Azure API Management Policy Toolkit

Policies have always been at the core of Azure API Management, offering powerful capabilities to secure APIs, change their behavior, and transform requests and responses. Recently, we've made policies easier to understand and manage by adding Copilot for Azure features for Azure API Management, which let you create and explain policies with AI help directly within the Azure portal. This powerful tool lets developers create policies using simple prompts or get detailed explanations of existing policies, which makes it much easier for new users to write policies and makes all users more productive. Now, with the Policy Toolkit, we're taking another significant step forward. This toolkit brings policy management even closer to the developer experience you know.

Elevating the policy development experience

Azure API Management policies are written in Razor format, which, for those unfamiliar with it, can be difficult to read and understand, especially when dealing with large policy documents that include expressions. Testing and debugging policy changes requires deployment to a live Azure API Management instance, which slows down the feedback loop even for small edits. The Policy Toolkit addresses these challenges. You can now author your policies in C#, a language that feels natural and familiar to many developers, and write tests against them. This shift improves the policy writing experience, makes policies more readable, and shortens the feedback loop for policy changes.

Key toolkit features to transform your workflow:

- Consistent policy authoring: write policies in C#. No more learning Razor syntax and mixing XML and C# in the same document.
- Syntax checking: compile your policy documents to catch syntax errors and generate Razor-based equivalents.
- Unit testing: write unit tests alongside your policies using your favorite unit testing framework.
- CI/CD integration: integrate the Policy Toolkit into automation pipelines for testing and compilation into Razor syntax for deployment.

Current limitations

While we're excited about the capabilities of the Policy Toolkit, we want to be transparent about its current limitations:

- Not all policies are supported yet, but we're actively working on expanding the coverage.
- We are working on making the Policy Toolkit available as a NuGet package. In the meantime, you'll need to build the solution on your own.
- Unit testing is limited to policy expressions and is not supported for entire policy documents yet.

Get Started Today!

We want you to try the Azure API Management Policy Toolkit and see if it helps streamline your policy management workflow. Check out the documentation to get started. We're eager to hear your feedback! By bringing policy management closer to the developer, we're opening new possibilities to efficiently manage your API Management policies. Whether you're using the AI-assisted approach with Copilot for Azure or diving deep into C# with the Policy Toolkit, we're committed to making policy management more approachable and powerful.
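To give a flavor of the C# authoring model, here is a hedged sketch based on the toolkit's early samples; the exact namespace, type, and method names may differ from the current repository, so treat them as assumptions:

// A hypothetical minimal policy document: the class maps to a policy document,
// and the Inbound method maps to the inbound policy section.
using Azure.ApiManagement.PolicyToolkit.Authoring; // assumed namespace

[Document]
public class HelloWorldPolicy : IDocument
{
    public void Inbound(IInboundContext context)
    {
        context.Base();                        // equivalent of <base />
        context.SetHeader("X-Hello", "World"); // equivalent of <set-header>
    }
}

When compiled, a document like this would be translated into the Razor-based policy XML that Azure API Management deploys.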
Microsoft named a Leader in 2023 Gartner® Magic Quadrant™ for API Management

We're thrilled to announce that Gartner has once again recognized Microsoft as a Leader in the 2023 Magic Quadrant for API Management, marking the fourth consecutive year of this recognition. We believe our continued recognition as a Leader is a testament to our deep customer engagements.
Scaling Logic Apps Standard – Sustained Message Processing System

In the previous blog of this series, we discussed how Logic Apps Standard can be used to process high-throughput event data at a sustained rate over long periods of time. In this blog, we will see how Logic Apps Standard can be used to process high-throughput message data that facilitates the decoupling of applications and services. We simulate a real-life use case where messages are sent to a Service Bus queue at a sustained rate for processing, and we use a templated Logic App workflow, like the one available in our Template Gallery, to process the messages in real time. The workflow uses the Service Bus built-in trigger, so messages are promptly picked up and processed at par with the ingress rate. The business logic in the templated workflow can easily be replaced by the customer with actions that encompass their unique processing of the relevant message information.

To better showcase the message processing capabilities, we will discuss two scaling capabilities: vertical scaling (varying the performance of service plans) and horizontal scaling (varying the number of service plan instances).

Vertical scaling capabilities of Logic Apps Standard with the built-in Service Bus connector

In this section, we investigate the vertical scaling capabilities of the Logic Apps Service Bus connector, conducting experiments to find the maximum message throughput supported by each of the Logic Apps Standard SKUs from WS1 to WS3. Customers can replace the business logic and compensation logic in the template to handle their business scenarios. For this investigation, we used the out-of-the-box Logic Apps Standard configuration for scaling:

- 1 always-ready instance
- 20 maximum burst instances

We also used the default trigger batch size of 50.

Experiment Methodology

For each experiment we selected one of the available SKUs (WS1, WS2, WS3) and supplied a steady influx of X messages per minute to the connected Service Bus queue. We conducted multiple experiments for each SKU and gradually increased X until the Logic App could no longer process all the messages immediately. For each experiment, we pushed enough messages (1 million) in total to the queue to ensure that each workflow reached a steady-state processing rate with its maximum scaling.

Environment Configuration

The experiment setup is summarized below:

- Tests setup: single-stamp Logic App
- Number of workflows: 1 (templated)
- Triggers: Service Bus
- Trigger batch size: 50
- Actions: Service Bus, Scope, Condition, Compose
- Number of storage accounts: 1
- Prewarmed instances: 1
- Max scale settings: 20
- Message size: 1 KB
- Service Bus queue max size: 2 GB
- Service Bus queue message lock duration: 5 minutes
- Service Bus queue message max delivery count: 10

Experiment Results

We summarize the experiment results in the table below. If the default maximum scaling of 20 instances is adopted, the throughput we measured here serves as a good reference for the upper bound of message processing power:

WS Plan    Message Throughput        Time to process 1M messages
WS1        9000 messages/minute      120 minutes
WS2        19000 messages/minute     60 minutes
WS3        24000 messages/minute     50 minutes

In all the experiments, the Logic App scaled out to 20 instances at steady state.

📝 Complex business logic, which requires more actions and/or longer processing times, can change those values.
Findings

Understand the scaling and bottlenecks: in the vertical scaling experiments, we limited the maximum instance count to 20. Under this setting, we sometimes observed "dead-letter" messages being generated. With Service Bus, messages become dead-letters if they are not processed within the lock duration across all delivery attempts. This means that the workflow takes more than 5 minutes to complete the scope/business logic for some messages. The root cause is that the Service Bus trigger fetches messages faster than the workflow actions can process them. As we saw in the accompanying figure, the Service Bus trigger can fetch as many as 60k messages per minute, but the workflow can only process fewer than 30k messages per minute.

Recommendations

We recommend going with the default scaling settings if your workload is well below the published message throughput, and increasing the maximum burst when a heavier workload is expected.

Horizontal scaling capabilities of the Logic Apps Service Bus connector

In this section, we probe the horizontal scaling of Logic Apps message handling capabilities with varying instance counts. We conduct experiments on the most performant and widely used WS3 SKU.

Experiment Methodology

For each experiment we varied the number of prewarmed instances and maximum burst instances and supplied a steady influx of X messages per minute to the connected Service Bus queue, gradually increasing X until the Logic App could no longer process all the messages immediately. We pushed enough messages (4 million) to the queue for each experiment to ensure that each workflow reached a steady-state processing rate.

Environment Configuration

The experiment setup is summarized below:

- Tests setup: multi-stamp Logic App
- Number of workflows: 1 (templated)
- Triggers: Service Bus
- Trigger batch size: 50
- Actions: Service Bus, Scope, Condition, Compose
- Number of storage accounts: 3
- Message size: 1 KB
- Service Bus queue max size: 4 GB
- Service Bus queue message lock duration: 5 minutes
- WS Plan: WS3
- Service Bus queue message max delivery count: 10

Experiment Results

The experiment results are summarized in the table below:

Prewarmed Instances    Max Burst Instances    Message Throughput
1                      20                     24000 messages/minute
1                      60                     65000 messages/minute
5                      60                     65000 messages/minute
10                     60                     65000 messages/minute
10                     100                    85000 messages/minute

In all the experiments, the Logic App scaled out to the maximum burst instances allowed at steady state.

Editor's Note: The actual business logic can affect the number of machines the app scales out to. The performance might also vary based on the complexity of the workflow logic.

Findings

Understand the scaling and bottlenecks: in the horizontal scaling experiments, when the max burst instance count is 60 or above, we no longer observe dead-letters being generated. In these cases, the Service Bus trigger can only fetch messages as fast as the workflow actions can process them. As we observed in the accompanying figure, all messages are processed immediately after they are fetched.

Does the scaling speed affect the workload? A Standard Logic App with a prewarmed instance count of 5 can scale out to its maximum scaling of 60 in under 10 minutes. The message fetching and message processing abilities scale out together, preventing the generation of dead-letters. Also, from the results of our horizontal scaling experiments, we see that having more prewarmed instances does not affect the steady-state throughput of the workflow.

Recommendations

With these two findings, we recommend keeping the minimum instance number small for cost savings, without any impact on your peak performance. If a use case requires a higher throughput, the maximum burst instances setting can be set higher to accommodate that. For production workflows, we still recommend having at least two always-ready instances, as they reduce any potential downtime from reboots. A Bicep sketch of these scale settings follows.
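The following hedged Bicep sketch shows where those two knobs live; the resource names and API versions are assumptions, so verify them against the current Microsoft.Web reference:

param location string = resourceGroup().location

// Hypothetical Workflow Standard plan; WS3 mirrors the experiments above.
resource plan 'Microsoft.Web/serverfarms@2023-12-01' = {
  name: 'logicapp-ws3-plan'
  location: location
  sku: {
    name: 'WS3'
    tier: 'WorkflowStandard'
  }
  properties: {
    maximumElasticWorkerCount: 60 // maximum burst instances
  }
}

// Hypothetical Logic Apps Standard app on that plan.
resource site 'Microsoft.Web/sites@2023-12-01' = {
  name: 'sustained-messaging-logicapp'
  location: location
  kind: 'functionapp,workflowapp'
  properties: {
    serverFarmId: plan.id
    siteConfig: {
      minimumElasticInstanceCount: 2 // always-ready instances
    }
  }
}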
Announcing General Availability of Shared Workspace Gateways in Azure API Management
Shared workspace gateways reduce the cost of federating API management

Workspaces enable organizations to boost developer productivity and enhance API governance by federating API management. They provide API teams with the autonomy to independently manage APIs, while allowing the API platform team to centralize monitoring, enforce API policies and compliance, and unify API discovery within a developer portal. When we announced the general availability of workspaces in August, each workspace required a dedicated workspace gateway, providing a high degree of isolation for increased API security and reliability. This new capability allows you to associate up to thirty workspaces with a workspace gateway, offering the advantages of federated API management at a lower cost when runtime isolation between workspaces is not necessary.

Balance reliability, security, and cost when using workspaces

In Azure API Management, workspaces enable API teams to manage APIs, policies, subscriptions, and related resources independently from other teams. Each workspace requires a workspace gateway to run its APIs. Gateway settings (including scale, networking, and hostname) and computing resources, such as CPU and memory, are shared by all workspaces on a gateway. Since workspaces share the gateway's computing resources, resource exhaustion caused by a single API impacts APIs from all workspaces on that gateway. Therefore, it's important to consider reliability, security, and cost when choosing a deployment model for workspaces:

- Use dedicated gateways for mission-critical workloads: to maximize API reliability and security, assign each mission-critical workspace to its own dedicated gateway, avoiding shared use with other workspaces.
- Balance reliability, security, and cost: associate multiple workspaces with a gateway to balance reliability, security, and cost for non-critical workloads. Distributing workspaces across at least two gateways helps prevent issues, such as resource exhaustion or configuration errors, from impacting all APIs within the organization.
- Use distinct gateways for different use cases: group workspaces on a gateway based on a use case or network requirements. For instance, separate internal and external APIs by assigning them to different gateways.
- Prepare to quarantine troubled workspaces: use a proxy, such as Azure Application Gateway or Azure Front Door, in front of shared workspace gateways to simplify moving a workspace that's causing resource exhaustion to a different gateway, preventing impact on other workspaces sharing the gateway.

Get started with workspaces

The ability to associate multiple workspaces with a workspace gateway will continue to release in December and January, with pauses in the release rollout around the winter holidays. If you created a workspace gateway before the new release is rolled out to your service, you will need to recreate it to associate it with multiple workspaces. Updated documentation will be released in December, alongside pricing page updates that reflect the cost of associating more than five workspaces with a gateway. Get started by creating your first workspace.
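If you prefer infrastructure as code, a hedged Bicep sketch for creating a workspace on an existing instance might look like the following; the instance name is hypothetical and the API version is an assumption to verify against the current Microsoft.ApiManagement reference:

// Reference to an existing API Management instance (hypothetical name).
resource apim 'Microsoft.ApiManagement/service@2024-05-01' existing = {
  name: 'my-apim-instance'
}

resource teamWorkspace 'Microsoft.ApiManagement/service/workspaces@2024-05-01' = {
  parent: apim
  name: 'team-a'
  properties: {
    displayName: 'Team A'
    description: 'Workspace owned by the Team A API team'
  }
}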
Integration Environment and Application Monitoring enhancements (early preview refresh)

After our Public Preview announcement earlier this year, we are excited to share an early preview of significant enhancements to the Integration Environment and Application Monitoring experience. These updates are designed to make it easier for you to monitor, trace, and manage your Azure Integration Services (AIS) applications at scale.

Note: This capability is in limited preview and can be accessed via this link: https://aka.ms/aismon/refresh. It will be available publicly in January 2025.

What's New

Single-pane view into the health of all applications in Integration Environment. Gain a consolidated view of your application's health through Integration Environment:

- Leverage Azure Alerts to monitor individual resources and view all triggered alerts in one place.
- Understand the overall health of your application with a single-pane view, helping you stay proactive in identifying and resolving issues.

End-to-end message tracing across AIS resources. Using a single correlation ID, you can now trace messages seamlessly across all AIS resources in your application. This enables:

- A comprehensive end-to-end itinerary of your message flow across resources.
- Enhanced troubleshooting and diagnostics for complex workflows.

Note: We are actively addressing an issue that prevents Service Bus traces from displaying in some scenarios.

At-scale monitoring for Logic Apps. We've expanded monitoring capabilities for Logic Apps to support at-scale operations:

- Health Dashboard: monitor the health of one or multiple Logic Apps within your application (previously limited to a single Logic App).
- Bulk Resubmission: easily select and resubmit multiple Logic App runs in bulk, streamlining operational efficiency.

Improved browsing experience. The Integration Environment now provides a more intuitive browsing experience. Key enhancements include:

- API connections in the integration application view: easily locate and monitor API connections within your Logic App and in your integration application.
- Resource status visibility: quickly check the current status of resources.
- Plan details: view detailed information about your plan, including name and SKU.
- Customizable filters: tailor the columns to display the most relevant information for your monitoring needs.

Getting Started

This new capability will be available in Public Preview in early January. If you'd like early access, use this link: https://aka.ms/aismon/refresh.

Pre-requisites

To use this experience fully, you need a workspace-based Application Insights resource, and all the AIS resources in your integration application should push logs to the same workspace. When you use the dashboards, you select that workspace to power all the visualizations. The dashboards are built using Azure Workbooks and will be customizable, so you can extend them based on your business needs.

Learn more

Single pane view into health of all applications

In Integration Environment, the Insights menu item takes you to the aggregated view of the health of all applications. This view is built upon Azure Alerts: you can see the health based on the fired alerts. The screenshot below shows each application and the number of alerts by severity level. When you select a row and choose an application, a detailed table view is displayed, providing a drill-down into the alerts triggered by the resources within that application.
This centralized view consolidates alerts from various types of Azure Integration Services (AIS) resources, making it easier to monitor and manage them. The table includes details such as the associated resource, the triggered alerts, their severity levels, and a direct link to each alert for more in-depth information. This unified experience simplifies the process of tracking and addressing issues across your application resources. The Open Alert Details link gives further details about the specific entities which are in an unhealthy state. You can also take action on the alert here, update the user response, and add comments.

To summarize, within a single pane you can see the health of your application across the different AIS resources it includes, drill into the alerts that make up the health of the application, and even go one step further and update the user response, all without any context switching.

Monitoring Dashboard Enhancements

The workbook-based dashboards are accessible through the Insights menu within an application. Under Logic Apps, the Overview page provides an aggregated view of the health of all Logic Apps in the application. From this page, you can drill down to view the health of individual Logic Apps, explore detailed run statuses, and monitor workflows for each Logic App. The trend charts show the runs and their trends over the selected time period. As you are ready to troubleshoot further, the Runs tab gives more details.

The runs chart illustrates the total runs and their pass/fail rates for Logic Apps and their associated workflows. This widget provides a clear visual representation of workflow statuses, helping you quickly identify areas that may require attention. Selecting a row allows you to drill down into the specific runs for the selected Logic App or workflow. The runs for a workflow include all relevant details, with additional insights available in the properties bag to aid further troubleshooting. The table is filterable by run status, making it easier to focus on specific scenarios. Most importantly, it supports resubmission of failed runs, either individually or in bulk.

Additionally, each entry includes a unique correlation ID, which tracks the flow across all AIS resources. Selecting a row opens a detailed table showing the AIS processing hops for the message, providing a comprehensive view of its journey through the system. When you select a row in the runs table, we use this correlation ID to stitch together the timelines of the processing of this message across all AIS resources in the application (a sample query is sketched at the end of this post). You can also provide multiple operation IDs to look into the journey of multiple messages.

The final table on this page provides action-level details for the selected run, offering a deeper drill-down into each individual action. In the event of failures, the properties section includes error details to assist with root cause analysis.

What's Next

We are sharing this early preview to get your feedback; do not hesitate to reach out to us via this blog post or directly. We plan to release this in Public Preview in January. We are also targeting to include some of these capabilities, such as bulk resubmission and health based on alerts, in Logic Apps Standard as well. Stay tuned for more updates!
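For readers who want to explore the stitched journey directly in Log Analytics, a hedged KQL sketch along these lines reproduces the idea; the table and column names assume workspace-based Application Insights resources logging to a shared workspace, and the correlation ID is a placeholder:

// Union the request, dependency, and trace telemetry from all AIS resources
// in the shared workspace, then follow one message by its correlation ID.
union AppRequests, AppDependencies, AppTraces
| where OperationId == "<correlation-id>"
| project TimeGenerated, Type, AppRoleName, OperationName
| order by TimeGenerated asc

Each row is one hop in the message's processing, ordered by time, with AppRoleName identifying which resource handled it.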