Logic Apps Aviators Newsletter - March 2025
In this issue:
- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

March's Ace Aviator: Dieter Gobeyn

What's your role and title? What are your responsibilities?

I work as an Azure Solution Architect; however, I remain very hands-on and regularly develop solutions to stay close to the technology. I design and deliver end-to-end solutions, ranging from architectural analysis to full implementation. My responsibilities include solution design, integration analysis, contributing to development, reviewing colleagues' work, and proposing improvements to our platform. I also provide production support when necessary.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?

My days can vary greatly, but collaboration with my globally distributed team is always a priority. I begin my day promptly at 8 AM to align with different time zones. After our daily stand-up, I often reach out to colleagues to see if they need assistance, or follow up on emails and team messages. A significant portion of my day involves solution design: gathering requirements, outlining integration strategies, and collaborating with stakeholders. I also identify potential enhancements, perform preliminary analysis, and translate them into user stories. I spend time on technical development as well, building features, testing them thoroughly, and updating documentation for both internal and client use. On occasions when deeper investigation is needed, I support advanced troubleshooting, collaborating with our support team if issues demand additional expertise. If a release is scheduled, I sometimes manage deployment activities in the evening.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?

I've always valued the sense of community that comes from sharing knowledge. Early in my career, attending events and meeting fellow professionals helped me bridge the gap between theory and real-world practice. This informal environment encourages deeper, hands-on knowledge exchange, which often goes beyond what official documentation can provide. Now that I'm in a more senior role, I believe it's my responsibility, and my pleasure, to give back. Contributing to the community enables me to keep learning, connect with fantastic people, and grow both technically and personally.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?

Master the fundamentals, not just the tools. It's easy to get caught up in the newest frameworks, cloud platforms, and programming languages. However, what remains constant are the core concepts, such as networking, data structures, security, and system design. By understanding the 'why' behind each technology, you'll be better equipped to design future-proof solutions and adapt quickly as tools and trends evolve.

What has helped you grow professionally?

Curiosity and a commitment to continuous learning have been key. I'm always keen to understand the 'why' behind how things work. Outside my normal job, I pursue Microsoft Reactor sessions, community events, and personal projects to expand my skills. Just as important is receiving open, honest feedback from peers and being honest with oneself. Having mentors or colleagues who offer both challenges and support is crucial for growth, as they provide fresh perspectives and help you refine your skills.
In many cases, I've found it takes effort outside standard working hours to truly develop my skills, but it has always been worth it.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?

I'd love to see more uniformity and predictability across adapters, for example in terms of their availability for both stateless and stateful workflows. Currently, certain adapters, like the timer trigger, are either unavailable in stateless workflows or behave differently. Unifying adapter support would not only simplify solution design decisions but also reduce proof-of-concept overhead and streamline transitions between stateless and stateful workflows as requirements evolve.

News from our product group

Logic Apps Live Feb 2025
Missed Logic Apps Live in February? You can watch it here. You will find a live demo of exporting Logic Apps Standard to VS Code, some updates on the new Data Mapper user experience, and lots of examples of how to leverage Logic Apps to create your Gen AI solutions.

Exporting Logic App Standard to VS Code
Bringing existing Logic Apps Standard deployed in Azure into VS Code is now simpler with the new Create Logic Apps Workspaces from package capability.

New & Improved Data Mapper UX in Azure Logic Apps - Now in Public Preview!
We're excited to announce that a UX update for Data Mapper in Azure Logic Apps is now in public preview! We have continuously improved Data Mapper, which is already generally available (GA), based on customer feedback.

Parse or chunk content for workflows in Azure Logic Apps (Preview)
When working with Azure AI Search or Azure OpenAI actions, it's often necessary to convert content into tokens or divide large documents into smaller pieces. The Data Operations actions "Parse a document" and "Chunk text" can help by transforming content like PDFs, CSVs, and Excel files into tokenized strings and splitting them based on the number of tokens. These outputs can then be used in subsequent actions within your workflow.

Connect to Azure AI services from workflows in Azure Logic Apps
Integrate enterprise services, systems, and data with AI technologies by connecting your logic app workflows to Azure OpenAI and Azure AI Search resources. This guide offers an overview and practical examples of how to use these connector operations effectively in your workflow.

Power Automate migration to Azure Logic Apps (Standard)
Development teams often need to build scalable, secure, and efficient automation solutions. If your team is considering migrating flows from Microsoft Power Automate to Standard workflows in Azure Logic Apps, this guide outlines the key advantages of making the transition. Azure Logic Apps (Standard) is particularly beneficial for enterprises running complex, high-volume, and security-sensitive workloads.

AI playbook, examples, and other resources for workflows in Azure Logic Apps
AI capabilities are increasingly essential in applications and software, offering time-saving and innovative features like chat interactions. They also facilitate the creation of integration workloads across various services, systems, apps, and data within enterprises. This guide provides building blocks, examples, samples, and resources to demonstrate how to use AI services, such as Azure OpenAI and Azure AI Search, in conjunction with other services and systems to build automated workflows in Azure Logic Apps.
Collect ETW trace in Logic App Standard
An inline C# script to collect Event Tracing for Windows (ETW) traces and store them in a text file, from within your Logic Apps.

Typical Storage access issues troubleshooting
With this blog post we intend to give you more tools and visibility into how to troubleshoot your Logic App and accelerate restoring your service's availability.

Download Logic App content for Consumption and Standard Logic App in the Portal
It's common to see customers needing to download the JSON contents of their Logic Apps, either to keep a copy of the code or to initiate CI/CD. The methods to download this are very simple and accessible with a single button.

Running PowerShell inline with Az commands - Logic App Standard
With the availability of the inline "Execute Powershell code" action, a few questions have come up, such as how to execute Az commands with this action.

Deploy Logic App Standard with Application Routing Feature Based on Terraform and Azure Pipeline
This article shares a mature plan to deploy Logic App Standard and then configure the application routing features automatically, based on a Terraform template and an Azure DevOps pipeline.

News from our community

Azure Logic Apps: create Standard Logic App projects in Visual Studio Code from Azure portal export
Post by Stefano Demiliani
How many times have you needed to create a new Azure Logic App workflow starting from an existing one? Personally, this happens a lot… Starting with version 5.18.7 (published some days ago), the Azure Logic Apps (Standard) extension for Visual Studio Code provides the capability to create Standard Azure Logic App projects from an existing Logic App exported from the Azure portal.

Bridging the Gap: Azure Logic Apps Meets On-Prem Fileshares
Post by Tim D'haeyer
The end of BizTalk Server is fast approaching, signaling a significant shift in the Microsoft integration landscape. With this transition, the era of on-premises integration is drawing to a close, prompting many organizations to migrate their integration workloads to Azure. One key challenge in this process is: "How can I read and write from an on-premises file share using Logic Apps?" Thankfully, this functionality has been available for some time with Azure Logic Apps Standard.

Azure Logic Apps vs. Power Apps vs. Power Automate: What to Use When?
Post by Prashant Singh
The architect's dilemma: Logic Apps vs. Power Apps vs. Power Automate! In my latest blog, I compare Logic Apps, Power Automate, and Power Apps, helping you pick the right one!

Securing Azure Logic Apps: Prevent SQL Injection in Complex SQL Server Queries
Post by Cameron McKay
Executing complex queries as raw SQL is tempting in Logic App workflows. It's clear how to protect SQL CRUD actions in Logic Apps, but how do we protect our complex queries?

In the Logic App Standard tier, built-in connectors run locally within the same process as the logic app
Post by Sandro Pereira
In the Logic App Standard tier, built-in connectors run locally within the same process as the logic app, reducing latency and improving performance. This contrasts with the Consumption model, where many connectors rely on external dependencies, leading to potential delays due to network round-trips. This makes Logic App Standard an ideal choice for scenarios where performance and low-latency integration are critical, such as real-time data processing and enterprise API integrations.
Scaling Logic Apps Hybrid
Post by Massimo Crippa
Logic Apps Hybrid provides a consistent development, deployment, and observability experience across both cloud and edge applications. But what about scaling? Let's dive into that in this blog post.

Calling API Management in a different subscription on LA Standard
Post by Sandro Pereira
Welcome again to another Logic Apps Best Practices, Tips, and Tricks post. Today, we will discuss how to call, from Logic App Standard, an API exposed in API Management in a different subscription, using the in-app API Management connector.

How to enable API Management Connector inside VS Code Logic App Standard Workflow Designer
Post by Sandro Pereira
If you've been working with Azure Logic Apps Standard in Visual Studio Code and noticed that the API Management connector is conspicuously absent from the list of connectors inside the workflow designer, you're not alone. This is a typical behavior that many developers encounter, and understanding why it happens, and how to enable it, can save you a lot of headaches.

Do you have strict security requirements for your workflows? Azure Logic Apps is the solution.
Post by Stefano Demiliani
Azure Logic Apps offers robust solutions for enterprise-level workflows, emphasizing high performance, scalability, and stringent security measures. This article explores how Logic Apps ensures business continuity with geo-redundancy, automated backups, and advanced security features like IP restrictions and VNET integration. Discover why Azure Logic Apps is the preferred choice for secure and scalable automation in large organizations.

Introducing GenAI Gateway Capabilities in Azure API Management
We are thrilled to announce GenAI Gateway capabilities in Azure API Management: a set of features designed specifically for GenAI use cases.

Azure OpenAI Service offers a diverse set of tools, providing access to advanced models ranging from GPT-3.5 Turbo to GPT-4 and GPT-4 Vision, enabling developers to build intelligent applications that can understand, interpret, and generate human-like text and images. One of the main resources you have in Azure OpenAI is tokens. Azure OpenAI assigns quota for your model deployments, expressed in tokens per minute (TPM), which is then distributed across your model consumers: different applications, developer teams, or departments within the company.

Starting with a single application integration, Azure makes it easy to connect your app to Azure OpenAI: your intelligent application connects to Azure OpenAI directly using an API key, with a TPM limit configured at the model deployment level. However, when your application portfolio starts growing, you are presented with multiple apps calling one or even multiple Azure OpenAI endpoints, deployed as pay-as-you-go or Provisioned Throughput Units (PTUs) instances. That comes with certain challenges:

- How can we track token usage across multiple applications?
- How can we do cross-charges for multiple applications/teams that use Azure OpenAI models?
- How can we make sure that a single app does not consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI models?
- How can we make sure that the API key is securely distributed across multiple applications?
- How can we distribute load across multiple Azure OpenAI endpoints?
- How can we make sure that PTUs are used first before falling back to pay-as-you-go instances?

To tackle these operational and scalability challenges, Azure API Management has built a set of GenAI Gateway capabilities:

- Azure OpenAI Token Limit Policy
- Azure OpenAI Emit Token Metric Policy
- Load Balancer and Circuit Breaker
- Import Azure OpenAI as an API
- Azure OpenAI Semantic Caching Policy (in public preview)

Azure OpenAI Token Limit Policy

Azure OpenAI Token Limit policy allows you to manage and enforce limits per API consumer based on the usage of Azure OpenAI tokens. With this policy you can set limits, expressed in tokens per minute (TPM). The policy provides flexibility to assign token-based limits on any counter key, such as subscription key, IP address, or any other arbitrary key defined through a policy expression. The policy also enables pre-calculation of prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI backend if the prompt already exceeds the limit. Learn more about this policy here.
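For illustration, here is a minimal sketch of the policy in an API's inbound section. The policy name and attributes follow the public policy reference; the choice of counter key and the 5000 TPM figure are illustrative assumptions, not values from this announcement:

```xml
<policies>
    <inbound>
        <base />
        <!-- Illustrative: cap each APIM subscription at 5000 Azure OpenAI tokens per minute.
             estimate-prompt-tokens="true" makes the gateway estimate prompt size up front,
             so over-limit requests are rejected before they reach the Azure OpenAI backend. -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="true" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```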
Azure OpenAI Emit Token Metric Policy

Azure OpenAI Emit Token Metric policy enables you to configure token usage metrics to be sent to Azure Application Insights, providing an overview of the utilization of Azure OpenAI models across multiple applications or API consumers. This policy captures prompt, completion, and total token usage metrics and sends them to the Application Insights namespace of your choice. Moreover, you can configure or select from pre-defined dimensions to split token usage metrics, enabling granular analysis by subscription ID, IP address, or any custom dimension of your choice. Learn more about this policy here.

Load Balancer and Circuit Breaker

Load Balancer and Circuit Breaker features allow you to spread the load across multiple Azure OpenAI endpoints. With support for round-robin, weighted (new), and priority-based (new) load balancing, you can now define your own load distribution strategy according to your specific requirements. Define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs. In the event of any disruption, a circuit breaker mechanism kicks in, seamlessly transitioning to lower-priority instances based on predefined rules. Our updated circuit breaker now features dynamic trip duration, leveraging values from the retry-after header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends to their fullest. Learn more about load balancer and circuit breaker here.

Import Azure OpenAI as an API

The new Import Azure OpenAI as an API experience in Azure API Management provides an easy, single-click way to import your existing Azure OpenAI endpoints as APIs. We streamline the onboarding process by automatically importing the OpenAPI schema for Azure OpenAI and setting up authentication to the Azure OpenAI endpoint using managed identity, removing the need for manual configuration. Additionally, within the same user-friendly experience, you can pre-configure Azure OpenAI policies, such as token limit and emit token metric, enabling swift and convenient setup. Learn more about Import Azure OpenAI as an API here.

Azure OpenAI Semantic Caching Policy

Azure OpenAI Semantic Caching policy empowers you to optimize token usage by leveraging semantic caching, which stores completions for prompts with similar meaning. Our semantic caching mechanism leverages Azure Redis Enterprise or any other external cache compatible with RediSearch and onboarded to Azure API Management. By leveraging the Azure OpenAI Embeddings model, this policy identifies semantically similar prompts and stores their respective completions in the cache. This approach enables completion reuse, resulting in reduced token consumption and improved response performance. Learn more about semantic caching policy here.
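As a hedged sketch of how the caching pair fits together: the lookup policy runs inbound and the store policy runs outbound. The backend ID, score threshold, and cache duration below are illustrative assumptions rather than values from this announcement:

```xml
<policies>
    <inbound>
        <base />
        <!-- Look for a semantically similar prompt in the external cache.
             "embeddings-backend" is an assumed name for a backend entity
             pointing at an Azure OpenAI embeddings deployment. -->
        <azure-openai-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-backend"
            embeddings-backend-auth="system-assigned" />
    </inbound>
    <outbound>
        <!-- Store the completion for 120 seconds (illustrative duration). -->
        <azure-openai-semantic-cache-store duration="120" />
        <base />
    </outbound>
</policies>
```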
Get Started with GenAI Gateway Capabilities in Azure API Management

We're excited to introduce these GenAI Gateway capabilities in Azure API Management, designed to empower developers to efficiently manage and scale their applications leveraging Azure OpenAI services. Get started today and bring your intelligent application development to the next level with Azure API Management.

Inbound private endpoint for Standard v2 tier of Azure API Management

Standard v2 was announced in general availability on April 1st, 2024. Customers can now configure an inbound private endpoint (preview) for their API Management Standard v2 instance to allow clients in a private network to securely access the API Management gateway over Azure Private Link.

The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

Inbound private endpoint

With a private endpoint and Private Link, you can:

- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Use policy to distinguish traffic that comes from the private endpoint (see the sketch at the end of this section).
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
- Combine with outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

Preview limitations

Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. In addition, each API Management instance can support at most 100 Private Link connections. To participate in the preview and add an inbound private endpoint to your Standard v2 instance, you must complete a request form. The Azure API Management team will review your request and respond via email within five business days.

Learn more:
- API Management v2 tiers FAQ
- API Management v2 tiers documentation
- API Management overview documentation
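On the policy bullet above: the announcement doesn't prescribe a specific mechanism, but one way to sketch restricting an API to private endpoint traffic is the long-standing ip-filter policy, assuming, purely hypothetically, that clients reach the private endpoint from the 10.0.1.0/24 subnet:

```xml
<policies>
    <inbound>
        <base />
        <!-- Hypothetical subnet: replace with the address range your private
             network clients actually use. Requests arriving from any other
             address are rejected by the gateway. -->
        <ip-filter action="allow">
            <address-range from="10.0.1.0" to="10.0.1.255" />
        </ip-filter>
    </inbound>
</policies>
```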
Expanding GenAI Gateway Capabilities in Azure API Management

In May 2024, we introduced GenAI Gateway capabilities: a set of features designed specifically for GenAI use cases. Today, we are happy to announce that we are adding new policies to support a wider range of large language models through the Azure AI Model Inference API. These new policies work in a similar way to the previously announced capabilities, but can now be used with a wider range of LLMs.

The Azure AI Model Inference API enables you to consume the capabilities of models available in the Azure AI model catalog in a uniform and consistent way. It allows you to talk to different models in Azure AI Studio without changing the underlying code.

Working with large language models presents unique challenges, particularly around managing token resources. Token consumption impacts the cost and performance of intelligent apps calling the same model, making it crucial to have robust mechanisms for monitoring and controlling token usage. The new policies aim to address these challenges by providing detailed insights and control over token resources, ensuring efficient and cost-effective use of models deployed in Azure AI Studio.

LLM Token Limit Policy

LLM Token Limit policy (preview) provides the flexibility to define and enforce token limits when interacting with large language models available through the Azure AI Model Inference API.

Key features:
- Configurable token limits: Set token limits for requests to control costs and manage resource usage effectively.
- Prevents overuse: Automatically blocks requests that exceed the token limit, ensuring fair use and eliminating the noisy neighbour problem.
- Seamless integration: Works seamlessly with existing applications, requiring no changes to your application configuration.

Learn more about this policy here.

LLM Emit Token Metric Policy

LLM Emit Token Metric policy (preview) provides detailed metrics on token usage, enabling better cost management and insights into model usage across your application portfolio.

Key features:
- Real-time monitoring: Emit metrics in real time to monitor token consumption.
- Detailed insights: Gain insights into token usage patterns to identify and mitigate high-usage scenarios.
- Cost management: Split token usage by any custom dimension to attribute cost to different teams, departments, or applications.

Learn more about this policy here.

LLM Semantic Caching Policy

LLM Semantic Caching policy (preview) is designed to reduce latency and token consumption by caching responses based on the semantic content of prompts.

Key features:
- Reduced latency: Cache responses to frequently requested queries to decrease response times.
- Improved efficiency: Optimize resource utilization by reducing redundant model inferences.
- Content-based caching: Leverages semantic similarity to determine which response to retrieve from the cache.

Learn more about this policy here.

Get Started with Azure AI Model Inference API and Azure API Management

We are committed to continuously improving our platform and providing the tools you need to leverage the full potential of large language models. Stay tuned as we roll out these new policies across all regions, and watch for further updates and enhancements as we continue to expand our capabilities. Get started today and bring your intelligent application development to the next level with Azure API Management.
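To show how the first two policies described above compose in a single policy document, here is a minimal sketch. The llm-token-limit and llm-emit-token-metric policy names follow the public policy reference; the limit, namespace, and dimension values are illustrative assumptions:

```xml
<policies>
    <inbound>
        <base />
        <!-- Illustrative: cap each subscription at 2000 tokens per minute,
             with prompt tokens estimated gateway-side. -->
        <llm-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="2000"
            estimate-prompt-tokens="true" />
        <!-- Emit prompt/completion/total token metrics, split per subscription. -->
        <llm-emit-token-metric namespace="llm-usage">
            <dimension name="Subscription ID" value="@(context.Subscription.Id)" />
        </llm-emit-token-metric>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```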
Introducing Azure API Management Policy Toolkit

We're excited to announce the early release of the Azure API Management Policy Toolkit, a set of libraries and tools designed to change how developers work with API Management policies, making policy management more approachable, testable, and efficient.

Empowering developers with the Azure API Management Policy Toolkit

Policies have always been at the core of Azure API Management, offering powerful capabilities to secure APIs, change their behavior, and transform requests and responses. Recently, we've made policies easier to understand and manage by adding Copilot for Azure features for Azure API Management. This allows you to create and explain policies with AI help directly within the Azure portal. This powerful tool lets developers create policies using simple prompts or get detailed explanations of existing policies. This makes it much easier for new users to write policies and makes all users more productive.

Now, with the Policy Toolkit, we're taking another significant step forward. This toolkit brings policy management even closer to the developer experience you know.

Elevating the policy development experience

Azure API Management policies are written in Razor format, which, for those unfamiliar with it, can be difficult to read and understand, especially when dealing with large policy documents that include expressions. Testing and debugging policy changes requires deployment to a live Azure API Management instance, which slows down the feedback loop even for small edits.

The Policy Toolkit addresses these challenges. You can now author your policies in C#, a language that feels natural and familiar to many developers, and write tests against them. This shift improves the policy writing experience, makes policies more readable, and shortens the feedback loop for policy changes.

Key toolkit features to transform your workflow:
- Consistent policy authoring: Write policies in C#. No more learning Razor syntax and mixing XML and C# in the same document.
- Syntax checking: Compile your policy documents to catch syntax errors and generate Razor-based equivalents.
- Unit testing: Write unit tests alongside your policies using your favorite unit testing framework.
- CI/CD integration: Integrate the Policy Toolkit into automation pipelines for testing and compilation into Razor syntax for deployment.

Current limitations

While we're excited about the capabilities of the Policy Toolkit, we want to be transparent about its current limitations:
- Not all policies are supported yet, but we're actively working on expanding the coverage.
- We are working on making the Policy Toolkit available as a NuGet package. In the meantime, you'll need to build the solution on your own.
- Unit testing is limited to policy expressions and is not supported for entire policy documents yet.

Get Started Today!

We want you to try the Azure API Management Policy Toolkit and see if it helps streamline your policy management workflow. Check out the documentation to get started. We're eager to hear your feedback!

By bringing policy management closer to the developer, we're opening new possibilities to efficiently manage your API Management policies. Whether you're using the AI-assisted approach with Copilot for Azure or diving deep into C# with the Policy Toolkit, we're committed to making policy management more approachable and powerful.
Announcing General Availability of Shared Workspace Gateways in Azure API Management

Shared workspace gateways reduce the cost of federating API management. Workspaces enable organizations to boost developer productivity and enhance API governance by federating API management. They provide API teams with the autonomy to independently manage APIs, while allowing the API platform team to centralize monitoring, enforce API policies and compliance, and unify API discovery within a developer portal.

When we announced the general availability of workspaces in August, each workspace required a dedicated workspace gateway, providing a high degree of isolation for increased API security and reliability. This new capability allows you to associate up to thirty workspaces with a workspace gateway, offering the advantages of federated API management at a lower cost when runtime isolation between workspaces is not necessary.

Balance reliability, security, and cost when using workspaces

In Azure API Management, workspaces enable API teams to manage APIs, policies, subscriptions, and related resources independently from other teams. Each workspace requires a workspace gateway to run its APIs. Gateway settings (including scale, networking, and hostname) and computing resources (such as CPU and memory) are shared by all workspaces on a gateway. Because workspaces share a gateway's computing resources, resource exhaustion caused by a single API impacts APIs from all workspaces on that gateway. Therefore, it's important to consider reliability, security, and cost when choosing a deployment model for workspaces:

- Use dedicated gateways for mission-critical workloads: To maximize API reliability and security, assign each mission-critical workspace to its own dedicated gateway, avoiding shared use with other workspaces.
- Balance reliability, security, and cost: Associate multiple workspaces with a gateway to balance reliability, security, and cost for non-critical workloads. Distributing workspaces across at least two gateways helps prevent issues, such as resource exhaustion or configuration errors, from impacting all APIs within the organization.
- Use distinct gateways for different use cases: Group workspaces on a gateway based on a use case or network requirements. For instance, separate internal and external APIs by assigning them to different gateways.
- Prepare to quarantine troubled workspaces: Use a proxy, such as Azure Application Gateway or Azure Front Door, in front of shared workspace gateways to simplify moving a workspace that's causing resource exhaustion to a different gateway, preventing impact on other workspaces sharing the gateway.

Get started with workspaces

The ability to associate multiple workspaces with a workspace gateway will continue to release in December and January, with pauses in the release rollout around the winter holidays. If you created a workspace gateway before the new release is rolled out to your service, you will need to recreate it to associate it with multiple workspaces. Updated documentation will be released in December, alongside pricing page updates that reflect the cost of associating more than five workspaces with a gateway. Get started by creating your first workspace.
Announcing the Public Preview of the Premium v2 tier of Azure API Management

Today, we are excited to announce the public preview of the Azure API Management Premium v2 tier. Superior capacity, the highest entity limits, unlimited included calls, and the most comprehensive set of features set the Premium tier apart from other API Management tiers. Customers rely on the Premium tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between VNet injection or VNet integration (introduced in the Standard v2 tier) options.

New and improved VNet injection

Using VNet injection in Premium v2 no longer requires any network security group rules, route tables, or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently and without affecting each other.

You can now configure your APIs with complete networking flexibility: force tunnel all outbound traffic on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2 instance, all without constraints.

Preview limitations

The public preview of the Premium v2 tier is a limited-access feature, available only in selected public regions, and requires creating a new service instance. To participate in the preview and deploy a Premium v2 instance, you must complete a request form. The Azure API Management team will review your request and respond via email within five business days. For pricing information and regional availability, please visit the API Management pricing page.

Learn more:
- API Management v2 tiers documentation
- API Management v2 tiers FAQ
- API Management overview documentation
Azure Integration Services Unveils New Features at Microsoft Ignite 2024

In today's fast-paced digital landscape, businesses are turning to AI to drive innovation and maintain their competitive edge. At Microsoft Ignite 2024, Azure Integration Services introduces groundbreaking features that seamlessly integrate AI into your workflows, without disrupting your operations. By putting AI at the forefront, Azure Integration Services helps enterprises streamline business processes, enhance customer experiences, and unlock new capabilities. While AI is a powerful driver of transformation, modernization of your integration platforms is equally critical. Azure Integration Services delivers both, empowering your organization to modernize integrations while tapping into AI innovation. In this blog, we'll explore how the latest updates to Azure Integration Services equip your organization with the tools and knowledge to integrate AI into workflows, modernize integrations, and create a foundation that's both scalable and adaptable to future business needs.

Future-Proof Your Business: Embrace AI with Azure Integration Services

Azure Integration Services continues to transform the way businesses leverage AI, enhancing Azure API Management and expanding Azure Logic Apps capabilities across diverse environments.

Support for GPT-4o (Text and Images) Across All GenAI Policies in Azure API Management
Our expanded support for GPT-4o models (text and image) within Azure API Management's Generative AI policies brings AI-driven innovation to your fingertips. New features like the Token Limit Policy, Token Metric Policy, and Semantic Caching Policy help businesses manage GPT-4 models in Azure OpenAI deployments more effectively. Learn more about how these policies unlock new capabilities here.

Generative AI Gateway Token Quota in Azure API Management
This enhancement to the Token Limit Policy gives businesses greater flexibility with daily, weekly, or monthly token quotas. With the ability to control costs, track usage trends, and optimize token consumption, you can support dynamic AI-driven innovation while staying within your budget. Explore how this drives cost-controlled AI experimentation here.

AI Capabilities in the Azure Logic Apps Consumption SKU
We are excited to announce the public preview of AI capabilities in the Azure Logic Apps Consumption SKU, bringing AI directly into your workflows with the Azure AI Search Connector, Azure OpenAI Connector, and Forms Recognizer. These tools enable intelligent document processing, enhanced search, and language capabilities, all essential for creating dynamic and smarter workflows. By adding AI-powered connectors to the Consumption SKU, businesses of all sizes can innovate without the complexity of managing multiple environments. Ready to integrate AI into your workflows? Learn more about these AI capabilities here.

Templates Support in Azure Logic Apps Standard
Azure Logic Apps makes it easier than ever to launch integrations quickly, whether you're orchestrating simple data transfers or complex workflows. With pre-built workflow templates, you can accelerate integration scenarios, reducing development time while ensuring your workflows meet unique business needs. Explore how these templates can speed up your integration process here.

Modernize Without Disruptions

While innovation is crucial, maintaining operational stability is just as important. Azure Integration Services ensures that businesses can modernize their integration systems without causing disruptions, even during critical migrations or cloud transitions.
Logic Apps Hybrid Deployment Model
For businesses with specialized integration requirements, the new Hybrid Deployment Model allows workflows to run on customer-managed infrastructure, whether on-premises, in a private cloud, or in a third-party public cloud. This ensures that businesses can meet regulatory, privacy, or network demands while benefiting from Azure's robust connector library for SaaS integration. Learn how this hybrid approach can help your organization meet unique integration requirements.

Premium Integration Account in Azure Logic Apps
The Premium Integration Account enhances B2B integrations with higher throughput, scalability, and support for advanced security like VNET integration. This offering is optimized for high-performance, mission-critical workloads and provides the reliability your business depends on. Discover how the Premium Integration Account can power your enterprise-grade integrations here.

Deployment slots in Azure Logic Apps Standard
This feature is designed to enable zero-downtime deployment for mission-critical Logic Apps, allowing you to update and deploy new versions seamlessly without disrupting end users. Deployment slots bring enterprise-grade availability to your Logic Apps, making it easier to meet high-availability requirements. Learn more about setting up deployment slots and optimizing your deployment strategy in our documentation.

Automate Build and Deployment for Standard Logic App Workflows with Azure DevOps
This release streamlines the deployment process for single-tenant Logic Apps using Azure DevOps, ensuring consistency and efficiency across environments. Start optimizing your workflow deployments today and unlock the power of automated CI/CD for your Logic Apps! For setup details and best practices, check out our documentation here.

Advanced Enterprise Features for API Management

In addition to AI and integration capabilities, Azure API Management continues to evolve with advanced enterprise features designed to streamline operations, enhance security, and improve performance at scale. Let's take a look at some of the key advancements that will transform your API management experience.

Shared Workspace Gateways in Azure API Management
This feature allows businesses to connect multiple workspaces to a single gateway, reducing operational costs and simplifying API management. By federating API management across up to 30 workspaces, organizations can maintain decentralized control while unifying oversight through a central developer portal. This means you can innovate rapidly without sacrificing the security and scalability your enterprise demands. Start simplifying your API management here.

Azure API Management Premium v2 Tier
For businesses managing APIs at scale, the new Premium v2 Tier offers unmatched performance. With higher entity limits, unlimited API requests, and flexible networking options, the Premium v2 Tier supports large-scale enterprise needs, all while offering greater stability and performance. Explore the power of Premium v2 and how it can drive your organization forward.

Fully Managed API Analysis in Azure API Center
Simplify API governance with fully managed API analysis in Azure API Center. Automatic linting ensures your API definitions align with company standards, helping to maintain high-quality APIs while reducing manual configuration. Learn more about API analysis and how it ensures consistency and quality across all your APIs here.
Synchronization Between Azure API Center and Azure API Management
This integration brings together the governance power of API Center with the management capabilities of API Management, offering a unified solution for API lifecycle management. With this integration, you can now easily sync your API Management instance directly with API Center for streamlined API discovery, centralized tracking, and enhanced governance. This solution simplifies the API lifecycle, improving operational efficiency while ensuring comprehensive oversight and governance across your organization's APIs.

API security posture management is now natively available in Defender CSPM
We're excited to announce that API security posture management is now natively integrated into Defender CSPM, offering comprehensive visibility and proactive risk analysis for Azure API Management APIs. This integration helps security teams identify vulnerabilities, prioritize best practices, and assess API exposure risks within the broader application context. Additionally, it expands sensitive data discovery to include API URLs, paths, and query parameters, enabling efficient tracking and mitigation of data exposure risks across cloud applications.

Empower Your Business for the Future with Azure Integration Services

With the latest innovations at Microsoft Ignite 2024, Azure Integration Services ensures that businesses can move forward with confidence, modernizing their integration systems without disruption while leveraging the power of AI. Whether you're managing legacy migrations, automating workflows, or optimizing for AI-driven business success, Azure Integration Services provides the flexibility, scalability, and stability to drive your future growth. Ready to future-proof your business? Start your AI and integration journey with Azure today!
GPT-4o Support and New Token Management Feature in Azure API Management

We're happy to announce new features coming to Azure API Management that enhance your experience with GenAI APIs. Our latest release brings expanded support for GPT-4o models, including text- and image-based input, across all GenAI Gateway capabilities. Additionally, we're expanding our token limit policy with a token quota capability to give you even more control over your token consumption.

Token quota

This extension of the token limit policy is designed to help you manage token consumption more effectively when working with large language models (LLMs).

Key benefits of token quota:
- Flexible quotas: In addition to rate limiting, set token quotas on an hourly, daily, weekly, or monthly basis to manage token consumption across clients, departments, or projects.
- Cost management: Protect your organization from unexpected token usage costs by aligning quotas with your budget and resource allocation.
- Enhanced visibility: In combination with the emit-token-metric policy, track and analyze token usage patterns to make informed adjustments based on real usage trends.

With this new capability, you can empower your developers to innovate while maintaining control over consumption and costs. It's the perfect balance of flexibility and responsible consumption for your AI projects. Learn more about token quota in our documentation. (A hedged configuration sketch appears at the end of this post.)

GPT-4o support

GPT-4o integrates text and images in a single model, enabling it to handle multiple content types simultaneously. Our latest release enables you to take advantage of the full power of GPT-4o with expanded support across all GenAI Gateway capabilities in Azure API Management.

Key benefits:
- Cost efficiency: Control and attribute costs with token monitoring, limits, and quotas. Return cached responses for semantically similar prompts.
- High reliability: Enable geo-redundancy and automatic failovers with load balancing and circuit breakers.
- Developer enablement: Replace custom backend code with built-in policies. Publish AI APIs for consumption.
- Enhanced governance and monitoring: Centralize monitoring and logs for your AI APIs.

Phased rollout and availability

We're excited about these new features and want to ensure you have the most up-to-date information about their availability. As with any major update, we're implementing a phased rollout strategy to ensure safe deployment across our global infrastructure. Because of that, some of your services may not have these updates until the deployment is complete. These new features will be available first in the new SKUv2 of Azure API Management, followed by the SKUv1 rollout towards the end of 2024.

Conclusion

These new features in Azure API Management represent our step forward in managing and governing your use of GPT-4o and other LLMs. By providing greater control, visibility, and traffic management capabilities, we're helping you unlock the full potential of generative AI while keeping resource usage in check. We're excited about the possibilities these new features bring and are committed to expanding their availability. As we continue our phased rollout, we appreciate your patience and encourage you to keep an eye out for the updates.
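To make the quota capability described above concrete, here is a minimal configuration sketch. The token-quota and token-quota-period attribute names are our assumption based on the quota description in this post; verify them against the current token limit policy reference before use. All values are illustrative:

```xml
<policies>
    <inbound>
        <base />
        <!-- Assumed attribute names (token-quota, token-quota-period); check the
             azure-openai-token-limit policy reference for the exact syntax.
             Combines a per-minute rate limit with a monthly token budget,
             both illustrative values, keyed per subscription. -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            token-quota="1000000"
            token-quota-period="Monthly"
            estimate-prompt-tokens="true" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```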