Azure API Management
Troubleshooting 4xx and 5xx Errors with Azure APIM services
Part I - Troubleshooting 4xx Errors

Debugging and Troubleshooting Overview

API Management is essentially a proxy that forwards requests from the client to the destination API service. It can also modify or process a request, based on input from the client, before the request reaches the destination. In an ideal scenario, APIs configured in an APIM service return successful responses (mostly 200 OK) along with the accurate data expected from the API. In case of failure, you may see an error response code along with a precise message describing what went wrong during the API call. However, there are scenarios where API requests fail with generic 4xx or 5xx errors and no detailed error message, and it can be difficult to narrow down or isolate the source of the error.

In such cases, the first step is to isolate whether the error code is thrown by APIM or by the backend configured behind APIM. This is an important step because most error codes are generated by the backend, and APIM, being a proxy, forwards the response (error code) back to the user who initiated the request, which can make it look as if the error code was thrown by APIM.

Troubleshooting Azure APIM Failed Requests

Suppose you have initiated an API request to your APIM service and the request eventually fails with an "HTTP 500 - Internal Server Error" message. With generic error messages such as this, it becomes very difficult to isolate the cause or the source of the failed API request, since several internal and external components participate in the API invocation process.

- If responseCode matches backendResponseCode, there is an issue with the backend configured in APIM, and the backend should be investigated.
- If responseCode does not match backendResponseCode and errorReason is empty, check whether your policy logic is returning the error, using inspector traces.
- If errorReason is not empty, the problem is in APIM, and troubleshooting the specific error code can help resolve the issue.

Inspector Trace

If the issue is reproducible on demand, your best option is to enable tracing for your APIM API requests. Azure APIM services offer the option of enabling the "Ocp-Apim-Trace" header for your API requests. This generates a descriptive trace containing detailed information that helps you inspect the request processing step by step and gives you a head start on finding the source of the error.

Reference: https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-api-inspector
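For reference, here is a minimal sketch of requesting a trace from client code; it assumes Python with the requests library, and the gateway URL, operation path, and subscription key are placeholders. The subscription used must have tracing allowed.

import requests

# Placeholder gateway URL and operation path; replace with your own API.
url = "https://contoso-apim.azure-api.net/echo/resource"

headers = {
    "Ocp-Apim-Subscription-Key": "<subscription-key-with-tracing-allowed>",
    "Ocp-Apim-Trace": "true",  # ask the gateway to generate a trace for this call
}

response = requests.get(url, headers=headers)
print(response.status_code)

# When tracing is allowed for the subscription, the gateway returns a link to
# the detailed trace in the Ocp-Apim-Trace-Location response header.
print(response.headers.get("Ocp-Apim-Trace-Location"))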
Diagnostic Logging to Azure Monitor Log Analytics

You can also enable diagnostic logging for your APIM services. Diagnostic logs can be archived to a storage account, streamed to an Event Hubs resource, or sent to Azure Monitor Log Analytics, where they can be queried as the scenario requires. These logs provide rich information about operations and errors that is important for auditing as well as troubleshooting. The best part about the diagnostic logs is that they provide granular, per-request details for each of your API requests and assist you with further troubleshooting.

Reference Article: https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-use-azure-monitor#resource-logs

While storage accounts and Event Hubs work as single targeted destinations for diagnostic log collection/streaming, if you enable APIM diagnostic settings with a Log Analytics workspace as the destination, you are offered the following two modes of resource log collection:

- Azure diagnostics: data is written to the AzureDiagnostics table, which collates diagnostic information from multiple resources of different resource types.
- Resource specific: data is written to an individual table for each category of the resource. For APIM, the logs are sent to the ApiManagementGatewayLogs table.

Reference Article: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/resource-logs#send-to-log-analytics-workspace

If you want the resource logs to be sent to the ApiManagementGatewayLogs table, you have to choose the 'Resource specific' option.

The diagnostic logs generated in the Log Analytics workspace provide granular details for your API requests, such as the timestamp, request status, API/operation ID, time taken, caller/client IP, method, URL invoked, backend URL invoked, response code, backend response code, request size, response size, error source, error reason, error message, and so on.

NOTE: After the initial configuration, it may take a couple of hours for the diagnostic logs to be streamed to the destination by the resource provider.

Depending on your mode of log collection, here are a few sample queries that can be used to retrieve the diagnostic data for your API requests. You can also filter the logs further by fine-tuning the query to retrieve data for a specific API ID, a specific response code, and so on. Navigate to the Azure portal > your APIM service > Logs blade under the "Diagnostic Settings" section to execute the queries:

AzureDiagnostics
| where TimeGenerated > ago(24h)
| where _ResourceId contains "apim-service-name"
| limit 100

ApiManagementGatewayLogs
| where TimeGenerated > ago(24h)
| limit 100

Log to Application Insights

Another option is to integrate the APIM service with Application Insights for generating diagnostic log data. Integration of APIM with Application Insights: https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-app-insights

Below is a sample query against the "requests" table that retrieves the diagnostic data for Azure APIM API requests. Navigate to the respective Application Insights resource > Logs under the "Monitoring" section:

requests
| where timestamp > ago(24h)
| limit 100
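The Log Analytics logs can also be queried programmatically. Below is a sketch using the azure-monitor-query Python SDK to pull recent failed requests from the ApiManagementGatewayLogs table so that ResponseCode and BackendResponseCode can be compared, following the triage logic described earlier. The workspace ID is a placeholder, and the error-detail column names may vary slightly with the schema version.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Resource-specific table; assumes the 'Resource specific' collection mode.
query = """
ApiManagementGatewayLogs
| where TimeGenerated > ago(24h)
| where ResponseCode >= 400
| project TimeGenerated, ApiId, OperationId, Url, ResponseCode, BackendResponseCode,
          LastErrorSource, LastErrorReason, LastErrorMessage
| take 100
"""

result = client.query_workspace("<log-analytics-workspace-id>", query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(row)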
Alternatively, error handling in APIM can be carried out using the API Management error handling policies: https://docs.microsoft.com/en-us/azure/api-management/api-management-error-handling-policies

Now that we have enabled diagnostic logs to retrieve details about the different types of errors and error messages for failed API requests, let's walk through a couple of commonly observed 4xx and 5xx errors with APIM services. This troubleshooting series focuses on:

- Capturing some of the common 4xx and 5xx errors observed while making API requests through Azure APIM services.
- Providing guidance to APIM users on how they can debug and troubleshoot API requests that fail with these errors.
- Possible solutions for fixing some of the commonly observed 4xx and 5xx errors.

Troubleshooting 4xx and 5xx errors with APIM services

The very first and pivotal step in troubleshooting failed API requests is to investigate the source of the response code being returned. If you have enabled diagnostic logging for your APIM service, the "ResponseCode" and "BackendResponseCode" columns divulge this primary information. If the 4xx or 5xx response returned to the client is primarily being returned by the backend API (review the "BackendResponseCode" column), then the issue usually has to be troubleshot from the backend side, since the APIM service simply forwards the same response back to the client without actually contributing to the issue.

4xx Errors

Error code: 400

Scenario 1

Symptoms: The API Management instance has been working fine since its implementation. It now throws a '400 Bad Request' when invoked using the 'Test' option in API Management in the Azure portal, while invoking it from a client app or application yields the desired result.

Troubleshooting: From the above scenario, we understand that the API throws a '400 Bad Request' only when invoked from the 'Test' option in API Management in the Azure portal, while the other invocation method yields results. The error message clearly states that the endpoint could not be resolved. If the endpoint itself were the problem, the issue would occur across all invocation methods. Since that is not the case here, let us verify the endpoint. You can either try to resolve the endpoint from the same machine using the command prompt or try a ping test.

Resolution: In this kind of scenario, it is always recommended to check whether the API Management instance is deployed inside a virtual network, and whether it is configured in internal mode. As per the official documentation, "The Test console available on the Azure Portal will not work for Internal VNET deployed service, as the Gateway Url is not registered on the Public DNS. You should instead use the Test Console provided on the Developer portal."

Scenario 2

Symptoms: While invoking an API in API Management, we encounter "Error: The remote server returned an error: (400) Invalid client certificate".

Troubleshooting: This issue occurs when mutual client certificate authentication has been implemented; in this case, the client should pass a valid certificate that satisfies the conditions written in the policy:

<policies>
    <inbound>
        <base />
        <choose>
            <when condition="@(context.Request.Certificate == null || !context.Request.Certificate.Verify() || context.Request.Certificate.Issuer.Contains("*.azure-api.net") || !context.Request.Certificate.SubjectName.Name.Contains("*.azure-api.net") || context.Request.Certificate.Thumbprint != "4BB206E17EE41820B36112FD76CAE3E0F7104F36")">
                <return-response>
                    <set-status code="403" reason="Invalid client certificate" />
                </return-response>
            </when>
        </choose>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

To check whether the certificate is being passed, we can enable the Ocp-Apim-Trace header; in this case, the trace showed that no client certificate was received.

Resolution: The issue is resolved after the client passes a valid client certificate.
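For completeness, this is roughly what a client call with a certificate looks like; the sketch assumes Python with the requests library, and the gateway URL, subscription key, and certificate file paths are placeholders.

import requests

url = "https://contoso-apim.azure-api.net/echo/resource"
headers = {"Ocp-Apim-Subscription-Key": "<subscription-key>"}

# The certificate and private key are presented during the TLS handshake;
# APIM then exposes the certificate to policies via context.Request.Certificate.
response = requests.get(url, headers=headers, cert=("client-cert.pem", "client-key.pem"))
print(response.status_code, response.text)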
Similar Scenarios

Scenario 3

Error Reason: OperationNotFound
Error message: Unable to match incoming request to an operation.
Error Section: Backend

Resolution: Make sure that the operation being invoked on the API is configured and present in API Management. If not, add the operation or modify the request accordingly.

Scenario 4

Error Reason: ExpressionValueEvaluationFailure
Error message: Expression evaluation failed. EXPECTED400: URL cannot contain query parameters. Provide root site url of your project site (Example: https://sampletenant.sharepoint.com/teams/sampleteam )
Error Section: inbound

Resolution: Ensure that the URL contains only the query parameters defined for the API in API Management. Any mismatch may lead to such error messages. For example, if the expected input value is an integer and a string is supplied, the request may fail with this error.

Error code: 401 - Unauthorized issues

Scenario 1

Symptoms: The Echo API suddenly started throwing an HTTP 401 - Unauthorized error when invoking the operations under it.

Message:

HTTP/1.1 401 Unauthorized
{ "statusCode": 401, "message": "Access denied due to missing subscription key. Make sure to include subscription key when making requests to an API." }
{ "statusCode": 401, "message": "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription." }

Troubleshooting: To get access to the API, developers must first subscribe to a product. When they subscribe, they get a subscription key that is sent as part of the request header and is valid for any API in that product. Ocp-Apim-Subscription-Key is the request header that carries the subscription key of the product associated with this API; in the Test console the key is filled in automatically.

Regarding the error "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription.", it is clear that a wrong value of the Ocp-Apim-Subscription-Key request header is being sent when invoking the Create resource and Retrieve resource operations. You can check your subscription key for a particular product from the APIM developer portal by navigating to the Profile page after signing in; select the Show button to see the subscription keys for the products you have subscribed to. If you check the headers being sent from the Test tab, you will notice that the value of the Ocp-Apim-Subscription-Key request header is wrong. You might wonder how that is possible, because APIM automatically fills this request header with the right subscription key. Checking the Frontend definition of the Create resource and Retrieve resource operations under the Design tab reveals that these operations have a wrong, hard-coded value of the Ocp-Apim-Subscription-Key request header added under the Headers tab. Removing it resolves the invalid subscription key problem, but you would still get the missing subscription key error:

HTTP/1.1 401 Unauthorized
Content-Length: 152
Content-Type: application/json
Date: Sun, 29 Jul 2018 14:29:50 GMT
Vary: Origin
WWW-Authenticate: AzureApiManagementKey realm="https://pratyay.azure-api.net/echo", name="Ocp-Apim-Subscription-Key", type="header"
{ "statusCode": 401, "message": "Access denied due to missing subscription key. Make sure to include subscription key when making requests to an API." }

Go to the Echo API settings and check whether it is associated with any of the available products. If not, you must associate this API with a product so that you get a subscription key.

Resolution: Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is valid for any API in that product. If you created the APIM instance, you are an administrator already, so you are subscribed to every product by default.
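As a quick reference, the subscription key can be passed either as a request header or as a query parameter. The sketch below assumes Python with the requests library; the gateway URL and key are placeholders, and the header and query parameter names shown are the APIM defaults, which can be customized per API.

import requests

url = "https://contoso-apim.azure-api.net/echo/resource"
key = "<your-subscription-key>"

# Option 1: send the key in the Ocp-Apim-Subscription-Key header.
r1 = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})

# Option 2: send the key as a query parameter (default name: subscription-key).
r2 = requests.get(url, params={"subscription-key": key})

print(r1.status_code, r2.status_code)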
Error code: 401 - Unauthorized issues (OAuth 2.0)

Scenario

Symptoms: The Echo API has OAuth 2.0 user authorization enabled in the Developer Console. Before calling the API, the Developer Console obtains an access token on behalf of the user, which is sent in the Authorization header of the request. The call fails with a 401 Unauthorized response.

Troubleshooting: To troubleshoot the scenario, we start by checking the APIM inspector trace; the Ocp-Apim-Trace link can also be found in the response. We notice a "JWT Validation Failed: Claim Mismatched" message in the traces, meaning the token provided in the header could not be validated. To check the scope of the validate-jwt policy, select the Calculate effective policy button. If you don't see any access restriction policy implemented at any scope, the next validation step should be done at the product level, by navigating to the associated product and then clicking the Policies option.

<inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
        <openid-config url="https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration" />
        <required-claims>
            <claim name="aud">
                <value>bf795850-70c6-4f22- </value>
            </claim>
        </required-claims>
    </validate-jwt>
</inbound>

Resolution: The claim value provided in the claims section does not match the app registered in Azure AD. Provide the application ID of the registered client app in the claims section to fix the authorization error. After providing the valid app ID, the call returns HTTP/1.1 200 OK.

Error code: 403 - Forbidden issues

Symptoms: The GetSpeakers API operation fetches the details of speakers based on the value provided in the parameter. After a few days of use, the operation started throwing an HTTP 403 - Forbidden error, whereas the other operations keep working as expected.

Message:

HTTP/1.1 403 Forbidden
{ "statusCode": 403, "message": "Forbidden" }

Troubleshooting: To troubleshoot the scenario, we start by checking the APIM inspector trace; the Ocp-Apim-Trace link can also be found in the response. We notice an ip-filter policy that filters (allows/denies) calls from specific IP address ranges. To check the scope of the ip-filter policy, select the Calculate effective policy button. If you don't see any access restriction policy implemented at any scope, the next validation step should be done at the product level, by navigating to the associated product and then clicking the Policies option.

<inbound>
    <base />
    <choose>
        <when condition="@(context.Operation.Name.Equals("GetSpeakers"))">
            <ip-filter action="allow">
                <address-range from="13.66.140.128" to="13.66.140.143" />
            </ip-filter>
        </when>
    </choose>
</inbound>

Resolution: An HTTP 403 - Forbidden error can be thrown when an access restriction policy is in place. Since the caller's IP address is not in the allowed range, we need to allow that IP address in the policy to make it work.
Before:

<ip-filter action="allow">
    <address-range from="13.66.140.128" to="13.66.140.143" />
</ip-filter>

After:

<ip-filter action="allow">
    <address>13.91.254.72</address>
    <address-range from="13.66.140.128" to="13.66.140.143" />
</ip-filter>

Once we allow the IP address in the ip-filter policy, we receive the expected response.

Error code: 404

Symptoms: The Demo API is being invoked by any of the means below:
- Developer portal
- 'Test' option under API Management
- A client app like Postman
- User code

The result of the call is a 404 Not Found error code.

Troubleshooting: First, confirm that the issue still reproduces before proceeding with the troubleshooting steps. Note: the API Management instance is not deployed in any virtual network, which rules out network components as the cause. According to the API Management configuration, these are the settings:
- Name of the API: Demo API
- Web Service URL: http://echoapi.cloudapp.net/api
- Subscription Required: Yes

The 404 error is reproducible both from Postman and from the 'Test' option in the API Management portal. Based on the trace file, we can see that the error code is thrown from the forward-request section, which does not give us much insight. The configured web service URL is also reachable and displays visible content. Hence, we proceed to collect a browser trace while reproducing the issue in the API Management section of the Azure portal.

Steps to collect a browser trace:
- Reproduce the issue in the browser (Chrome; steps for other browsers may differ slightly).
- Press F12 and navigate to the Network tab.
- Make sure that the actions are being recorded.
- Right-click on any one of the actions and select the last option (Save all as HAR with content).

From the trace, we can see that the requested URL does not lead to proper content at the configured web service URL. This is why, even though the web service URL is reachable, the API still throws a 404 Not Found error code when invoked.

Resolution: Make sure that the web service URL leads to a valid destination; this resolves the issue. The best approach is to create a proper backend structure that hosts the APIs and then map it to the respective API in API Management, not vice versa. The following are the main reasons for encountering a 404 Not Found error from API Management:
- You might be using the wrong HTTP method (for example, the operation is defined as POST but you are calling it with GET).
- You might be calling a wrong URL (one that has an extra suffix or a wrong operation path).
- You might be using the wrong protocol (HTTP vs. HTTPS).

In our case, the error corresponds to the second point: the configured URL does not point to the right destination. This was confirmed by the browser trace, and correcting the URL/path resolves the issue.

Continue reading: 5xx Error Series

Calculating Chargebacks for Business Units/Projects Utilizing a Shared Azure OpenAI Instance
Azure OpenAI Service is at the forefront of technological innovation, offering REST API access to OpenAI's suite of revolutionary language models, including GPT-4, GPT-35-Turbo, and the Embeddings model series.

Enhancing Throughput for Scale

As enterprises seek to deploy OpenAI's powerful language models across various business units, they often require granular control over configuration and performance metrics. To address this need, Azure OpenAI Service is introducing dedicated throughput, a feature that provides a dedicated connection to OpenAI models with guaranteed performance levels. Throughput is quantified in tokens per second (tokens/sec), allowing organizations to precisely measure and optimize performance for both prompts and completions. The provisioned throughput model provides enhanced management and adaptability for varying workloads, guaranteeing system readiness for spikes in demand. This capability also ensures a uniform user experience and steady performance for applications that require real-time responses.

Resource Sharing and Chargeback Mechanisms

Large organizations frequently provision a single instance of Azure OpenAI Service that is shared across multiple internal departments. This shared use necessitates an efficient mechanism for allocating costs to each business unit or consumer based on the number of tokens consumed. This article delves into how chargeback is calculated for each business unit based on its token usage.

Leveraging Azure API Management Policies for Token Tracking

Azure API Management policies offer a powerful way to monitor and log the token consumption of each internal application. The process can be summarized in the following steps.

Sample code: refer to this GitHub repository for step-by-step instructions on how to build the solution outlined below: private-openai-with-apim-for-chargeback

1. Client applications authorize to API Management

To make sure only legitimate clients can call the Azure OpenAI APIs, each client must first authenticate against Azure Active Directory and then call the APIM endpoint. In this scenario, the API Management service acts on behalf of the backend API, and the calling application requests access to the API Management instance. The scope of the access token is between the calling application and the API Management gateway. In API Management, configure a policy (validate-jwt or validate-azure-ad-token) to validate the token before the gateway passes the request to the backend.

2. APIM redirects the request to the OpenAI service via a private endpoint

Upon successful verification of the token, Azure API Management (APIM) routes the request to the Azure OpenAI service completions endpoint; the response returned also includes the prompt and completion token counts.
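To make steps 1 and 2 concrete, here is a sketch of the client side: it acquires an Azure AD token and calls a chat completions operation exposed through APIM. It assumes Python with the azure-identity and requests libraries, and the tenant, application IDs, token scope, gateway URL, operation path, and api-version are placeholders that depend on how the API and app registrations are configured.

import requests
from azure.identity import ClientSecretCredential

# Placeholder app registration details for the calling application.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-app-id>",
    client_secret="<client-secret>",
)

# Scope of the app registration that fronts the API Management gateway.
token = credential.get_token("api://<apim-app-id>/.default").token

response = requests.post(
    "https://contoso-apim.azure-api.net/openai/deployments/<deployment-name>/chat/completions"
    "?api-version=2023-05-15",
    headers={
        "Authorization": f"Bearer {token}",
        "Ocp-Apim-Subscription-Key": "<subscription-key>",
    },
    json={"messages": [{"role": "user", "content": "Hello"}]},
)

# For non-streaming calls, the response body includes prompt, completion,
# and total token counts under "usage".
print(response.json().get("usage"))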
3. Capture and log the API response to Event Hub

Leverage the log-to-eventhub policy to capture outgoing responses for logging or analytics purposes. To use this policy, a logger needs to be configured in API Management:

# API Management service-specific details
$apimServiceName = "apim-hello-world"
$resourceGroupName = "myResourceGroup"

# Create logger
$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
New-AzApiManagementLogger -Context $context -LoggerId "OpenAiChargeBackLogger" -Name "ApimEventHub" -ConnectionString "Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>" -Description "Event hub logger with connection string"

Within the outbound policy section, pull specific data from the body of the response and send it to the previously configured Event Hub instance. This is not just a simple logging exercise; it is an entry point into a whole ecosystem of real-time analytics and monitoring capabilities:

<outbound>
    <choose>
        <when condition="@(context.Response.StatusCode == 200)">
            <log-to-eventhub logger-id="TokenUsageLogger">@{
                var responseBody = context.Response.Body?.As<JObject>(true);
                return new JObject(
                    new JProperty("Timestamp", DateTime.UtcNow.ToString()),
                    new JProperty("ApiOperation", responseBody["object"].ToString()),
                    new JProperty("AppKey", context.Request.Headers.GetValueOrDefault("Ocp-Apim-Subscription-Key", string.Empty)),
                    new JProperty("PromptTokens", responseBody["usage"]["prompt_tokens"].ToString()),
                    new JProperty("CompletionTokens", responseBody["usage"]["completion_tokens"].ToString()),
                    new JProperty("TotalTokens", responseBody["usage"]["total_tokens"].ToString())
                ).ToString();
            }</log-to-eventhub>
        </when>
    </choose>
    <base />
</outbound>

Event Hubs serves as a powerful fulcrum, offering seamless integration with a wide array of Azure and Microsoft services. For example, the logged data can be streamed directly to Azure Stream Analytics for real-time analytics or to Power BI for real-time dashboards. With Azure Event Grid, the same data can also be used to trigger workflows or automate tasks based on specific conditions met in the incoming responses. Moreover, the architecture is extensible to non-Microsoft services as well: Event Hubs can interact smoothly with external platforms like Apache Spark, allowing you to perform data transformations or feed machine learning models.

4. Data processing with Azure Functions

An Azure Function is invoked when data is sent to the Event Hub instance, allowing for bespoke data processing in line with your organization's unique requirements. For instance, this could range from dispatching the data to Azure Monitor, streaming it to Power BI dashboards, or even sending detailed consumption reports via Azure Communication Services.

[Function("TokenUsageFunction")]
public async Task Run([EventHubTrigger("%EventHubName%", Connection = "EventHubConnection")] string[] openAiTokenResponse)
{
    // Event Hub messages arrive as an array
    foreach (var tokenData in openAiTokenResponse)
    {
        try
        {
            _logger.LogInformation($"Azure OpenAI Tokens Data Received: {tokenData}");
            var OpenAiToken = JsonSerializer.Deserialize<OpenAiToken>(tokenData);
            if (OpenAiToken == null)
            {
                _logger.LogError("Invalid OpenAi Api Token Response Received. Skipping.");
                continue;
            }
            _telemetryClient.TrackEvent("Azure OpenAI Tokens", OpenAiToken.ToDictionary());
        }
        catch (Exception e)
        {
            _logger.LogError($"Error occurred when processing TokenData: {tokenData}", e.Message);
        }
    }
}
In the example above, the Azure Function processes the token response data from Event Hub and sends it to Application Insights telemetry, and a basic dashboard is configured in Azure to display the token consumption of each client application. This information can conveniently be used to compute chargeback costs. A sample dashboard query that fetches the tokens consumed by a specific client:

customEvents
| where name contains "Azure OpenAI Tokens"
| extend tokenData = parse_json(customDimensions)
| where tokenData.AppKey contains "your-client-key"
| project Timestamp = tokenData.Timestamp, Stream = tokenData.Stream, ApiOperation = tokenData.ApiOperation, PromptTokens = tokenData.PromptTokens, CompletionTokens = tokenData.CompletionTokens, TotalTokens = tokenData.TotalTokens

Azure OpenAI Landing Zone reference architecture

A crucial detail to ensure the effectiveness of this approach is to secure the Azure OpenAI service by implementing private endpoints and using managed identities for App Service to authorize access to Azure AI services. This limits access so that only the App Service can communicate with the Azure OpenAI service. Failing to do this would render the solution ineffective, as individuals could bypass the APIM/App Service layer and access the OpenAI service directly if they got hold of the OpenAI access key. Refer to the Azure OpenAI Landing Zone reference architecture to build a secure and scalable AI environment.

Additional Considerations

- If the client application is external, consider using an Application Gateway in front of Azure APIM.
- If "stream" is set to true, token counts are not returned in the response. In that case, libraries like tiktoken (Python) or gpt-3-encoder (JavaScript) for most GPT-3 models can be used to programmatically calculate token counts for the user prompt and the completion response; see the sketch after this list. A useful guideline to remember is that in typical English text, one token is approximately four characters. This equates to about three-quarters of a word, meaning that 100 tokens are roughly equivalent to 75 words. (P.S. Microsoft does not endorse or guarantee any third-party libraries.)
- A subscription key or a custom header like app-key can also be used to uniquely identify the client, as the appId in the OAuth token is not very intuitive.
- Rate limiting can be implemented for incoming requests using OAuth tokens or subscription keys, adding another layer of security and resource management.
- The solution can also be extended to redirect different clients to different Azure OpenAI instances. For example, some clients use an Azure OpenAI instance with default quotas, whereas premium clients consume an Azure OpenAI instance with dedicated throughput.
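As a sketch of that approach, the snippet below counts tokens with tiktoken and turns the counts into an approximate charge. The model name and the per-1K-token prices are illustrative assumptions, and exact counts for chat models also include a small per-message overhead.

import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")  # assumed model

prompt = "Summarize the quarterly sales report in three bullet points."
completion = "- Revenue grew 12%.\n- Costs fell 3%.\n- Operating margin improved."

prompt_tokens = len(encoding.encode(prompt))
completion_tokens = len(encoding.encode(completion))

# Illustrative prices per 1,000 tokens; substitute your negotiated rates.
PROMPT_PRICE_PER_1K = 0.0015
COMPLETION_PRICE_PER_1K = 0.002

cost = (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K + \
       (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K

print(prompt_tokens, completion_tokens, round(cost, 6))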
Conclusion

Azure OpenAI Service stands as an indispensable tool for organizations seeking to harness the immense power of language models. With the provisioned throughput feature, clients can define their usage limits in throughput units and freely allocate them to the OpenAI model of their choice. However, the financial commitment can be significant and depends on factors like the chosen model's type, size, and utilization. An effective chargeback system offers several advantages, such as heightened accountability, transparent costing, and judicious use of resources within the organization.

Enhanced API Developer Experience with the Microsoft-Postman partnership
Application Programming Interfaces (APIs) have emerged as the foundational technology powering application modernization. As developers move to modern applications and embrace an API-first world, an easy-to-use API management platform is the need of the hour. At Microsoft, we are on a mission to build a comprehensive and reliable API management platform that enables developers to streamline and secure APIs at scale.

Today, at Microsoft Ignite, the Azure API Management team is excited to announce our partnership with Postman, a leading API testing and development platform used by more than 20 million developers worldwide. This partnership is aimed both at simplifying API testing and monitoring with the Postman platform for Azure API developers and at making the Azure platform readily available for Postman developers to deploy APIs.

"This announcement is great news for organizations that recognize APIs as fundamental business assets critical to digital transformation," said Abhijit Kane, Postman co-founder. "By uniting the world's top API platform and a leading cloud provider, our two companies have taken a significant step in giving customers the comprehensive set of tools they need for accelerating the API lifecycle."

Boost developer productivity and achieve faster time to market with the integrated Azure API Management and Postman platform

These integrations give developers all the support they need to test their APIs in Postman and deploy them back on Azure. They take away unnecessary friction and let developers spend more time writing and shipping software. Let's take a closer look at how the Azure API Management platform and the Postman platform integrate to offer developers a fast path to build, test, deploy, and iterate on APIs.

Test Azure APIs faster with an integrated Postman platform

Azure API developers have instant access to the Postman API testing, monitoring, and development platform for rapid iteration on API changes. The integration support includes:
- Postman-initiated import from Azure API Management, with the ability to import OpenAPI definitions from Azure API Management
- Azure API Management-initiated export of APIs into Postman using "Run in Postman"

Accelerate the path to deployment for Postman-tested APIs on Azure

Once the APIs are designed, tested, and ready to go, the integration makes it easy to deploy them on Azure. The integration support includes:
- Export of OpenAPI definitions from Postman to Azure API Management

With over a million APIs published on the Azure API Management platform today, it is a battle-hardened, production-ready, and highly scaled platform that stretches from on-premises to multi-cloud. Over the past few years, the Azure API Management platform has expanded to support every stage of the API lifecycle, enhancing the overall experience for API developers, consumers, operators, and policymakers. The Postman partnership makes this even more frictionless. As Smit Patel, head of partnerships at Postman, said, "We're very proud because this is Postman's first bidirectional product alliance with a major cloud provider. Aligning with a cloud leader like Microsoft is terrific for our mutual customers and also boosts Postman's status as the enterprise-grade solution for the API-first world."

The Azure API Management team welcomes Postman to the Microsoft partner ecosystem, and together we look forward to enabling our developers to embrace an API-first culture to deliver innovations faster, create new revenue streams, and generate value for their end users.
Check out the Azure integration docs and the Postman integration docs for more details. To learn more about this news, read Postman's press release and Postman's blog.

Build next-gen apps with OpenAI and Microsoft Power Platform
Let's discuss how developers can leverage OpenAI's APIs to build next-gen applications using Microsoft Power Apps. We will use DALL·E 2 (a new AI system model) to create realistic images and art from a description in natural language.