Latest Discussions
Unable to get Python 3.12 running on Azure App Service
Good day, I am having endless trouble getting a Python app to run on Azure App Service. I cannot find anywhere to install modules with pip, which I need in order to install openai. Deployment goes through, but it fails to install the pip modules. Can someone please assist, as I've been struggling for two days now? I am fairly new to this process. Thank you in advance!
WWernerKoen · Mar 11, 2025 · Occasional Reader · 8 Views · 0 likes · 0 Comments

Azure App that was working with MS SQL Server database now not working with Azure SQL Database
We had a web page app in Azure that ran a form for data retrieval. When we removed the SQL Server and moved the database to Azure SQL, the app no longer works. We tried the ADO.NET (SQL authentication) connection string from the database page, but the app is not working. The error we are getting is "The page cannot be displayed because an internal server error has occurred." We are having trouble figuring out whether we have a connection issue to the database or a problem with the app itself. Any suggestions would be appreciated.
WalterWood44 · Jun 06, 2024 · Copper Contributor · 242 Views · 0 likes · 0 Comments

[PowerPoint Add-in] The iframe disappears when printing the slide.
Hi everyone. My add-in embeds an iframe to display a website in the slide. I can view and interact with the iframe normally in both edit mode and presentation mode. However, when I print the slide, the iframe does not display. Does anyone know why this is happening? Thank you very much.
Quang_Nguyen_DSS · May 22, 2024 · Copper Contributor · 245 Views · 0 likes · 0 Comments

Azure Function App Http Javascript render simple html file to replicate jsrsasign sign certificate
Good day, please help.
1. In Power BI I'm trying to render the jsrsasign JavaScript certificate signing; I only got it working via an HTML file. So I'm trying to read the HTML file, a simple hello to start off with. Am I better off going directly to jsrsasign?
2. Locally in VS I got the simple function to return "Hello Azure", but trying to read the simple HTML file executes with no error; if I copy the URL into Postman I just get a 401 / no content found. I'm not sure how to debug further, as in VS I get an OK status and nothing in the console. Does anybody have an example or links, please?

const { app } = require('@azure/functions');
const fs = require('fs');
const path = require('path');

app.http('IC5', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        context.log(`Http function processed request for url "${request.url}"`);

        // const name = request.query.get('name') || await request.text() || 'world';
        // return { body: `Hello, ${name}!` };

        //var res = {
        //    body: "",
        //    headers: { "Content-Type": "text/html" }
        //};

        // readFile = require('../SharedCode/readFile.js');
        //filepath = __dirname + '/test3.html';
        //fs = require('fs');
        //await fs.readFile(filepath,function(error,content){

        fs.readFile(path.resolve('./test3.html'), 'UTF-8', (err, htmlContent) => {
            context.res = {
                status: 200,
                headers: { 'Content-Type': 'text/html' },
                body: htmlContent
            }
        })

        // if (request.query.name || (request.body && request.body.name)) {
        //     res.body = "<h1>Hello " + (request.query.name || request.body.name) + "</h1>";
        //} else {
        //    fs.readFile(path.resolve(__dirname,'test3.html'), 'UTF-8', (err, htmlContent) => {
        //        res.body = htmlContent;
        //        context.res = res;
        //    });
        // }
    }
});

//TEST IN POSTMAN: http://localhost:7071/api/IC5?name=hurry
icassiem · May 20, 2024 · Copper Contributor · 320 Views · 0 likes · 0 Comments
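(For comparison, a sketch of a handler that returns the file in the v4 Node programming model, where the handler's return value is the response. The callback-based fs.readFile above completes after the handler has already returned, so its assignment to context.res is never used; awaiting the read avoids that. The file path follows the question's code:)

const { app } = require('@azure/functions');
const fs = require('fs/promises');
const path = require('path');

app.http('IC5', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        // Await the file read so the handler does not return before it finishes.
        const htmlContent = await fs.readFile(path.join(__dirname, 'test3.html'), 'utf-8');

        // In the v4 model, whatever the handler returns becomes the HTTP response.
        return {
            status: 200,
            headers: { 'Content-Type': 'text/html' },
            body: htmlContent
        };
    }
});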
Issue with Azure AD B2C: Limited Customization of Sign-Up/Sign-In Flow with Custom Policies

We are experiencing a significant limitation with Azure AD B2C custom policies regarding the customization of the sign-up and sign-in flow. While Azure AD B2C offers some flexibility through custom policies, the extent of this customization is restricted. Specifically, we are unable to introduce entirely new custom form fields and are confined to the predefined flow provided by Azure AD B2C.

Detailed Explanation:
Predefined Form Fields: Azure AD B2C provides a set of predefined form fields for the sign-up and sign-in processes. These fields include typical information such as email, password, and basic user details. While we can choose which fields to display or hide and modify their order, adding new custom fields that are not supported by Azure AD B2C is not possible.
UI Customization: Azure AD B2C allows for customization of the user interface through HTML, CSS, and JavaScript. However, this customization is limited to styling and layout changes. We cannot alter the underlying structure or logic of the form fields provided by Azure AD B2C.
Custom Policies Limitations: Custom policies allow for modifying the user journey to some extent, such as integrating external systems, adding conditional logic, and performing claims transformations. Despite these capabilities, the core flow structure of the sign-up and sign-in processes remains fixed. Critical functionalities such as adding entirely new steps in the authentication process or significantly altering existing ones are restricted.

Impact on Our Implementation:
The inability to fully customize the sign-up and sign-in flow impacts our project in the following ways:
User Experience: We are unable to provide a seamless user experience tailored to our specific requirements.
Business Logic: Implementing custom business logic directly within the sign-up and sign-in process is challenging.
Integration: Integrating additional verification steps or custom fields that are crucial for our application's workflow is not feasible.

Request for Enhancement:
We request the following enhancements to Azure AD B2C custom policies:
Custom Form Fields: Allow the addition of entirely new custom form fields that can be defined and managed within the custom policies.
Flexible Flow Customization: Provide greater flexibility in altering the core flow structure of the sign-up and sign-in processes.
Enhanced UI Control: Allow for more comprehensive control over the UI elements, enabling the introduction of new fields and steps within the user journey.
These enhancements would significantly improve our ability to tailor Azure AD B2C to our specific needs and provide a better user experience.

Conclusion:
Azure AD B2C is a powerful identity management solution, but the current limitations on customizing the sign-up and sign-in flows restrict its potential. Addressing these issues would greatly benefit developers and businesses looking to create more customized and user-centric authentication experiences. Thank you for considering our request. We look forward to potential updates and enhancements that will help us leverage Azure AD B2C more effectively.
JainamK · May 20, 2024 · Copper Contributor · 302 Views · 0 likes · 0 Comments
Best Practices for API Error Handling: A Comprehensive Guide

APIs (Application Programming Interfaces) play a critical role in modern software development, allowing different systems to communicate and interact with each other. However, working with APIs comes with its challenges, one of the most crucial being error handling. When an API encounters an issue, it's essential to handle errors gracefully to maintain system reliability and ensure a good user experience. In this article, we'll discuss best practices for API error handling that can help developers manage errors effectively.

Why is API Error Handling Important?
API error handling is crucial for several reasons:
Maintaining System Reliability: Errors are inevitable in any system. Proper error handling ensures that when errors occur, they are handled in a way that prevents them from cascading and causing further issues.
Enhancing User Experience: Clear, informative error messages can help users understand what went wrong and how to resolve the issue, improving overall user satisfaction.
Security: Proper error handling helps prevent sensitive information from being exposed in error messages, reducing the risk of security breaches.
Debugging and Monitoring: Effective error handling makes it easier to identify and debug issues, leading to quicker resolutions and improved system performance.

Best Practices for API Error Handling

1. Use Standard HTTP Status Codes
HTTP status codes provide a standard way to communicate the outcome of an API request. Use status codes such as 200 (OK), 400 (Bad Request), 404 (Not Found), and 500 (Internal Server Error) to indicate the result of the request. Choosing the right status code helps clients understand the nature of the error without parsing the response body.

2. Provide Descriptive Error Messages
Along with HTTP status codes, include descriptive error messages in your API responses. Error messages should be clear, concise, and provide actionable information to help users understand the problem and how to fix it. Avoid technical jargon and use language that is understandable to your target audience.

3. Use Consistent Error Response Formats
Maintain a consistent format for your error responses across all endpoints. This makes it easier for clients to parse and handle errors consistently. A typical error response may include fields like status, error, message, code, and details, providing a structured way to convey error information (see the first sketch after practice 7).

4. Avoid Exposing Sensitive Information
Ensure that error messages do not expose sensitive information such as database details, API keys, or user credentials. Use generic error messages that do not reveal internal system details to potential attackers.

5. Implement Retry Logic for Transient Errors
For errors that are likely to be transient, such as network timeouts or service disruptions, consider implementing retry logic on the client side (see the second sketch after practice 7). However, retries should be implemented judiciously to avoid overwhelming the server with repeated requests.

6. Document Common Errors
Provide comprehensive documentation that includes common error codes, messages, and their meanings. This helps developers quickly identify and troubleshoot common issues without needing to contact support.

7. Use Logging and Monitoring
Implement logging and monitoring to track API errors and performance metrics. Logging helps you understand the root cause of errors, while monitoring allows you to proactively identify and address issues before they impact users.
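To make practices 1 through 4 concrete, here is a minimal sketch of a consistent error envelope, assuming a Node/Express service. Express itself, the route, and the ApiError class are illustrative choices, not from the article; the envelope fields (status, error, message, code, details) follow the format suggested in practice 3:

const express = require('express');
const app = express();

// An application-level error type carrying an HTTP status and a stable code.
class ApiError extends Error {
    constructor(status, code, message, details = []) {
        super(message);
        this.status = status;
        this.code = code;
        this.details = details;
    }
}

app.get('/orders/:id', (req, res, next) => {
    // Illustrative lookup; a real handler would query a data store first.
    next(new ApiError(404, 'ORDER_NOT_FOUND', 'No order exists with the given id.',
        [{ field: 'id', value: req.params.id }]));
});

// One error handler produces the same envelope for every endpoint,
// and never echoes stack traces or internal details to the client (practice 4).
app.use((err, req, res, next) => {
    const status = err.status || 500;
    res.status(status).json({
        status,
        error: status >= 500 ? 'Internal Server Error' : err.name,
        message: status >= 500 ? 'An unexpected error occurred.' : err.message,
        code: err.code || 'INTERNAL_ERROR',
        details: err.details || []
    });
});

app.listen(3000);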
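And for practice 5, with a nod to the 429 handling described in practice 8 below, a sketch of client-side retry with exponential backoff. It assumes a runtime with a global fetch (Node 18+ or a browser); the function name and endpoint are illustrative:

// Retry transient failures (HTTP 429 and 5xx) with exponential backoff.
async function fetchWithRetry(url, options = {}, maxAttempts = 4) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            const response = await fetch(url, options);
            // Only retry status codes that are plausibly transient.
            if (response.status !== 429 && response.status < 500) return response;
            if (attempt === maxAttempts) return response;
            // Honor the server's Retry-After header when present (practice 8),
            // otherwise back off exponentially: 1s, 2s, 4s, ...
            const retryAfter = Number(response.headers.get('retry-after'));
            const delayMs = retryAfter ? retryAfter * 1000 : 1000 * 2 ** (attempt - 1);
            await new Promise(resolve => setTimeout(resolve, delayMs));
        } catch (networkError) {
            // Network-level failures (timeouts, resets) are also transient.
            if (attempt === maxAttempts) throw networkError;
            await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
        }
    }
}

// Usage: fetchWithRetry('https://api.example.com/orders').then(r => r.json());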
8. Handle Rate Limiting and Throttling
Implement rate limiting and throttling to protect your API from abuse and ensure fair usage. Return appropriate error codes (e.g., 429 - Too Many Requests) when rate limits are exceeded, and provide guidance on how users can adjust their requests to comply with rate limits.

9. Provide Support for Localization
If your API serves a global audience, consider providing support for localization in your error messages. This allows users to receive error messages in their preferred language, improving the user experience for non-English speakers.

10. Test Error Handling
Finally, thoroughly test your API's error handling capabilities to ensure they work as expected. Test various scenarios, including valid requests, invalid requests, and edge cases, to identify and address potential issues.

Conclusion
Effective error handling is essential for building reliable and user-friendly APIs. By following these best practices, you can ensure that your API handles errors gracefully, provides meaningful feedback to users, and maintains high availability and security. Implementing robust error handling practices will not only improve the reliability of your API but also enhance the overall user experience.
Senthil · Mar 18, 2024 · Copper Contributor · 6.2K Views · 0 likes · 0 Comments

MICROSOFT FABRIC & CONTENT SAFETY: ANALYTICS ON METADATA
Content Moderation with Azure Content Safety and Blob Metadata for Analysis and Insights with Microsoft Fabric

We as IT professionals face a lot of challenges in everyday life, and problem solving is quite often a required skill spanning various situations and technologies. Some of these challenges are very particular, and I am talking about content and moderation. The Internet is everywhere and digital content is taking over, opening a Pandora's box, especially now that anyone with a computer can create and spread false or "unwanted" text, images, and video, to say the least. But we are fortunate enough to have countermeasures and to moderate that content, making the experience a little safer and more filtered, with the help of various tools, one of them being Azure Content Safety. In addition, we can use metadata, for example on photos, to perform analysis on the results. Enter Microsoft Fabric!

Intro
Microsoft Fabric is an end-to-end analytics solution with full-service capabilities including data movement, data lakes, data engineering, data integration, data science, real-time analytics, and business intelligence, all backed by a shared platform providing robust data security, governance, and compliance. Your organization no longer needs to stitch together individual analytics services from multiple vendors. Instead, use a streamlined solution that's easy to connect, onboard, and operate.

Azure AI Content Safety is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service, which scans text, images, and video and applies content flags automatically.

So we are going to build a React application where users upload photos and select some categories for them, Content Safety performs moderation flagging, and Microsoft Fabric brings to life analysis of the process and the results.

Build
For this workshop we need:
Azure Subscription
VSCode with Node.js
Content Safety resource from Azure AI Services
Azure Functions
Azure Container Registry
Azure Web App
Azure Logic Apps
Azure Storage Accounts
Microsoft Fabric (trial is fine)

Let's start with our front-end web application, React. React is easy to understand and very flexible and powerful. Our web app is a UI where users will upload photos, select a categorization for the photos, and submit. The process takes the photos to a Storage Account, and a storage trigger fires our Azure Function. Let's have a look at the required details. We have two blobs, 'uploads' and 'content', with container-level access, and we need to create a SAS token for the React app. Once we have this, let's add it into our .env file in React.
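(A sketch of the two .env entries the App.js below reads; the variable names come from the code, the values are placeholders, and since the code appends the token after a ?, it should be stored without a leading question mark:)

REACT_APP_STORAGE_ACCOUNT=<storage-account-name>
REACT_APP_SAS_TOKEN=<container-sas-token-without-leading-question-mark>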
The App.js is like this:

/* App.js */
import React, { useState } from 'react';
import { BlobServiceClient } from '@azure/storage-blob';
import logoIcon from './logo-icon.png';
import './App.css';

function App() {
    const [selectedCategories, setSelectedCategories] = useState({});
    const [file, setFile] = useState(null);
    const [message, setMessage] = useState('');
    const [isCategorySelected, setIsCategorySelected] = useState(false);

    const handleCheckboxChange = (event) => {
        const { value, checked } = event.target;
        setSelectedCategories(prev => {
            const updatedCategories = { ...prev, [value]: checked };
            // Check if at least one category is selected
            setIsCategorySelected(Object.values(updatedCategories).some(v => v));
            return updatedCategories;
        });
    };

    const handleFileChange = (event) => {
        const selected = event.target.files[0];
        setFile(selected);
        // Fix: event.target.file is undefined; report the chosen file's name instead.
        if (selected) {
            setMessage(`File "${selected.name}" selected!`);
        }
    };

    const handleSubmit = async (event) => {
        event.preventDefault();
        if (!file) {
            setMessage('Please select a file to upload.');
            return;
        }
        if (!isCategorySelected) {
            setMessage('Please select at least one category.');
            return;
        }

        const sasToken = process.env.REACT_APP_SAS_TOKEN;
        const storageAccountName = process.env.REACT_APP_STORAGE_ACCOUNT;
        const containerName = 'uploads';

        const blobServiceClient = new BlobServiceClient(
            `https://${storageAccountName}.blob.core.windows.net?${sasToken}`
        );

        // Concatenate the selected categories into a comma-separated string
        const categoriesMetadataValue = Object.entries(selectedCategories)
            .filter(([_, value]) => value)
            .map(([key]) => key)
            .join(',');

        const metadata = { 'Category': categoriesMetadataValue };

        try {
            const containerClient = blobServiceClient.getContainerClient(containerName);
            const blobClient = containerClient.getBlockBlobClient(file.name);
            await blobClient.uploadData(file, { metadata });
            setMessage(`Success! File "${file.name}" has been uploaded with categories: ${categoriesMetadataValue}.`);
        } catch (error) {
            setMessage(`Failure: An error occurred while uploading the file. ${error.message}`);
        }
    };

    return (
        <div className="App">
            <div className="info-text">
                <h1>Welcome to the Image Moderator App!</h1>
                JPEG, PNG, BMP, TIFF, GIF or WEBP; max size: 4MB; max resolution: 2048x2048 pixels
            </div>
            <form className="main-content" onSubmit={handleSubmit}>
                <div className="upload-box">
                    <label htmlFor="photo-upload" className="upload-label">
                        Upload Photo
                        <input type="file" id="photo-upload" accept="image/jpeg, image/png, image/bmp, image/tiff, image/gif, image/webp" onChange={handleFileChange} />
                    </label>
                </div>
                <div className="logo-box">
                    <img src={logoIcon} alt="Logo Icon" className="logo-icon" />
                    <div className="submit-box">
                        <button type="submit" disabled={!isCategorySelected} className="submit-button">Submit</button>
                    </div>
                </div>
                <div className="categories-box">
                    {['people', 'inside', 'outside', 'art', 'society', 'nature'].map(category => (
                        <label key={category}>
                            <span>{category}</span>
                            <input type="checkbox" name="categories" value={category} onChange={handleCheckboxChange} checked={!!selectedCategories[category]} />
                        </label>
                    ))}
                </div>
            </form>
            {message && <div className="feedback-message">{message}</div>} {/* Display feedback messages */}
            <div className="moderator-box">
                {/* Data returned from Moderator will be placed here */}
            </div>
        </div>
    );
}

export default App;

There is an accompanying App.css file that is all about style; I am going to post that to GitHub as well.
We can test our app with npm start, and if we are happy, it's time to deploy to the Web App service! We have our Azure Container Registry, and we need to log in, tag, and push our app. Don't forget we need Docker running and a Dockerfile as simple as this:

# Build stage
FROM node:18 AS build

# Set the working directory
WORKDIR /app

# Copy the frontend directory contents into the container at /app
COPY . /app

# Copy the environment file
COPY .env /app/.env

# Install dependencies and build the app
RUN npm install
RUN npm run build

# Serve stage
FROM nginx:alpine

# Copy the custom Nginx config into the image
# COPY custom_nginx.conf /etc/nginx/conf.d/default.conf

# Copy the build files from the build stage to the Nginx web root directory
COPY --from=build /app/build /usr/share/nginx/html

# Expose port 80 for the app
EXPOSE 80

# Start Nginx
CMD ["nginx", "-g", "daemon off;"]

And let's deploy:

az acr login --name $(az acr list -g rgname --query "[].{name: name}" -o tsv)
az acr list -g rg-webvideo --query "[].{name: name}" -o tsv

docker build -t myapp .
docker tag myapp ACRNAME.azurecr.io/myapp:v1
docker push ACRNAME.azurecr.io/myapp:v1

Once our app is pushed, go to your Container Registry, select our image from the registry, and deploy it to a Web App. Some additional settings are needed on our Storage Account, besides the container-level anonymous read access: the settings are about CORS, and we must add "*" to the allowed origins, with GET, PUT, and LIST as the allowed methods. Once we are ready, we can open our URL and upload a sample file to verify everything is working as expected.

Now we have a Function App to build. Create a new Function App with an App Service plan of B2 and .NET 6.0, since we are going to deploy C# code for the new trigger. We also need to add CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY to the Function App configuration, for our Azure AI Content Safety resource. From VSCode add a new function, set it to Blob Trigger, and here is the code for our function to call the Content Safety API and get the image moderation status.
We can see that we can set our safety levels depending on the case:

using System;
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.AI.ContentSafety;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;
using System.Linq;

namespace Company.Function
{
    public static class BlobTriggerCSharp1
    {
        [FunctionName("BlobTriggerCSharp1")]
        public static async Task Run(
            [BlobTrigger("uploads/{name}.{extension}", Connection = "AzureWebJobsStorage_saizhv01")] Stream myBlob,
            string name, string extension, ILogger log)
        {
            log.LogInformation($"Processing blob: {name}.{extension}");

            string connectionString = Environment.GetEnvironmentVariable("AzureWebJobsStorage_saizhv01");
            BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
            BlobClient blobClient = blobServiceClient.GetBlobContainerClient("uploads").GetBlobClient($"{name}.{extension}");

            string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT");
            string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
            ContentSafetyClient contentSafetyClient = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));

            ContentSafetyImageData image = new ContentSafetyImageData(BinaryData.FromStream(myBlob));
            AnalyzeImageOptions request = new AnalyzeImageOptions(image);

            try
            {
                Response<AnalyzeImageResult> response = await contentSafetyClient.AnalyzeImageAsync(request);
                var existingMetadata = (await blobClient.GetPropertiesAsync()).Value.Metadata;
                var categoriesAnalysis = response.Value.CategoriesAnalysis;
                bool isRejected = categoriesAnalysis.Any(a => a.Severity > 0); // Strict threshold

                string jsonResponse = System.Text.Json.JsonSerializer.Serialize(response.Value);
                log.LogInformation($"Content Safety API Response: {jsonResponse}");

                var metadataUpdates = new Dictionary<string, string>
                {
                    {"moderation_status", isRejected ? "BLOCKED" : "APPROVED"}
                };

                // Add metadata for each category with detected severity
                foreach (var category in categoriesAnalysis)
                {
                    if (category.Severity > 0)
                    {
                        metadataUpdates.Add($"{category.Category.ToString().ToLower()}_severity", category.Severity.ToString());
                    }
                }

                foreach (var item in metadataUpdates)
                {
                    existingMetadata[item.Key] = item.Value;
                }

                await blobClient.SetMetadataAsync(existingMetadata);
                log.LogInformation($"Blob {name}.{extension} metadata updated successfully.");
            }
            catch (RequestFailedException ex)
            {
                log.LogError($"Analyze image failed. Status code: {ex.Status}, Error code: {ex.ErrorCode}, Error message: {ex.Message}");
                throw;
            }
        }
    }
}

The filtering settings are configured within our code, and we can be as strict as we need; the corresponding setting is also visible in Azure AI Studio. We now have our custom metadata written to our image blobs. Next we need a way to extract it into a CSV or JSON file, so that Microsoft Fabric can later provide the analysis. Enter Logic Apps! With a simple trigger, either on a schedule or whenever a blob changes, we will execute our workflow!
The following is the whole code:

{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
          "interval": 1,
          "frequency": "Week",
          "timeZone": "GTB Standard Time",
          "schedule": { "weekDays": [ "Monday" ] }
        }
      }
    },
    "actions": {
      "Initialize_variable": {
        "type": "InitializeVariable",
        "inputs": { "variables": [ { "name": "Meta", "type": "array" } ] },
        "runAfter": {}
      },
      "Lists_blobs_(V2)": {
        "type": "ApiConnection",
        "inputs": {
          "host": { "connection": { "name": "@parameters('$connections')['azureblob']['connectionId']" } },
          "method": "get",
          "path": "/v2/datasets/@{encodeURIComponent(encodeURIComponent('xxxxxxx'))}/foldersV2/@{encodeURIComponent(encodeURIComponent('JTJmdXBsb2Fkcw=='))}",
          "queries": { "nextPageMarker": "", "useFlatListing": true }
        },
        "runAfter": { "Initialize_variable": [ "Succeeded" ] },
        "metadata": { "JTJmdXBsb2Fkcw==": "/uploads" }
      },
      "For_each": {
        "type": "Foreach",
        "foreach": "@body('Lists_blobs_(V2)')?['value']",
        "actions": {
          "HTTP": {
            "type": "Http",
            "inputs": {
              "uri": "https://strxxx.blob.core.windows.net/uploads/@{items('For_each')?['Name']}?comp=metadata&sv=2022-11-02&ss=bfqt&srt=sco&sp=rwdlacupiytfx&se=2023-11-30T21:56:52Z&st=2023-11-19T13:56:52Z&spr=https&sig=S4PlM4MJc9SI9e0iD5HlhJPZL3DWkwdEi%2BBIzbpLyX4%3D",
              "method": "GET",
              "headers": { "x-ms-version": "2020-06-12", "x-ms-date": "@{utcNow()}" }
            }
          },
          "Category": {
            "type": "Compose",
            "inputs": "@outputs('HTTP')['headers']['x-ms-meta-Category']",
            "runAfter": { "HTTP": [ "Succeeded" ] }
          },
          "Moderation": {
            "type": "Compose",
            "inputs": "@outputs('HTTP')['Headers']['x-ms-meta-moderation_status']",
            "runAfter": { "Category": [ "Succeeded" ] }
          },
          "ArrayString": {
            "type": "AppendToArrayVariable",
            "inputs": {
              "name": "Meta",
              "value": { "Category": "@{outputs('Category')}", "Moderation": "@{outputs('Moderation')}" }
            },
            "runAfter": { "Moderation": [ "Succeeded" ] }
          }
        },
        "runAfter": { "Lists_blobs_(V2)": [ "Succeeded" ] }
      },
      "Compose": {
        "type": "Compose",
        "inputs": "@variables('Meta')",
        "runAfter": { "For_each": [ "Succeeded" ] }
      },
      "Update_blob_(V2)": {
        "type": "ApiConnection",
        "inputs": {
          "host": { "connection": { "name": "@parameters('$connections')['azureblob']['connectionId']" } },
          "method": "put",
          "body": "@body('Create_CSV_table')",
          "headers": { "ReadFileMetadataFromServer": true },
          "path": "/v2/datasets/@{encodeURIComponent(encodeURIComponent('strxxxx'))}/files/@{encodeURIComponent(encodeURIComponent('/content/csvdata.csv'))}"
        },
        "runAfter": { "Create_CSV_table": [ "Succeeded" ] },
        "metadata": { "JTJmY29udGVudCUyZmNzdmRhdGEuY3N2": "/content/csvdata.csv" }
      },
      "Create_blob_(V2)": {
        "type": "ApiConnection",
        "inputs": {
          "host": { "connection": { "name": "@parameters('$connections')['azureblob']['connectionId']" } },
          "method": "post",
          "body": "@body('Create_CSV_table')",
          "headers": { "ReadFileMetadataFromServer": true },
          "path": "/v2/datasets/@{encodeURIComponent(encodeURIComponent('strxxxx'))}/files",
          "queries": { "folderPath": "/content", "name": "csvdata.csv", "queryParametersSingleEncoded": true }
        },
        "runAfter": { "Update_blob_(V2)": [ "Failed" ] }
      },
      "Create_CSV_table": {
        "type": "Table",
        "inputs": { "from": "@variables('csvData')", "format": "CSV" },
        "runAfter": { "csvData": [ "Succeeded" ] }
      },
      "csvData": {
        "type": "InitializeVariable",
        "inputs": {
          "variables": [ { "name": "csvData", "type": "array", "value": "@outputs('Compose')" } ]
        },
        "runAfter": { "Compose": [ "Succeeded" ] }
      }
    },
    "outputs": {},
    "parameters": { "$connections": { "type": "Object", "defaultValue": {} } }
  },
  "parameters": {
    "$connections": {
      "value": {
        "azureblob": {
          "id": "/subscriptions/xxxx/providers/Microsoft.Web/locations/westeurope/managedApis/azureblob",
          "connectionId": "/subscriptions/xxxxxxxxxxx/resourceGroups/rg-modapp/providers/Microsoft.Web/connections/azureblob",
          "connectionName": "azureblob",
          "connectionProperties": { "authentication": { "type": "ManagedServiceIdentity" } }
        }
      }
    }
  }
}

It is quite challenging to get custom metadata from your blobs, as it needs a crafted API call with specific headers. For example, look at this part of the code:

"uri": "https://strxxx.blob.core.windows.net/uploads/@{items('For_each')?['Name']}?comp=metadata&sv=2022-11-02&ss=bfqt&srt=sco&sp=rwdlacupiytfx&se=2023-11-30T21:56:52Z&st=2023-11-19T13:56:52Z&spr=https&sig=S4PlM4MJc9SI9e0iD5HlhJPZL3DWkwdEi%2BBIzbpLyX4%3D",
"method": "GET",
"headers": { "x-ms-version": "2020-06-12", "x-ms-date": "@{utcNow()}" }
...
"Category": {
  "type": "Compose",
  "inputs": "@outputs('HTTP')['headers']['x-ms-meta-Category']",
  "runAfter": { "HTTP": [ "Succeeded" ] }
}

And here is the flow. Again, notice that it is quite complex to get custom metadata from Azure Blob Storage: we need an HTTP call with specific headers, and the metadata comes back in response headers of the form x-ms-meta-{Custom Key}. Finally, our CSV is stored into a new container in our Storage Account!

Head over to Microsoft Fabric and create a new workspace and a new Copy task. With this task we move our data directly into the managed Lakehouse of the Fabric workspace, and we can run it on a schedule or with a trigger. Next we will create a semantic model, but first let's create a table from our CSV. Find your file in the Lakehouse selection; remember, you need to create a new Lakehouse in the workspace! Now select the file we inserted with the pipeline, and from the ellipsis menu select Load to Tables. Go to the Tables folder and create a new semantic model for the table. In the semantic model editing experience, you are able to define relationships between multiple tables, and also apply data type normalization and DAX transformations to the data if desired. Select New report on the ribbon and use the report builder experience to design a Power BI report. And there you have it! A complete application where we utilized Azure Content Safety and Microsoft Fabric to moderate and perform analysis on images that our users upload!

Conclusion
In this exploration, we've journeyed through the intricate and powerful capabilities of Azure Content Safety and its seamless integration with custom metadata, culminating in robust analysis using Fabric. Our journey demonstrates not only the technical proficiency of Azure's tools in moderating and analyzing content but also underscores the immense potential of cloud computing in enhancing content safety and insights. By harnessing the power of Azure's content moderation features and the analytical prowess of Fabric, we've unlocked new frontiers in data management and analysis. This synergy empowers us to make informed decisions, ensuring a safer and more compliant digital environment.

GitHub Repo: Content Safety with Custom Metadata
Architecture:
KonstantinosPassadis · Mar 03, 2024 · Learn Expert · 2.6K Views · 0 likes · 0 Comments
Azure AI Language: Sentiment Analysis with Durable Functions

Implementing Sentiment Analysis with Azure AI Language and Durable Functions

Intro
In today's exploration, we delve into the world of Durable Functions, an innovative orchestration mechanism that elevates our coding experience. Durable Functions stand out by offering granular control over the execution steps, seamlessly integrating within the Azure Functions framework. This unique approach not only maintains the serverless nature of Azure Functions but also adds remarkable flexibility. It allows us to craft multifaceted applications, each capable of performing a variety of tasks under the expansive Azure Functions umbrella. Originating from the Durable Task Framework, widely used by Microsoft and various organizations for automating critical processes, Durable Functions represent the next step in serverless computing. They bring the power and efficiency of the Durable Task Framework into the serverless realm of Azure Functions, offering an ideal solution for complex, mission-critical workflows.

Alongside Azure Functions, we are going to build a Python Flask web application where users enter text and we get a sentiment analysis from Azure AI Language Text Analytics, while the results are stored into Azure Table Storage.

Requirements
For this workshop we need an Azure subscription, and we are using VSCode with Azure Functions Core Tools. We are building an Azure Web App to host our Flask UI, Azure AI Language with the Python SDK for the sentiment analysis, Azure Durable Functions, and a Storage Account. The Durable Functions app has an HTTP trigger, the orchestrator, and two activity functions. The first activity is the API call that sends data to the Language endpoint, and the second stores the results into Azure Table Storage, where we can use them later for analysis and so on.

Build
Let's explore our elements, from the UI to each function.
Our UI is a Flask web app, and we have index.html served from our app.py program:

from flask import Flask, render_template, request, jsonify
import requests
import os
import time

app = Flask(__name__)

@app.route('/', methods=['GET'])
def index():
    return render_template('index.html')  # HTML file with input form

@app.route('/analyze', methods=['POST'])
def analyze():
    text = request.form['text']
    print("Received text:", text)

    function_url = os.environ.get('FUNCTION_URL')
    if not function_url:
        return jsonify({'error': 'Function URL is not configured'})

    # Trigger the Azure Function
    response = requests.post(function_url, json={'text': text})
    if response.status_code != 202:
        return jsonify({'error': 'Failed to start the analysis'})

    # Get the status query URL
    status_query_url = response.headers['Location']

    # Poll the status endpoint
    while True:
        status_response = requests.get(status_query_url)
        status_response_json = status_response.json()
        if status_response_json['runtimeStatus'] in ['Completed']:
            # The result should be directly in the output
            results = status_response_json.get('output', [])
            return jsonify({'results': results})
        elif status_response_json['runtimeStatus'] in ['Failed', 'Terminated']:
            return jsonify({'error': 'Analysis failed or terminated'})
        # Wait briefly between polls so we do not hammer the status endpoint
        time.sleep(1)

if __name__ == '__main__':
    app.run(debug=True)

<!DOCTYPE html>
<html>
<head>
    <title>Sentiment Analysis App</title>
    <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
    <img src="{{ url_for('static', filename='logo.png') }}" class="icon" alt="App Icon">
    <h2>Sentiment Analysis</h2>
    <form id="textForm">
        <textarea name="text" placeholder="Enter text here..."></textarea>
        <button type="submit">Analyze</button>
    </form>
    <div id="result"></div>
    <script>
        document.getElementById('textForm').onsubmit = async function(e) {
            e.preventDefault();
            let formData = new FormData(this);
            let response = await fetch('/analyze', { method: 'POST', body: formData });
            let resultData = await response.json();
            // Accessing the 'results' object from the response
            let results = resultData.results;
            if (results) {
                // Constructing the display text with sentiment and confidence scores
                let displayText = `Document: ${results.document}\nSentiment: ${results.overall_sentiment}\n`;
                displayText += `Confidence - Positive: ${results.confidence_positive}, Neutral: ${results.confidence_neutral}, Negative: ${results.confidence_negative}`;
                document.getElementById('result').innerText = displayText;
            } else {
                // Handling cases where results may not be present
                document.getElementById('result').innerText = 'No results to display';
            }
        };
    </script>
</body>
</html>

Durable Functions
There are currently four durable function types in Azure Functions: activity, orchestrator, entity, and client. In our deployment we are using:
Function 1 - HTTP Trigger (Client/Starter Function): Receives text input from the frontend and starts the orchestrator.
Function 2 - Orchestrator Function: Orchestrates the sentiment analysis workflow.
Function 3 - Activity Function: Calls the Azure AI Language Text Analytics API to analyze sentiment.
Function 4 - Activity Function: Stores results into Azure Table Storage.
And here is the code for each durable function, starting with the HTTP trigger:

# HTTP Trigger - the client/starter function
import logging
import azure.functions as func
import azure.durable_functions as df

async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    text = req.params.get('text')
    if not text:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            text = req_body.get('text')
    if text:
        instance_id = await client.start_new("SentimentOrchestrator", None, text)
        logging.info(f"Started orchestration with ID = '{instance_id}'.")
        return client.create_check_status_response(req, instance_id)
    else:
        return func.HttpResponse(
            "Please pass the text to analyze in the request body",
            status_code=400
        )

Following, the orchestrator:

# Orchestrator Function
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    document = context.get_input()  # Treat input as a single document
    result = yield context.call_activity("AnalyzeSentiment", document)
    # Call the function to store the result in Azure Table Storage
    yield context.call_activity("StoreInTableStorage", result)
    return result

main = df.Orchestrator.create(orchestrator_function)

The orchestrator fires the following activity functions: the sentiment analysis call, and the storing of the results to Azure Table Storage:

# Activity - Sentiment Analysis
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

def main(document: str) -> dict:
    endpoint = os.environ["TEXT_ANALYTICS_ENDPOINT"]
    key = os.environ["TEXT_ANALYTICS_KEY"]
    text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
    response = text_analytics_client.analyze_sentiment([document], show_opinion_mining=False)
    doc = next(iter(response))
    if not doc.is_error:
        simplified_result = {
            "overall_sentiment": doc.sentiment,
            "confidence_positive": doc.confidence_scores.positive,
            "confidence_neutral": doc.confidence_scores.neutral,
            "confidence_negative": doc.confidence_scores.negative,
            "document": document
        }
        return simplified_result
    else:
        return {"error": "Sentiment analysis failed"}

# Activity - Results to Table Storage
from azure.data.tables import TableServiceClient
import os
from datetime import datetime

def main(results: dict) -> str:
    connection_string = os.environ['AZURE_TABLE_STORAGE_CONNECTION_STRING']
    table_name = 'SentimentAnalysisResults'
    table_service = TableServiceClient.from_connection_string(connection_string)
    table_client = table_service.get_table_client(table_name)

    # Prepare the entity with a unique RowKey using a timestamp
    timestamp = datetime.utcnow().strftime('%Y%m%d%H%M%S%f')
    row_key = f"{results.get('document')}-{timestamp}"

    entity = {
        "PartitionKey": "SentimentAnalysis",
        "RowKey": row_key,
        "Document": results.get('document'),
        "Sentiment": results.get('overall_sentiment'),
        # The analysis activity returns three separate scores (there is no single
        # 'confidence' key in its result), so store all three of them.
        "ConfidencePositive": results.get('confidence_positive'),
        "ConfidenceNeutral": results.get('confidence_neutral'),
        "ConfidenceNegative": results.get('confidence_negative')
    }

    # Insert the entity
    table_client.create_entity(entity=entity)
    return "Result stored in Azure Table Storage"

Our serverless workshop is almost ready! We need to carefully add the relevant configuration values for each resource:
Azure Web Application: FUNCTION_URL, the HTTP start URL from the Durable Functions resource.
Durable Functions: TEXT_ANALYTICS_ENDPOINT, the Azure AI Language endpoint.
Durable Functions: TEXT_ANALYTICS_KEY, the Azure AI Language key.
Durable Functions: AZURE_TABLE_STORAGE_CONNECTION_STRING, the connection string for the Storage Account.

We need to create a Storage Account and a table, an Azure Web Application with an App Service plan, an Azure Durable Functions resource, and either a Cognitive Services multi-service account or an Azure AI Language resource. From VSCode, create a new Durable Functions project and the four durable functions, each one as mentioned above. Make sure to add the correct names in the bindings. For example, in the StoreInTableStorage function we have:

def main(results: dict) -> str:
    connection_string = os.environ['AZURE_TABLE_STORAGE_CONNECTION_STRING']
    table_name = 'SentimentAnalysisResults'
    ...

So in the function.json binding file, make sure to match the name given in our code:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "results",
      "type": "activityTrigger",
      "direction": "in"
    }
  ]
}

Add a system-assigned managed identity to the Function App resource and assign it the Storage Table Data Contributor role. Create the web application and deploy app.py to the Web App; make sure you have selected the directory where your app.py file exists. Add the configuration settings we described and hit the URL; you will be presented with the UI.

Let's break down the whole procedure, in addition to the flow we have seen above:
User enters text: It all starts when a user types a sentence or paragraph into the text box on your web page (the UI).
Form submission to the Flask app: When the user clicks the "Analyze" button, the text is sent from the web page to your Flask app. This happens via an HTTP POST request, triggered by the JavaScript code on your web page. The Flask app, running on a server, receives this text.
Flask app invokes the Azure Function: The Flask app then sends this text to an Azure Function by making another HTTP POST request, this time from the Flask app to the Azure Function's endpoint. The Azure Function is part of Azure Durable Functions, which are special types of Azure Functions designed for more complex workflows.
Processing in the Azure Durable Function: The text first arrives at the orchestrator function in your Durable Functions setup. This orchestrator function coordinates what happens to the text next. The orchestrator calls another function, typically known as an activity function, specifically designed for sentiment analysis; this activity function uses Azure AI Language to analyze the sentiment of the text. Once the activity function completes the sentiment analysis, it returns the results (whether the sentiment is positive, neutral, or negative, plus confidence scores) back to the orchestrator function.
Storing results (optional): If you've set it up, the orchestrator function then calls another activity function to store these results in Azure Table Storage for later use.
Results sent back to the Flask app: After processing (and optionally storing) the results, the orchestrator function sends the results back to your Flask app.
Flask app responds to the web page: Your Flask app receives the sentiment analysis results and sends them back to the web page as the response to the initial HTTP POST request.
Displaying results in the UI: Finally, the JavaScript code on your web page receives this response and updates the page to display the sentiment analysis results to the user.
And here is the data stored in our table. As you may understand, we can expand the solution to further analyze our data, add visualizations, and ultimately provide an enterprise-grade solution with Durable Functions at the heart of it! Our architecture is simple but powerful and extendable.

Closing
Modern solutions are built on innovative yet powerful offerings, and Azure Durable Functions can integrate seamlessly with every Azure service; even better, they orchestrate our code with ease, providing fast delivery, scalability, and security. Today we explored Azure AI Language with Text Analytics and sentiment analysis, and Durable Functions helped us deliver a multipurpose solution with the Azure Python SDK. Integration is key if we want to create robust and modern solutions without having to write hundreds of lines of code, and Azure is leading the way with cutting-edge, serverless PaaS offerings for us to keep building!

GitHub Repository: Sentiment Analysis with Durable Functions
KonstantinosPassadis · Mar 03, 2024 · Learn Expert · 844 Views · 0 likes · 0 Comments

Azure Vision AI – Object Detection Web App with Docker and Container Registry
Build a Python container image and deploy it via Azure Container Registry to Azure Web Apps for object detection, with Terraform, Docker, and scripts.

Intro
Our era is marked by amazing achievements and groundbreaking technologies. Artificial intelligence is one of them, and cloud vendors have been investing widely in new innovations, making AI twice as powerful and also reachable! Azure AI Services (Cognitive Services) are on the front line, delivering a range of products accessible to novice and experienced users alike, for development and production. Today we are exploring Azure AI Vision, with the Computer Vision API integrated into our web app for object detection. The approach extends to the use of Azure Container Registry for Web Apps: we build a Python application with Flask, containerize it with Docker, and configure continuous deployment with webhooks. So let's start!

Deployment
This is a deployment with Terraform, so let's have a look. We need our code editor, in our case VSCode, and our standard files. It is quite a big deployment, but IaC is here to help! Here is our very interesting main.tf:

# Create local variables to use later; execute bash script
locals {
  storage_account_url         = "https://${azurerm_storage_account.storage.name}.blob.core.windows.net/"
  cognitive_services_endpoint = azurerm_cognitive_account.cvision.endpoint
  vision_api_key              = azurerm_cognitive_account.cvision.primary_access_key
  acr_url                     = "https://${azurerm_container_registry.acr.login_server}/"
  ftp_username                = data.external.ftp_credentials.result["username"]
  ftp_password                = data.external.ftp_credentials.result["password"]
}

data "external" "ftp_credentials" {
  program    = ["bash", "${path.module}/find.sh"]
  depends_on = [azurerm_linux_web_app.webapp]
}

output "ftp_username" {
  value     = data.external.ftp_credentials.result["username"]
  sensitive = true
}

output "ftp_password" {
  value     = data.external.ftp_credentials.result["password"]
  sensitive = true
}

# Create randomness
resource "random_string" "str-name" {
  length  = 5
  upper   = false
  numeric = false
  lower   = true
  special = false
}

# Create a resource group
resource "azurerm_resource_group" "rgdemo" {
  name     = "rg-webvideo"
  location = "northeurope"
}

# Create virtual network
resource "azurerm_virtual_network" "vnetdemo" {
  name                = "vnet-demo"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rgdemo.location
  resource_group_name = azurerm_resource_group.rgdemo.name
}

# Create 2 subnets
resource "azurerm_subnet" "snetdemo" {
  name                 = "snet-demo"
  address_prefixes     = ["10.0.1.0/24"]
  virtual_network_name = azurerm_virtual_network.vnetdemo.name
  resource_group_name  = azurerm_resource_group.rgdemo.name
  service_endpoints    = ["Microsoft.Storage", "Microsoft.ContainerRegistry", "Microsoft.CognitiveServices"]
}

resource "azurerm_subnet" "snetdemo2" {
  name                 = "snet-demo2"
  address_prefixes     = ["10.0.2.0/24"]
  virtual_network_name = azurerm_virtual_network.vnetdemo.name
  resource_group_name  = azurerm_resource_group.rgdemo.name
  service_endpoints    = ["Microsoft.Storage", "Microsoft.ContainerRegistry", "Microsoft.CognitiveServices"]
  delegation {
    name = "delegation"
    service_delegation {
      name = "Microsoft.Web/serverFarms"
    }
  }
}

# Create a Storage Account
resource "azurerm_storage_account" "storage" {
  name                     = "s${random_string.str-name.result}01"
  resource_group_name      = azurerm_resource_group.rgdemo.name
  location                 = azurerm_resource_group.rgdemo.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Create a container
resource "azurerm_storage_container" "blob" {
  name                  = "uploads"
  storage_account_name  = azurerm_storage_account.storage.name
  container_access_type = "container"
}

# Create Azure Container Registry
resource "azurerm_container_registry" "acr" {
  name                          = "azr${random_string.str-name.result}"
  resource_group_name           = azurerm_resource_group.rgdemo.name
  location                      = azurerm_resource_group.rgdemo.location
  sku                           = "Premium"
  admin_enabled                 = true
  data_endpoint_enabled         = true
  public_network_access_enabled = true
  network_rule_set {
    default_action = "Deny"
    ip_rule {
      action   = "Allow"
      ip_range = "4.210.120.223/32"
    }
  }
}

output "acrname" {
  value = azurerm_container_registry.acr.name
}

# Create an App Service Plan
resource "azurerm_service_plan" "asp" {
  name                = "asp-${random_string.str-name.result}"
  resource_group_name = azurerm_resource_group.rgdemo.name
  location            = azurerm_resource_group.rgdemo.location
  os_type             = "Linux"
  sku_name            = "B3"
}

# WebApp
resource "azurerm_linux_web_app" "webapp" {
  name                = "wv${random_string.str-name.result}"
  location            = azurerm_resource_group.rgdemo.location
  resource_group_name = azurerm_resource_group.rgdemo.name
  service_plan_id     = azurerm_service_plan.asp.id
  logs {
    http_logs {
      file_system {
        retention_in_mb   = 35
        retention_in_days = 2
      }
    }
  }
  site_config {
    always_on              = true
    vnet_route_all_enabled = true
    application_stack {
      docker_image_name        = "videoapp:v20"
      docker_registry_url      = local.acr_url
      docker_registry_username = azurerm_container_registry.acr.admin_username
      docker_registry_password = azurerm_container_registry.acr.admin_password
    }
  }
  app_settings = {
    AZURE_ACCOUNT_URL                   = local.storage_account_url
    AZURE_CONTAINER_NAME                = "uploads"
    COMPUTERVISION_ENDPOINT             = local.cognitive_services_endpoint
    COMPUTERVISION_KEY                  = local.vision_api_key
    DOCKER_ENABLE_CI                    = "true"
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = "true"
    WEBSITE_PULL_IMAGE_OVER_VNET        = "true"
  }
  identity {
    type = "SystemAssigned"
  }
}

# VNET Integration
resource "azurerm_app_service_virtual_network_swift_connection" "vnetintegrationconnection" {
  app_service_id = azurerm_linux_web_app.webapp.id
  subnet_id      = azurerm_subnet.snetdemo2.id
}

# WebHook
resource "azurerm_container_registry_webhook" "whook" {
  actions             = ["push"]
  location            = azurerm_resource_group.rgdemo.location
  name                = "wh${random_string.str-name.result}"
  registry_name       = azurerm_container_registry.acr.name
  resource_group_name = azurerm_resource_group.rgdemo.name
  scope               = "videoapp:v20"
  service_uri         = "https://${local.ftp_username}:${local.ftp_password}@${azurerm_linux_web_app.webapp.name}.scm.azurewebsites.net/api/registry/webhook"
  depends_on          = [azurerm_linux_web_app.webapp]
}

# Create Computer Vision
resource "azurerm_cognitive_account" "cvision" {
  name                  = "ai-${random_string.str-name.result}01"
  location              = azurerm_resource_group.rgdemo.location
  resource_group_name   = azurerm_resource_group.rgdemo.name
  kind                  = "ComputerVision"
  custom_subdomain_name = "ai-${random_string.str-name.result}01"
  sku_name              = "F0"
  identity {
    type = "SystemAssigned"
  }
}

# Private DNS
resource "azurerm_private_dns_zone" "blobzone" {
  name                = "privatelink.blob.core.azure.com"
  resource_group_name = azurerm_resource_group.rgdemo.name
}

resource "azurerm_private_endpoint" "blobprv" {
  location            = azurerm_resource_group.rgdemo.location
  name                = "spriv${random_string.str-name.result}"
  resource_group_name = azurerm_resource_group.rgdemo.name
  subnet_id           = azurerm_subnet.snetdemo.id
  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.blobzone.id]
  }
  private_service_connection {
    is_manual_connection           = false
    name                           = "storpriv"
    private_connection_resource_id = azurerm_storage_account.storage.id
    subresource_names              = ["blob"]
  }
}

resource "azurerm_private_dns_zone_virtual_network_link" "bloblink" {
  name                  = "main"
  resource_group_name   = azurerm_resource_group.rgdemo.name
  private_dns_zone_name = azurerm_private_dns_zone.blobzone.name
  virtual_network_id    = azurerm_virtual_network.vnetdemo.id
}

resource "azurerm_private_dns_zone" "aizone" {
  name                = "privatelink.cognitiveservices.azure.com"
  resource_group_name = azurerm_resource_group.rgdemo.name
}

resource "azurerm_private_endpoint" "visionpriv" {
  location            = azurerm_resource_group.rgdemo.location
  name                = "vis${random_string.str-name.result}"
  resource_group_name = azurerm_resource_group.rgdemo.name
  subnet_id           = azurerm_subnet.snetdemo.id
  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.aizone.id]
  }
  private_service_connection {
    is_manual_connection           = false
    name                           = "visonpriv"
    private_connection_resource_id = azurerm_cognitive_account.cvision.id
    subresource_names              = ["account"]
  }
}

resource "azurerm_private_dns_zone_virtual_network_link" "ailink" {
  name                  = "main"
  resource_group_name   = azurerm_resource_group.rgdemo.name
  private_dns_zone_name = azurerm_private_dns_zone.aizone.name
  virtual_network_id    = azurerm_virtual_network.vnetdemo.id
}

resource "azurerm_private_dns_zone" "acrzone" {
  name                = "privatelink.azurecr.io"
  resource_group_name = azurerm_resource_group.rgdemo.name
}

resource "azurerm_private_endpoint" "acrpriv" {
  location            = azurerm_resource_group.rgdemo.location
  name                = "acr${random_string.str-name.result}"
  resource_group_name = azurerm_resource_group.rgdemo.name
  subnet_id           = azurerm_subnet.snetdemo.id
  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.acrzone.id]
  }
  private_service_connection {
    is_manual_connection           = false
    name                           = "acrpriv"
    private_connection_resource_id = azurerm_container_registry.acr.id
    subresource_names              = ["registry"]
  }
}

resource "azurerm_private_dns_zone_virtual_network_link" "acrlink" {
  name                  = "main"
  resource_group_name   = azurerm_resource_group.rgdemo.name
  private_dns_zone_name = azurerm_private_dns_zone.acrzone.name
  virtual_network_id    = azurerm_virtual_network.vnetdemo.id
}

# Assign RBAC role to WebApp
data "azurerm_subscription" "current" {}

resource "azurerm_role_assignment" "rbac1" {
  scope                = data.azurerm_subscription.current.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azurerm_linux_web_app.webapp.identity[0].principal_id
}

I know we could have broken this down into separate files per resource type, but I prefer to have it all in one file and understand what we are building here. Most importantly, this is not a production-ready deployment (not recommended, for example, without Key Vault), but we get the opportunity to see some cool features, like extracting the FTP username and password from the Azure Web App with a Bash program, and to watch values marked as sensitive never get revealed in our console.

So what have we built here? Let's look at our Terraform code to understand better: we are building an Azure infrastructure, ready-made, connected with all private endpoints and private DNS zones in place. We have also extracted some variables with a bash script to make a webhook fully integrated in our code, so the only thing we have to do is build our app and push it to Azure Container Registry! The webhook is already in sync with our Web App, which pulls the new image, and that's it! Cool, right?
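(The find.sh referenced by the external data source is not shown in the post. A hypothetical sketch of what such a script might look like: it prints the Web App's publishing credentials as the flat JSON object Terraform's external data source expects. The web app name is a placeholder, and the exact query shape is an assumption:)

#!/bin/bash
# Hypothetical find.sh: emit the Web App's publishing (FTP/SCM) credentials as
# {"username": "...", "password": "..."} so the webhook service_uri can use them.
az webapp deployment list-publishing-credentials \
  --name <your-webapp-name> \
  --resource-group rg-webvideo \
  --query "{username: publishingUserName, password: publishingPassword}" \
  -o json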
Our app is a Python 3.10 Flask web app served by Gunicorn, and let's explain what we are doing here: we have a simple web interface where users can browse a video library (Storage blob containers) or upload videos. On the browse page we can see thumbnails of our videos and an "Analyze" button under each video. Here is the magic! By the time Analyze is clicked, our Python app has already extracted frames from the video (every 5 seconds); the analysis selects a frame, sends it to our Computer Vision resource (Azure AI Vision), and object detection is performed, bringing the frame into our page with bounding boxes and text listing the detected object names.

We have one small thing to do: build and push our app as a Docker image to Azure Container Registry. So first log in and get the ACR name:

az acr login --name $(az acr list -g rg-webvideo --query "[].{name: name}" -o tsv)
az acr list -g rg-webvideo --query "[].{name: name}" -o tsv

Then build the image, tag it, and push it:

docker build -t videoapp .
docker tag videoapp azruoxwf.azurecr.io/videoapp:v20
docker push azruoxwf.azurecr.io/videoapp:v20

Be careful and consistent! I have my tag set to v20, so you must decide on these details ahead of time; they could really mess things up! So, we have our image built, tagged, and pushed. Because the dependencies go both ways, we need to make sure that the Web App can pull the image: go to Deployment Center and verify the setting; if needed, set it. Run the docker push azruoxwf.azurecr.io/videoapp:v20 command once more and restart the Web App.

Object Detection
Start by uploading a video; 10-20 seconds is enough. Then go to Browse (I have yet to add a button there to go directly) by returning to home and browsing. You will see the video there with full controls. Press Analyze and there you go! I suggest a P1V3 plan, or scaling rules starting with 2 x B3, so you won't get nasty failures in the UI; better keep that in mind. Another variation of this application is to perform streaming video analysis; while it may look similar, it is a whole new approach! I will share the Python code and leave the HTML/CSS to you!
from flask import Flask, render_template, request, redirect, url_for, flash
from azure.storage.blob import BlobServiceClient
from azure.identity import DefaultAzureCredential
from werkzeug.utils import secure_filename
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials
import requests
import os
import cv2
import io
import logging
import tempfile
import random
import numpy as np

app = Flask(__name__)
app.config['UPLOAD_EXTENSIONS'] = ['.mp4', '.mov', '.avi']
app.config['MAX_CONTENT_LENGTH'] = 50 * 1024 * 1024  # 50 MB
app.secret_key = '1q2w3e4r'

logging.basicConfig(filename='error.log', level=logging.ERROR)

computervision_client = ComputerVisionClient(os.getenv('COMPUTERVISION_ENDPOINT'), CognitiveServicesCredentials(os.getenv('COMPUTERVISION_KEY')))
blob_service_client = BlobServiceClient(account_url=os.getenv('AZURE_ACCOUNT_URL'), credential=DefaultAzureCredential())

vision_api_endpoint = os.getenv('COMPUTERVISION_ENDPOINT')
vision_api_key = os.getenv('COMPUTERVISION_KEY')
# Fix: the original passed the literal string 'COMPUTERVISION_KEY' as the credential
# instead of the key value read from the environment.
vision_client = ComputerVisionClient(vision_api_endpoint, CognitiveServicesCredentials(vision_api_key))

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/upload')
def upload():
    return render_template('upload.html')

@app.route('/upload', methods=['POST'])
def upload_file():
    file = request.files['file']
    if file:
        filename_with_ext = secure_filename(file.filename)
        filename, file_ext = os.path.splitext(filename_with_ext)
        if file_ext not in app.config['UPLOAD_EXTENSIONS']:
            flash('Invalid file type!')
            return redirect(url_for('upload'))

        # Save the file temporarily (use the filename with extension for the temp save)
        temp_path = os.path.join(tempfile.gettempdir(), filename_with_ext)
        file.save(temp_path)

        container_client = blob_service_client.get_container_client(os.getenv('AZURE_CONTAINER_NAME'))
        blob_client = container_client.get_blob_client(filename_with_ext)

        try:
            # Save video to Blob Storage
            with open(temp_path, 'rb') as f:
                blob_client.upload_blob(f, overwrite=True)

            # Open the video file from the temporary location
            cap = cv2.VideoCapture(temp_path)
            fps = int(cap.get(cv2.CAP_PROP_FPS))
            frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

            # Set the time interval (in seconds)
            time_interval = 5

            # Calculate the total number of intervals
            num_intervals = frame_count // (fps * time_interval)

            # Loop through each interval and save the corresponding frame
            for i in range(num_intervals + 1):  # +1 to include the last frame
                frame_number = i * time_interval * fps
                cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
                ret, frame = cap.read()
                if ret:
                    _, buffer = cv2.imencode('.jpg', frame)
                    img_byte_io = io.BytesIO(buffer)
                    frame_blob_name = f"{filename}/frame_{frame_number}.jpg"
                    frame_blob_client = container_client.get_blob_client(frame_blob_name)
                    frame_blob_client.upload_blob(img_byte_io, overwrite=True)

            cap.release()
            os.remove(temp_path)  # Remove the temporary file once done
            flash('File and frames uploaded successfully')
        except Exception as e:
            flash(f'An error occurred: {e}')
            return redirect(url_for('upload'))

        return redirect(url_for('upload'))

    flash('No file uploaded')
    return redirect(url_for('upload'))

def get_video_urls():
    container_client = blob_service_client.get_container_client(os.getenv('AZURE_CONTAINER_NAME'))
    blob_list = container_client.list_blobs()
    # Only fetch .mp4 files
    video_urls = [container_client.get_blob_client(blob.name).url for blob in blob_list if blob.name.endswith('.mp4')]
    return video_urls

@app.route('/browse')
def browse():
    video_urls = get_video_urls()
    return render_template('browse.html', video_urls=video_urls)

@app.route('/analyze', methods=['POST'])
def analyze_video():
    video_url = request.form.get('video_url')
    video_basename = os.path.basename(video_url).split('.')[0]

    container_client = blob_service_client.get_container_client(os.getenv('AZURE_CONTAINER_NAME'))
    blob_list = list(container_client.list_blobs(name_starts_with=video_basename))
    if not blob_list:
        flash('No frames found for the video.')
        return redirect(url_for('browse'))

    random_frame_blob = random.choice(blob_list)
    frame_blob_client = container_client.get_blob_client(random_frame_blob.name)
    frame_bytes = io.BytesIO(frame_blob_client.download_blob().readall())

    img_arr = np.frombuffer(frame_bytes.getvalue(), dtype=np.uint8)
    img = cv2.imdecode(img_arr, cv2.IMREAD_COLOR)

    analysis = computervision_client.analyze_image_in_stream(frame_bytes, visual_features=[VisualFeatureTypes.objects])

    detected_objects = []
    for detected_object in analysis.objects:
        # Draw bounding boxes around detected objects
        left = detected_object.rectangle.x
        top = detected_object.rectangle.y
        right = left + detected_object.rectangle.w
        bottom = top + detected_object.rectangle.h
        label = detected_object.object_property
        confidence = detected_object.confidence

        # Draw rectangle and label
        color = (255, 0, 0)
        cv2.rectangle(img, (left, top), (right, bottom), color, 2)
        font = cv2.FONT_HERSHEY_SIMPLEX
        label_size = cv2.getTextSize(label, font, 0.5, 2)[0]
        cv2.rectangle(img, (left, top - label_size[1] - 10), (left + label_size[0], top), color, -1)
        cv2.putText(img, f"{label} ({confidence:.2f})", (left, top - 5), font, 0.5, (255, 255, 255), 2)

        detected_objects.append({
            'label': label,
            'confidence': confidence
        })

    # Convert the image back to bytes to store in blob storage
    _, buffer = cv2.imencode('.jpg', img)
    img_byte_io = io.BytesIO(buffer)

    # Upload the result image to Blob Storage
    result_blob_name = f"{video_basename}/analyzed_frame.jpg"
    result_blob_client = container_client.get_blob_client(result_blob_name)
    result_blob_client.upload_blob(img_byte_io, overwrite=True)

    video_urls = get_video_urls()

    # Send the analysis results to the template
    analysis_results = {
        'analyzed_video_url': video_url,    # The URL of the video that was analyzed
        'img_url': result_blob_client.url,  # The URL of the analyzed frame
        'objects': detected_objects         # Detected objects and their details
    }
    return render_template('browse.html', video_urls=video_urls, analysis_results=analysis_results)

if __name__ == '__main__':
    app.run(debug=True)

Closing
I love working on ideas like this on Azure, and my limits are always pushed. But the final results are always more than rewarding, with Azure being such an amazing platform with strong dynamics, always evolving and taking us along for the ride, the experiences, and the chance to learn new things.

GitHub Repository: Azure AI Vision Object Detection
KonstantinosPassadis · Mar 03, 2024 · Learn Expert · 839 Views · 0 likes · 0 Comments

Semantic Kernel: Container Apps with React and Python
Build your custom AI solution by integrating Semantic Kernel into your Azure Container Apps. Today we are exploring Semantic Kernel and how we can use Azure OpenAI with total control over the prompts and the requests towards our chat deployments! Our project is a learning application that helps users learn different topics, with tutorials and quizzes fresh from Azure OpenAI. The same can be achieved using OpenAI's ChatGPT, but we love Azure, and we are going to use the Azure OpenAI resources!

Intro

Our solution has some prerequisites, so let's have a look:

- An Azure subscription with OpenAI enabled. Keep in mind you need to apply via a Microsoft form for Azure OpenAI access, so please make sure you have applied for the Azure OpenAI Service. Once approved, you will have access to GPT-4 in the following regions: Sweden Central, Canada East, Switzerland North.
- A workstation with Docker installed, Azure Functions Core Tools, and the Azure Account and Python extensions for VS Code. Also Node.js for our React implementation and testing.

Deployment

Let's start with a quick Terraform deployment to build our base resources: a resource group, Azure Container Registry, Application Insights and so on. I will provide main.tf, since the rest are quite standard.

# main.tf - Deploy core resources and services on Azure

# Create randomness for unique names
resource "random_string" "str-name" {
  length  = 5
  upper   = false
  numeric = false
  lower   = true
  special = false
}

# Create a resource group
resource "azurerm_resource_group" "rgdemo" {
  name     = "rg-myapp"
  location = "westeurope"
}

# Create Log Analytics workspace
resource "azurerm_log_analytics_workspace" "logs" {
  name                = "Logskp"
  location            = azurerm_resource_group.rgdemo.location
  resource_group_name = azurerm_resource_group.rgdemo.name
  sku                 = "PerGB2018"
  retention_in_days   = 30
}

# Create Application Insights
resource "azurerm_application_insights" "appinsights" {
  name                = "appin${random_string.str-name.result}"
  location            = azurerm_resource_group.rgdemo.location
  resource_group_name = azurerm_resource_group.rgdemo.name
  workspace_id        = azurerm_log_analytics_workspace.logs.id
  application_type    = "other"
}

output "instrumentation_key" {
  value     = azurerm_application_insights.appinsights.instrumentation_key
  sensitive = true
}

output "app_id" {
  value     = azurerm_application_insights.appinsights.app_id
  sensitive = true
}

# Create Azure Container Registry
resource "azurerm_container_registry" "acr" {
  name                          = "azr${random_string.str-name.result}"
  resource_group_name           = azurerm_resource_group.rgdemo.name
  location                      = azurerm_resource_group.rgdemo.location
  sku                           = "Premium"
  admin_enabled                 = true
  data_endpoint_enabled         = true
  public_network_access_enabled = true
}

# Create an App Service plan
resource "azurerm_service_plan" "asp" {
  name                = "asp-${random_string.str-name.result}"
  resource_group_name = azurerm_resource_group.rgdemo.name
  location            = azurerm_resource_group.rgdemo.location
  os_type             = "Linux"
  sku_name            = "B2"
}

# Create a storage account
resource "azurerm_storage_account" "storage" {
  name                     = "s${random_string.str-name.result}01"
  resource_group_name      = azurerm_resource_group.rgdemo.name
  location                 = azurerm_resource_group.rgdemo.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Create a Linux Function App
resource "azurerm_linux_function_app" "pyapp" {
  name                       = "py${random_string.str-name.result}"
  resource_group_name        = azurerm_resource_group.rgdemo.name
  location                   = azurerm_resource_group.rgdemo.location
  service_plan_id            = azurerm_service_plan.asp.id
  storage_account_name       = azurerm_storage_account.storage.name
  storage_account_access_key = azurerm_storage_account.storage.primary_access_key

  app_settings = {
    "WEBSITE_RUN_FROM_PACKAGE"              = "1"
    "APPLICATIONINSIGHTS_CONNECTION_STRING" = azurerm_application_insights.appinsights.connection_string
    "APPINSIGHTS_INSTRUMENTATIONKEY"        = azurerm_application_insights.appinsights.instrumentation_key
  }

  site_config {
    always_on = true
    application_stack {
      python_version = "3.11"
    }
    cors {
      allowed_origins = ["*"]
    }
  }
}

We are getting some traction here, but we are still on infrastructure deployment. Let's pick up some speed and start configuring our Docker images. We need one image for the React frontend and another for the Python backend, and we will use docker-compose so we can test the frontend and backend locally before deploying to the cloud. I won't dive into React here, but you can find the code on GitHub. In simple words, create a folder structure with separate frontend and backend directories, each with its own Dockerfile:

# Frontend - React
# Build stage
FROM node:18 AS build
# Set the working directory
WORKDIR /app
# Copy the frontend directory contents into the container at /app
COPY . /app
# Copy the environment file
COPY .env /app/.env
# Install dependencies and build the app
RUN npm install
RUN npm run build

# Serve stage
FROM nginx:alpine
# Copy the custom Nginx config into the image
COPY custom_nginx.conf /etc/nginx/conf.d/default.conf
# Copy the build files from the build stage to the Nginx web root directory
COPY --from=build /app/build /usr/share/nginx/html
# Expose port 80 for the app
EXPOSE 80
# Start Nginx
CMD ["nginx", "-g", "daemon off;"]

# Backend - Python
# Use the official Python image as the base image
FROM python:3.11
# Copy the backend directory contents into the container at /app
COPY . /app
# Set the working directory in the container to /app
WORKDIR /app
# Copy the environment file
COPY .env /app/.env
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Expose port 5000 for the app
EXPOSE 5000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "5000"]

You can either run them as separate containers or bring both up with docker-compose (a minimal sketch of the backend entry point follows the compose file):

# Docker Compose for both images
version: '3'
services:
  frontend:
    image: frontend:v10
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
  backend:
    image: backend:v10
    build:
      context: ../backend
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    environment:
      - QUART_ENV=development
    command: uvicorn app:app --host 0.0.0.0 --port 5000
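For context, here is a minimal sketch (an assumption for illustration, not code from the repo) of what the backend entry point that docker-compose runs with uvicorn could look like, assuming a Quart app (the compose file sets QUART_ENV) exposing the same /engage route and payload shape as the HTTP trigger we build next:

# backend/app.py - minimal sketch (names assumed)
from quart import Quart, jsonify, request

from shared.app import engage_logic  # the same helper the Azure Function imports below

app = Quart(__name__)

@app.route('/engage', methods=['POST'])
async def engage():
    data = await request.get_json()
    user_input = data.get('user_input', '')
    # engage_logic builds the Semantic Kernel prompt and calls Azure OpenAI
    answer = await engage_logic(user_input)
    return jsonify({"response": answer})

Because Quart is ASGI-native, the same async engage_logic can be awaited directly here, while the Azure Function below needs an event loop helper.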
Take it to Azure

It is time to move our project to its final destination: Azure! Let's start with the HTTP trigger, because we need its URL for the frontend React app to make its calls. We will use the V1 programming model, since at the time of writing the V2 model is still in preview. In VS Code (or any editor), following the structure above, we need a trigger that routes the payload to our Python app. So here is our function.json:

# function.json - HTTP trigger
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["post"],
      "route": "engage"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}

We also need to modify and tune our __init__.py, and it is really a display of how flexible and powerful Python is:

# __init__.py - Send the payload to the Python app's engage logic
import logging
import json
import asyncio

import azure.functions as func

from shared.app import engage_logic

def run_async_function(func_to_run, *args):
    new_loop = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(new_loop)
        return new_loop.run_until_complete(func_to_run(*args))
    finally:
        new_loop.close()

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    try:
        request_data = req.get_json()
        user_input = request_data['user_input']
    except ValueError:
        return func.HttpResponse("Invalid JSON input", status_code=400)
    # Use run_async_function to execute the async engage_logic
    answer_text = run_async_function(engage_logic, user_input)
    return func.HttpResponse(json.dumps({"response": answer_text}), status_code=200)

Here, run_async_function creates a new event loop, sets it as the current loop for its context, runs the async function func_to_run with the provided arguments, and closes the loop when done. This is what lets the synchronous V1 entry point call our async engage_logic. Nice!

Don't forget the requirements.txt in the function's root folder, and let's go! Deploy the new function to Azure Functions:

func azure functionapp publish <FunctionAppName>

Make sure the local.settings values are uploaded into Azure, or set them by hand; they are also available in the relevant GitHub repo. Now we have our Function App ready. Let's get our URL, either from the portal or from the Azure CLI:

az functionapp function show --name xxx --function-name httptrigger1 --resource-group rg-xxx --query invokeUrlTemplate --output tsv

Then off to the React app's .env file (available in GitHub) to add our endpoint, rebuild the image and push it to Azure Container Registry. First things first: the .env file tells the frontend where to send the payload once a user makes a request (hits a link), so enter the Function App URL there (a sample is in the repo). Then log in to ACR:

az acr login --name $(az acr list -g rg-myapp --query "[].{name: name}" -o tsv)

Build and tag our image and push it to ACR:

docker build -t learning-aid-app-frontend:latest .
docker tag learning-aid-app-frontend:latest azcont1.azurecr.io/frontend:v10
docker push azcont1.azurecr.io/frontend:v10

Our image is uploaded, so let's create an Azure Container Apps environment and a Container App from the registry. Search for Container Apps and create a new one, with an environment using the default settings. Pay attention to select the newly uploaded image, and configure Ingress (the frontend listens on port 80). I suggest you grab a certificate from Let's Encrypt or any other CA and upload it to serve your own domain: from the Container App's Custom domains blade, add your domain and upload your PFX. Follow the instructions and you are serving your domain from this Container App!

Finally, we are ready! Open your URL and hit one of the available links. We get a response from Azure OpenAI, with our prompts preconfigured through Semantic Kernel. The possibilities are numerous!
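If you want to smoke-test the Function endpoint outside the UI, a quick Python call could look like the sketch below; the URL is a placeholder to replace with your own invoke URL, while the route comes from function.json and the payload key from __init__.py above:

import requests

# Placeholder: use the invoke URL returned by the az functionapp command above
url = "https://<FunctionAppName>.azurewebsites.net/api/engage"
payload = {"user_input": "Write 50 words on Introduction to Algebra"}  # same prompt one of the React links sends

resp = requests.post(url, json=payload, timeout=120)
print(resp.status_code, resp.json().get("response"))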
Let’s have a look on the Python code ( Available in Git Hub ) snippet, which shows the connector and the prompt manufacturing : kernel = sk.Kernel() #kernel.add_chat_service( from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion #deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env() deployment = os.environ.get('AZURE_OPENAI_DEPLOYMENT_NAME') api_key = os.environ.get('AZURE_OPENAI_API_KEY') endpoint = os.environ.get('AZURE_OPENAI_ENDPOINT') kernel.add_chat_service("chat_completion", AzureChatCompletion(deployment, endpoint, api_key)) prompt_config = sk.PromptTemplateConfig.from_completion_parameters( max_tokens=6000, temperature=0.7, top_p=0.8 ) prompt_template = sk.ChatPromptTemplate( "{{$user_input}}", kernel.prompt_template_engine, prompt_config ) prompt_template.add_system_message(system_message) function_config = sk.SemanticFunctionConfig(prompt_config, prompt_template) chat_function = kernel.register_semantic_function("ChatBot", "Chat", function_config) The highlighted areas show the interesting parts, where we create the connection and select the flavor of our AI Chat. Also in the React App we add the actual Prompts: /*function Tutorials() { */ return ( <div className="tutorials"> <h1>Tutorials</h1> <ul className="tutorial-list"> <li> <a href="#tutorial1" className="tutorial-link" onClick={() => handleEngageClick('Write 50 words on Introduction to Algebra')} It is really amazing ! Integrating AI into our code and choosing what to request with a lot of flexibility. This is a small scale example, really the sky is the limit ! Tuning We can expand the Solution to narrow down allowed IPs for Azure Open AI. For example Azure Functions usually have a number of IP Addresses for Outbound communications and we can add these to Azure Open AI Allowed IPs , from the /16 networks. We can also utilize NAT Gateway for Azure Functions and use just one IP Address in our Open AI Network rules. Conclusion In wrapping up, our demo really brought to life the magic that happens when you toss Semantic Kernel, Azure Container Apps, and OpenAI into the mix together. Azure Container Apps showed us that managing our app can be a breeze, making sure it stays up and runs smoothly no matter what. On the other hand, diving into the Semantic Kernel was like taking a joy ride through the depths of semantic understanding, making sense of the context way better than before. And of course, bringing OpenAI into the party just took things to a whole new level. It’s like we gave our app a brain boost with the kind of AI smarts that OpenAI brings to the table. It’s pretty exciting to see how blending these tech pieces together in the demo paves the way for some cool, intelligent apps down the line. This combo of tech goodies not only shows off the cool stuff happening in cloud computing and AI, but also gets the wheels turning on what other cool innovations we could whip up next on platforms like Azure. GitHub : Learning Aid Web App with Semantic KernelKonstantinosPassadisMar 02, 2024Learn Expert2.2KViews0likes0Comments
Resources
Tags
- web apps (73 Topics)
- AMA (47 Topics)
- azure functions (37 Topics)
- Desktop Apps (10 Topics)
- Mobile Apps (9 Topics)
- azure kubernetes service (3 Topics)
- community (2 Topics)
- azure (1 Topic)
- Feature Request (1 Topic)
- Azure SignalR Service (1 Topic)