Artificial Intelligence Tools
Evaluating Generative AI Models with Azure Machine Learning
LLM evaluation assesses the performance of a large language model on a set of tasks, such as text classification, sentiment analysis, question answering, and text generation. The goal is to measure the model's ability to understand and generate human-like language.
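To make that concrete, the sketch below shows the simplest possible task-level metric: exact-match accuracy on a question-answering set. This is an illustration only, not part of the Azure Machine Learning API; the input files predictions.txt and references.txt (one answer per line) are assumed to exist, for example as the output of batch-scoring a deployed model.

```bash
#!/usr/bin/env bash
# Score a question-answering run by exact match.
# Assumed inputs: predictions.txt and references.txt, one answer per line.
total=0; correct=0
while IFS=$'\t' read -r pred ref; do
  total=$((total + 1))
  [ "$pred" = "$ref" ] && correct=$((correct + 1))
done < <(paste predictions.txt references.txt)
echo "Exact-match accuracy: ${correct}/${total}"
```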
Deploy Open Web UI on Azure VM via Docker: A Step-by-Step Guide with Custom Domain Setup

Introduction
Open Web UI (often referred to as "Ollama Web UI" in the context of LLM frameworks like Ollama) is an open-source, self-hostable interface designed to simplify interactions with large language models (LLMs) such as GPT-4, Llama 3, Mistral, and others. It provides a user-friendly, browser-based environment for deploying, managing, and experimenting with AI models, making advanced language model capabilities accessible to developers, researchers, and enthusiasts without requiring deep technical expertise. This article walks through the step-by-step configuration for hosting Open WebUI on Azure.

Requirements:
- Azure Portal account (students can claim USD 100 in Azure cloud credits from this URL)
- Azure Virtual Machine running any Linux distribution
- Domain name and DNS host
- Caddy
- Open WebUI image

Step One: Deploy a Linux (Ubuntu) VM from the Azure Portal
Search for "Virtual Machine" in the Azure portal search bar and create a new VM by clicking "+ Create" > "Azure Virtual Machine". Fill out the form and select any Linux distribution image; in this demo, we will deploy Open WebUI on Ubuntu Pro 24.04. Click "Review + Create" > "Create" to create the virtual machine.

Tip: If you plan to download and host open-source AI models locally on your VM, you can save time by increasing the size of the OS disk or attaching a large data disk to the VM. You may also need a higher-performance VM size, since running a large language model (LLM) locally demands substantial resources.

Once the VM has been created, click "Go to resource" to open the VM's overview page. Note the public IP address and access the VM using the SSH credentials you just set up.

Step Two: Deploy Open WebUI on the VM via Docker
Once you are logged into the VM via SSH, run the Docker command below:

```bash
docker run -d --name open-webui --network=host --add-host=host.docker.internal:host-gateway -e PORT=8080 -v open-webui:/app/backend/data --restart always ghcr.io/open-webui/open-webui:dev
```

This command downloads the Open WebUI image onto the VM and listens for Open WebUI traffic on port 8080. Wait a few minutes and the web UI should be up and running. If you have set up an inbound network security group rule on Azure allowing port 8080 on your VM from the public Internet, you can access it by typing [PUBLIC_IP_ADDRESS]:8080 into the browser.
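Before moving on, it is worth a quick sanity check that the container came up cleanly. The commands below are a minimal sketch using the container name from the command above; the curl probe simply confirms something is answering on port 8080 from inside the VM:

```bash
# List the container, peek at its recent logs, and probe the UI locally.
docker ps --filter name=open-webui
docker logs --tail 20 open-webui
curl -I http://localhost:8080
```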
Step Three: Set up a custom domain using Caddy
Now we can set up a reverse proxy that maps a custom domain to [PUBLIC_IP_ADDRESS]:8080 using Caddy. Caddy is useful here because it provides automatic HTTPS, so you never have to worry about an expiring SSL certificate again, and it's free!

Download Caddy's dependencies, add its package repository, and install it with these commands:

```bash
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy
```

Once Caddy is installed, edit its configuration file at /etc/caddy/Caddyfile, delete everything in the file, and add the following lines:

```
yourdomainname.com {
    reverse_proxy localhost:8080
}
```

Restart Caddy:

```bash
sudo systemctl restart caddy
```

Next, create an A record at your DNS host and point it to the public IP of the server.

Step Four: Update the Network Security Group (NSG)
To allow public access to the VM via HTTPS, the VM's NSG/firewall must allow ports 80 and 443. Add these rules in Azure from the VM resource page you created for Open WebUI: under the "Networking" section, go to "Network settings" > "+ Create port rule" > "Inbound port rule", type 443 in the "Destination port ranges" field, and click "Add". Repeat these steps for port 80. Additionally, to improve security, you should prevent external users from interacting directly with Open WebUI's port (8080) by adding an inbound deny rule for that port. With that, you should be able to access Open WebUI from the domain name you set up earlier. (If you prefer scripting these rules instead of clicking through the portal, see the CLI sketch below.)
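A rough Azure CLI equivalent of the portal steps above is sketched here. The resource group and NSG names (myResourceGroup, myVmNsg) and the rule priorities are placeholders to adapt to your environment:

```bash
# Allow HTTP and HTTPS in, and deny direct access to Open WebUI's port 8080.
# myResourceGroup and myVmNsg are placeholder names; substitute your own.
az network nsg rule create --resource-group myResourceGroup --nsg-name myVmNsg \
  --name AllowHTTP --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 80
az network nsg rule create --resource-group myResourceGroup --nsg-name myVmNsg \
  --name AllowHTTPS --priority 110 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443
az network nsg rule create --resource-group myResourceGroup --nsg-name myVmNsg \
  --name DenyOpenWebUI --priority 120 --direction Inbound --access Deny \
  --protocol Tcp --destination-port-ranges 8080
```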
Conclusion
And just like that, you've turned a blank Azure VM into a sleek, secure home for your Open Web UI, no magic required! By combining Docker's simplicity with Caddy's "set it and forget it" HTTPS, you've not only made your app accessible via a custom domain but also tightened security by closing off risky ports and keeping traffic encrypted. Azure's cloud muscle handles the heavy lifting, while you enjoy the perks of a pro setup without the headache. If you are interested in using AI models deployed on Azure AI Foundry with Open WebUI via API, see my other article: Step-by-step: Integrate Ollama Web UI to use Azure Open AI API with LiteLLM Proxy.

How to Install GitHub Copilot in Visual Studio
[Original blog in English] GitHub Copilot is an AI-powered programming assistant that runs in several environments, helping you be more efficient in your day-to-day coding tasks. In this post, we'll look specifically at how GitHub Copilot works in Visual Studio and how it can boost your productivity.

Understanding the difference between GitHub Copilot and GitHub Copilot Chat: GitHub Copilot works directly in your code files, providing suggestions for your code. It works much like IntelliSense, but it can propose entire blocks of code based on what you are writing. It also provides access to commands, can explain code, and offers additional features directly on your files. GitHub Copilot Chat runs in a separate window inside Visual Studio. It provides a chat assistant that remembers the context of the conversation and offers intelligent suggestions. The two extensions are installed separately; we recommend trying both so you can choose the one you prefer. We'll cover each extension in more detail in future posts.

Installing the GitHub Copilot extensions
Both extensions can be installed directly from Visual Studio via the Extensions > Manage Extensions menu. From there, search for GitHub Copilot and GitHub Copilot Chat. You can also head to the Visual Studio Marketplace, which hosts a wealth of extensions to improve your Visual Studio experience. Note that GitHub Copilot requires Visual Studio 2022 17.5.5 or later.

More information
For more information, check out our collection of resources here. Stay tuned to this blog for more Visual Studio content, and of course, you can also subscribe to our YouTube channel for more!