API REST
Step-by-step: Integrate Ollama Web UI to use Azure OpenAI API with LiteLLM Proxy
Introduction

Ollama WebUI is a streamlined interface for deploying and interacting with open-source large language models (LLMs) such as Llama 3 and Mistral. It lets users manage models, test them in a ChatGPT-like chat environment, and integrate them into applications through Ollama's local API. While it excels for self-hosted models on platforms like Azure VMs, it does not natively support Azure OpenAI API endpoints; OpenAI's proprietary models (e.g., GPT-4) remain accessible only through OpenAI's managed API. However, tools like LiteLLM bridge this gap, allowing developers to combine Ollama-hosted models with Azure OpenAI deployments in hybrid workflows while maintaining compliance and cost efficiency. This setup lets users leverage both self-managed open-source models and cloud-based AI services.

Problem Statement

As of February 2025, Ollama WebUI still does not support the Azure OpenAI API. Ollama WebUI only supports the self-hosted Ollama API and the managed OpenAI API service (PaaS). This is a problem for users who want to use OpenAI models they have already deployed on Azure AI Foundry.

Objective

To integrate the Azure OpenAI API into Ollama WebUI via a LiteLLM proxy. LiteLLM exposes an OpenAI-compatible endpoint to Ollama WebUI and translates those requests into Azure OpenAI API calls, allowing users to use OpenAI models deployed on Azure AI Foundry.

If you haven't hosted Ollama WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you already have Ollama WebUI deployed.

Step 1: Deploy OpenAI models on Azure AI Foundry

If you haven't created an Azure AI Hub already, search for Azure AI Foundry in the Azure portal and click the "+ Create" button > Hub. Fill out all the empty fields with the appropriate configuration and click "Create". After the Azure AI Hub is successfully deployed, click on the deployed resource and launch the Azure AI Foundry service.

To deploy new models on Azure AI Foundry, find the "Models + Endpoints" section on the left-hand side and click the "+ Deploy Model" button > "Deploy base model". A popup will appear where you can choose which models to deploy on Azure AI Foundry. Please note that the o-series models are only available to select customers at the moment. You can request access to the o-series models by completing this request access form and waiting until Microsoft approves the request.

Click "Confirm" and another popup will appear. Name the deployment and click "Deploy" to deploy the model. Wait a few moments for the model to deploy. Once it has successfully deployed, save the "Target URI" and the API key.

Step 2: Deploy LiteLLM Proxy via Docker Container

Before pulling the LiteLLM image into the host environment, create a file named "litellm_config.yaml" and list the models you deployed on Azure AI Foundry, along with their API endpoints and keys. Replace "API_Endpoint" and "API_Key" with the "Target URI" and "Key" found in Azure AI Foundry, respectively.

Template for the "litellm_config.yaml" file:
model_list:
  - model_name: [model_name]
    litellm_params:
      model: azure/[model_name_on_azure]
      api_base: "[API_ENDPOINT/Target_URI]"
      api_key: "[API_Key]"
      api_version: "[API_Version]"

Tip: you can find the API version at the end of the model endpoint's Target URI. Sample endpoint: https://example.openai.azure.com/openai/deployments/o1-mini/chat/completions?api-version=2024-08-01-preview

Run the Docker command below to start LiteLLM Proxy with the correct settings:

docker run -d \
  -v $(pwd)/litellm_config.yaml:/app/config.yaml \
  -p 4000:4000 \
  --name litellm-proxy-v1 \
  --restart always \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml --detailed_debug

Make sure to run the Docker command inside the directory where you created the "litellm_config.yaml" file. LiteLLM Proxy listens for traffic on port 4000.

Now that LiteLLM Proxy is running on port 4000, let's change the OpenAI API settings in Ollama WebUI. Navigate to Ollama WebUI's Admin Panel > Settings > Connections, and under the OpenAI API section, enter http://127.0.0.1:4000 as the API endpoint and set any value as the API key (the key field cannot be left empty). Click the "Save" button to apply the changes.

Refresh the browser and you should see the AI models deployed on Azure AI Foundry listed in Ollama WebUI. Now let's test the chat completion and Web Search capability using the "o1-mini" model in Ollama WebUI. (If you want to sanity-check the proxy directly first, a small sketch follows the Conclusion below.)

Conclusion

Hosting Ollama WebUI on an Azure VM and integrating it with the Azure OpenAI API via LiteLLM offers a powerful, flexible approach to AI deployment, combining the cost efficiency of open-source models with the advanced capabilities of managed cloud services. While Ollama WebUI itself doesn't support Azure OpenAI endpoints, this hybrid architecture lets IT teams balance data privacy (via self-hosted open-source models) and cutting-edge performance (via OpenAI models on Azure AI Foundry), all within Azure's scalable ecosystem. This guide covers every step required to deploy your OpenAI models on Azure AI Foundry, set up the required resources, deploy LiteLLM Proxy on your host machine, and configure Ollama WebUI to use Azure AI endpoints. You can then test and improve your AI workflows even further with Ollama WebUI features such as Web Search and text-to-image generation, all in one place.
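As a quick sanity check that LiteLLM is forwarding requests to Azure correctly, you can call the proxy's OpenAI-compatible endpoints directly before wiring up Ollama WebUI. The sketch below is a minimal example under this guide's assumptions: the proxy is listening on http://127.0.0.1:4000, no master key was configured, and a model named "o1-mini" was listed in litellm_config.yaml; adjust these to match your own setup.

import requests

# Assumptions from this guide's setup: proxy on port 4000, no master key,
# and a model named "o1-mini" declared in litellm_config.yaml.
PROXY_URL = "http://127.0.0.1:4000"
API_KEY = "anything"  # no master key configured, so any non-empty value works

# List the models the proxy loaded from litellm_config.yaml
models = requests.get(
    f"{PROXY_URL}/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
).json()
print([m["id"] for m in models["data"]])

# Send a minimal chat completion through the proxy to the Azure deployment
resp = requests.post(
    f"{PROXY_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "o1-mini",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

If the model list comes back empty or the completion call fails, check the container logs (the --detailed_debug flag makes misconfigured endpoints and keys easy to spot) before troubleshooting Ollama WebUI itself.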
A Comprehensive Guide to Getting Started with Data API Builder for Azure SQL Database or SQL Server

Learn about the capabilities of Data API Builder for Azure SQL Database or SQL Server. Follow our step-by-step guide to provisioning, deploying, and testing your API, and learn how to link your database to a simple HTML page. With Data API Builder, you can unlock the power of your data and improve your productivity.
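As a taste of what the guide covers, here is a minimal sketch of calling a Data API Builder REST endpoint. It assumes Data API Builder is already running locally on its default development URL and that an entity named "Book" (a placeholder) has been exposed over REST; substitute your own host and entity names.

import requests

# Assumptions: Data API Builder is running locally (e.g. via `dab start`) on its
# default development URL, and an entity named "Book" is exposed over REST.
BASE_URL = "http://localhost:5000"

# Data API Builder wraps result sets in a "value" array and supports
# OData-style query options such as $filter, $orderby, $first and $select.
resp = requests.get(
    f"{BASE_URL}/api/Book",
    params={"$first": 5, "$orderby": "title"},  # "title" is a placeholder column
)
resp.raise_for_status()
for book in resp.json()["value"]:
    print(book)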
Understanding Resource SKU restriction reason codes

Hello, I am using the Resource SKUs REST API (https://docs.microsoft.com/en-us/rest/api/compute/resource-skus/list?tabs=HTTP) to query the available VM sizes for each region, and I do not understand ResourceSkuRestrictionsReasonCode. There are two codes: NotAvailableForSubscription and QuotaId. For NotAvailableForSubscription, I see many deprecated sizes listed, which makes perfect sense, but I also see some sizes disappear and reappear from time to time when I make multiple requests, and I wonder whether this reason code is also used when the capacity for a VM size is reached. The problem is that I want the full list of all possible (non-deprecated) VM sizes; I am not interested in the actual capacity, and I believe this REST API is the right one to use. Can someone explain this flag?

The second code, QuotaId, is also a bit odd. From the name, it looks like it should restrict VM sizes my account doesn't have quotas for; however, my account only has basic quotas, yet I still see all of the VM sizes. I have literally zero quota restrictions. The documentation for this REST API is not very good, so I hope to find more information somewhere. Thanks!
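For anyone reproducing this, here is a minimal sketch of reading the same restriction data through the azure-mgmt-compute SDK, which wraps the Resource SKUs REST call shown above. The subscription ID and region are placeholders, and the filtering logic simply illustrates how the reasonCode values surface; it is not an authoritative interpretation of what each code means.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders: substitute your own subscription ID and region.
SUBSCRIPTION_ID = "<subscription-id>"
LOCATION = "eastus"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The Resource SKUs API returns every SKU the provider knows about; the
# restrictions list carries the reason a SKU is unusable for this subscription.
for sku in client.resource_skus.list(filter=f"location eq '{LOCATION}'"):
    if sku.resource_type != "virtualMachines":
        continue
    reasons = [r.reason_code for r in (sku.restrictions or [])]
    if "NotAvailableForSubscription" in reasons:
        continue  # restricted for this subscription/region, per the API
    print(sku.name, reasons)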
SharePoint API won't return readOnly fields

Hello everyone, we are trying to get all the attributes of a file in SharePoint using the API. When we call /_api/web/lists(...)/items and filter for the wanted file, we get all attributes like name, ID, the last modified date, and even custom fields, but none like file size and file type. We figured out that fields that are marked read-only are not returned by the calls we make. We also don't want to make more calls than necessary to get the attributes. Is there a way, or a single call, to get all the attributes a file has at once?
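One approach that is often suggested for this is to expand the underlying File object in the same items request, so read-only file properties come back in a single round trip. The sketch below is only an illustration: the site URL, list GUID, token, and selected field names are placeholders, and authentication is assumed to be handled elsewhere.

import requests

# Placeholders: substitute your own site URL, list GUID and access token;
# auth acquisition (MSAL, app-only, etc.) is assumed to happen elsewhere.
SITE_URL = "https://contoso.sharepoint.com/sites/example"
LIST_GUID = "<list-guid>"
ACCESS_TOKEN = "<access-token>"

# $expand=File pulls the underlying file object in the same call, so read-only
# file properties such as Length come back alongside the list item fields.
resp = requests.get(
    f"{SITE_URL}/_api/web/lists(guid'{LIST_GUID}')/items",
    params={
        "$select": "Id,Title,Modified,File/Name,File/Length",
        "$expand": "File",
    },
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json;odata=nometadata",
    },
)
resp.raise_for_status()
for item in resp.json()["value"]:
    print(item["File"]["Name"], item["File"]["Length"])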
Invoke-WebRequest : { "type": "Client error", "title": "Missing HTTP body" } - Error on API invoke

Dear all, I need your support with the error indicated in the subject. The Body parameter in the cmdlet contains the JSON data, but the request fails to execute:

Invoke-WebRequest -Uri 'xxxxx' -Method POST -Headers $headers -ContentType 'application/problem+json' -Body $body
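For comparison, here is a minimal sketch of the same kind of POST in Python. It does not reproduce the original error, but it shows the two things such APIs usually expect: a body that is actually serialized JSON (not an unserialized object or hashtable) and a request content type of application/json; application/problem+json is normally the media type of error responses (RFC 7807) rather than of requests. The URL and payload are placeholders.

import json
import requests

# Placeholders: substitute the real endpoint and payload.
URL = "https://example.com/api/resource"
payload = {"name": "example", "value": 42}

# The body is sent as serialized JSON with an explicit application/json
# content type, which is what most JSON APIs expect for POST requests.
resp = requests.post(
    URL,
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
print(resp.status_code, resp.text)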
OneDrive API - createLink with type "embed" returns dead URLs

Hello. I've just started using the OneDrive REST API (I'm using https://github.com/OneDrive/onedrive-sdk-python, if that's important). What I'm trying to accomplish is to get embed URLs for some pictures stored in OneDrive, but I run into a problem when using the createLink method with type = "embed": I receive a URL in the format "https://onedrive.live.com/embed?resid=abcdef&authkey=123456" that leads to a non-working page. If I get a link from the OneDrive website interface (using the "Embed" option), it works fine, but it's in a different format (https://yfzapq.am.files.1drv.com/abcdefg123456?height=504&width=504). Am I missing something? Is there a way to get working embed URLs via the OneDrive API?

PS: I've found that other people have hit the same issue - https://stackoverflow.com/questions/58805214/onedrive-rest-api-embed-download-url. I've also found that back in 2016 embed URLs were unavailable through the API (https://github.com/OneDrive/onedrive-api-docs/issues/102). Maybe nothing has changed since then.
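For reference, here is a minimal sketch of the same createLink action issued directly against Microsoft Graph (the current OneDrive REST surface) rather than through the Python SDK. The access token and item ID are placeholders, embed links are only supported for OneDrive personal, and the webHtml property (an iframe snippet) is what usually needs to be used for embedding rather than the bare webUrl.

import requests

# Placeholders: a valid Graph access token and the driveItem ID of the picture.
ACCESS_TOKEN = "<access-token>"
ITEM_ID = "<item-id>"

# createLink with type "embed" (OneDrive personal only) returns a sharing link;
# for embed links the response typically includes link.webHtml, an <iframe>
# snippet intended for embedding, in addition to link.webUrl.
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/me/drive/items/{ITEM_ID}/createLink",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"type": "embed"},
)
resp.raise_for_status()
link = resp.json()["link"]
print(link.get("webUrl"))
print(link.get("webHtml"))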