Forum Discussion
etekinalp
Aug 28, 2023 · Copper Contributor
Stable Diffusion WebUI on Azure ML compute instance: endpoint connection
Hello everybody,
I'm trying to make the following setup work on Azure ML compute instances:
On my local client machine (which could be Mac, Windows, or Linux) I have a Node.js-based Electron desktop app, and I want to send HTTPS GET and POST requests to an Azure ML compute instance, so that the Azure cloud machine runs the Python-based API version of automatic1111's Stable Diffusion web UI, which I successfully set up following this article from Vlad Iliescu:
How to run Stable Diffusion Web UI on Azure ML Compute Instances | Vlad Iliescu
It is, in the end, a compute instance where I git cloned the open-source GitHub repo of automatic1111's SD web UI, created a conda env, pip installed its torch/cudatoolkit packages plus about 40 other Python packages, and ran the webui.sh file, which launches a Gradio-based web UI on localhost where you can generate Stable Diffusion images.
This Azure compute instance generates the images and sends back a base64-encoded image response, which I convert to a PNG on my local client machine and save to the client's local storage. When the Stable Diffusion web UI launches on the Azure machine, it also starts a FastAPI-based REST API on localhost, port 7861.
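The client-side decode step can be sketched like this in Node.js. The `{ images: [...] }` response shape matches automatic1111's txt2img JSON, but the function name and output path here are my own placeholders, not anything from the original setup:

```typescript
import { writeFileSync } from "node:fs";

// Decode one base64-encoded image from the API response and write it
// to disk as a PNG. Returns the raw bytes for further processing.
function saveBase64Png(base64Image: string, outPath: string): Buffer {
  const bytes = Buffer.from(base64Image, "base64");
  writeFileSync(outPath, bytes);
  return bytes;
}

// Typical use: the API responds with { images: ["<base64>", ...] },
// and each entry is saved as its own file.
// response.images.forEach((img, i) => saveBase64Png(img, `out_${i}.png`));
```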
However, I'm not able to access the compute instance's localhost:7861 from my local client machine to send HTTPS requests back and forth. The automatic1111 web UI launch also provides a public gradio.live URL, but I don't want to use it since it's shared and public. So I turn the web UI off, which leaves me with the API-only port, localhost:7861.
In the end I want this to work for client companies in Germany, which are known for very strict security protocols; a public gradio.live link like "https://2e11d22a72837b885b.gradio.live" will not work at a professional production level for multiple B2B clients.
Ideally, the user should click a "connect" button in the Electron app, which connects to the Azure compute instance and communicates with Stable Diffusion's FastAPI on the instance's localhost port 7861. I want to reach the compute instance's localhost while keeping it isolated, private, and secure for the client, and while following Azure standards and proper ways of working with REST APIs. Each customer should get a unique private, or secured public, endpoint, ideally something like "https://customerXYZ-onAzure-port1234.com".
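For reference, the call the Electron app would make looks like the sketch below. The `/sdapi/v1/txt2img` path and the payload fields are automatic1111's API; the `SD_API_URL` base URL is a hypothetical stand-in for whatever private endpoint or tunnel eventually fronts the instance's localhost:7861:

```typescript
// Hypothetical base URL — placeholder for the private endpoint that
// will eventually front the compute instance's localhost:7861.
const BASE_URL = process.env.SD_API_URL ?? "http://localhost:7861";

// Build a minimal txt2img request body; these fields belong to
// automatic1111's /sdapi/v1/txt2img API.
function buildTxt2ImgPayload(prompt: string, steps = 20) {
  return { prompt, steps, width: 512, height: 512 };
}

// POST the payload and return the base64-encoded images from the response.
async function generateImages(prompt: string): Promise<string[]> {
  const res = await fetch(`${BASE_URL}/sdapi/v1/txt2img`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildTxt2ImgPayload(prompt)),
  });
  const data = await res.json();
  return data.images;
}
```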
Having said all that: how would I achieve this? I've researched many options: creating web apps (but I don't have a website, since this is a desktop app), an application gateway, setting up ACR or ACI and container registries for Docker images, SSH-ing into the machine (which I tried, but request calls didn't work, nor did stdin, and it didn't feel like the proper approach), load balancers, virtual networks, etc.
Thank you for your time,
Emre