Do you want the benefits of a comprehensive service like Azure SQL without moving your data outside your infrastructure? Are you facing challenges in hosting and managing your data? Tired of dealing with obsolescence? Want to add a hassle-free service to your private cloud? Considering another provider that is inevitably less efficient with MSSQL? And by chance, are you using OpenShift? If so, you are exactly where you need to be.
OpenShift
In case you didn't know, OpenShift is a container management platform developed by Red Hat and based on Kubernetes. It provides everything you need to deploy, manage, and scale containerized applications. With its advanced tools and integrated environment, OpenShift simplifies application management, automates tasks, and ensures secure development and production environments.
Here, we will explore the deployment on an OpenShift cluster (yes, because we did this with one of my amazing clients); however, this would work on any CNCF-certified orchestrator.
We begin by deploying a cluster…
Before we begin, the objective here is to deploy a small sandbox quickly and easily to familiarize yourself with the subject. But once you are done here and want to delve deeper, I highly recommend Azure Arc Jumpstart, where you can find a number of advanced scenarios.
If you're following the procedure, I assume you already have your on-premises cluster. So, I'll see you in the next section.
If you're curious and came unprepared, I'll assume you already have your subscription and that you're familiar with the portal, the az CLI, PowerShell, Bicep, and all that good stuff...
For simplicity's sake, we'll simply deploy an OpenShift cluster on Azure. It follows the exact same principles as on-premises; the workloads will be just as isolated. However, since it's Azure, the deployment is turnkey, and I can go for a run while my script executes.
- We start by registering the two resource providers in the subscription that we'll need here:
# Register the Microsoft.RedHatOpenShift resource provider
az provider register --namespace Microsoft.RedHatOpenShift

# Register the Microsoft.AzureArcData resource provider
az provider register --namespace Microsoft.AzureArcData
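Registration can take a few minutes; you can check that it went through with a quick query (shown here for one of the two providers):

# Should return "Registered" once the provider is ready
az provider show --namespace Microsoft.AzureArcData --query registrationState -o tsv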
- First, we look for the Azure Red Hat OpenShift Cluster service.
- Next, you will need:
- A resource group
- A domain name
- A size for the master nodes
- A size and a number for the worker nodes
- Then you can click directly on Review + Create.
You'll see that a service principal is created; remember to copy its secret.
- And then click Create. (Though nothing stops you from tinkering with the other settings first.)
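If you'd rather script the cluster creation than click through the portal, a minimal az CLI sketch looks like this; all names are placeholders, and it assumes a virtual network with dedicated master and worker subnets already exists:

# Create an ARO cluster (placeholder names; the vnet and subnets must already exist)
az aro create --resource-group arcdsRG --name aro-sandbox --vnet aro-vnet --master-subnet master-subnet --worker-subnet worker-subnet --worker-count 3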
Now it's time for Azure Arc-enabled Data Services…
Now that everyone is on the same page, we deploy Azure Arc-enabled Data Services.
To quickly explain what it is: with Azure Arc-enabled Data Services, you can manage and monitor your SQL Managed Instance and PostgreSQL databases from a single location, whether they are on-premises, in the cloud, or at the edge. This service offers incredible flexibility by allowing hybrid and multi-cloud configurations while ensuring security, compliance, and high availability.
So, after a few minutes, the ARO cluster is ready.
- Click on Connect, and you will have access to the console link and the appropriate credentials.
- If you don't have it yet, download the oc CLI from there.
- Select Copy Login Command, then click Display Token.
- You'll have a ready-made command line to connect with `oc`.
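The copied command looks roughly like this (the token and API server URL are placeholders):

# Log in with the token copied from the console
oc login --token=sha256~<token> --server=https://api.<cluster-domain>:6443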
Now that you are authenticated (the login also added your cluster to .kube\config, which will be highly useful with Azure Arc-enabled Data Services, as you'll see), let's quickly fetch the necessary container images from mcr.microsoft.com.
In our case, we won't complicate things; it's just a sandbox. We'll continue using kubeadmin.
- You create a project
oc new-project azurearcdataservices-prj
- If you want to see the list of images to retrieve
(Invoke-RestMethod -Uri "https://mcr.microsoft.com/v2/_catalog" -Method Get).repositories.Where({$_ -like 'arcdata/*' -and $_ -notlike '*/preview/*' -and $_ -notlike '*/test/*'})
(FYI: No secret needed, the images are public)
And we use it to quickly generate the import commands (here we'll take the latest tags):
$ImageList = (Invoke-RestMethod -Uri "https://mcr.microsoft.com/v2/_catalog" -Method Get).repositories.Where({$_ -like 'arcdata/*' -and $_ -notlike '*/preview/*' -and $_ -notlike '*/test/*'})

foreach ($Image in $ImageList) {
    # Fetch the available tags for each image
    $Tags = (Invoke-RestMethod -Uri "https://mcr.microsoft.com/v2/$Image/tags/list" -Method Get).tags
    if ($Tags -contains 'latest') {
        $Tag = 'latest'
    }
    else {
        # Fall back to the highest minor version
        $Tag = ($Tags | Sort-Object { [int]($_ -split '\.')[1] } -Descending)[0]
    }
    Write-Host "oc import-image $($Image):$Tag --from=mcr.microsoft.com/$($Image):$Tag --confirm"
}
Refer to the release notes (Azure Arc-enabled data services - Release notes - Azure Arc | Microsoft Learn).
- Run the generated commands
For each image, you should see confirmation that the corresponding image stream was created or updated.
You can check that everything is in place with:
oc get is -n azurearcdataservices-prj
We have the images, now let's deploy them.
We'll use the az CLI and its arcdata extension for this.
- Run:
az login
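If the arcdata extension isn't installed yet, add it first:

# Install the arcdata extension for the az CLI
az extension add --name arcdata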
Select your subscription, then change the configuration so that arcdata fetches the images directly from your registry. Here, we're using ARO's internal registry; if you're using another registry, it will work as well.
Optional:
# Initialize the Arc data controller configuration from the local profile
az arcdata dc config init --source local --path ./custom

# Replace the configuration values with those for your local repository
az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<your-storage-class>"
az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<your-storage-class>"
az arcdata dc config replace --path ./custom/control.json --json-values "spec.docker.registry=<your-local-repo>"
az arcdata dc config replace --path ./custom/control.json --json-values "spec.docker.imagePullPolicy=IfNotPresent"
- Run:
az arcdata dc create --profile-name azure-arc-openshift --k8s-namespace arcdatans --name arcdatacontroller --subscription XXXXXXXX --resource-group arcdsRG --location uksouth --connectivity-mode indirect --use-k8s --infrastructure onpremises
At this point, it will prompt you for credentials for Grafana and Kibana (yes, they are deployed at the same time so you can monitor everything effortlessly). Enter them and make sure they are securely stored in your Azure Key Vault (or your KeePass 😊).
Then, configure the storage profile. On ARO, two storage classes are available by default, managed-csi and azurefile-csi; choose the one that best suits your needs.
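To see which storage classes your cluster actually exposes (useful if you're not on ARO):

# List the storage classes available on the cluster
oc get storageclass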
And now, let the magic happen… or go to the Home/Events section of the console and check on the progress.
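You can also follow the rollout from the command line; a couple of handy checks (the namespace matches the one passed to dc create):

# Watch the data controller pods come up
oc get pods -n arcdatans --watch

# Query the controller status via the arcdata extension
az arcdata dc status show --k8s-namespace arcdatans --use-k8s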
And that's it.
The DataController is created, allowing us to deploy our SQL MI instances.
How?
Using the az CLI, for instance, with minimal command line input, and answering the multiple-choice questions. (Otherwise, use parameters, which is better for automation. 😊)
(Full command reference: az sql mi-arc | Microsoft Learn)
az sql mi-arc create -n sqlmi1 --k8s-namespace arcdatans --use-k8s --cores-request 2 --memory-request 2Gi --tier gp
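Once the command returns, you can confirm the instance is up (and list any others in the namespace):

# List the Arc-enabled SQL Managed Instances in the namespace
az sql mi-arc list --k8s-namespace arcdatans --use-k8s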
And you can do the same with Azure Data Studio, using its excellent GUI. (Remember to install the Azure Arc extension)
- In Azure Arc Controllers, click on the plug icon, and
- Click on Manage
- Click on New Instance and choose Azure SQL Managed Instance
- Set it up comfortably in high availability. (A CLI equivalent is sketched just below.)
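For reference, a rough CLI equivalent of that high-availability setup is the Business Critical tier with multiple replicas; a sketch with illustrative values:

# Business Critical tier, 3 replicas for high availability (values are illustrative)
az sql mi-arc create -n sqlmi2 --k8s-namespace arcdatans --use-k8s --cores-request 2 --memory-request 4Gi --tier bc --replicas 3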
Then you let it run.
And while it's deploying, you can check the monitoring of the first managed instance in Grafana and Kibana (with the credentials from earlier).
And of course, once it's ready, you connect as usual.
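For example, with sqlcmd, using the external endpoint the CLI reports (endpoint and credentials are placeholders):

# Retrieve the instance details, including its primary endpoint
az sql mi-arc show -n sqlmi1 --k8s-namespace arcdatans --use-k8s

# Then connect and run a quick sanity check
sqlcmd -S <external-endpoint>,<port> -U <admin-user> -P <password> -Q "SELECT @@VERSION"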
Time to enjoy.
This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys' fees, that arise or result from the use or distribution of the Sample Code.

This sample script is not supported under any Microsoft standard support program or service. The sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
Updated Feb 14, 2025 (Version 1.0)