Azure Local: How (and Why) to Run Kubernetes in Your Own Location
A Practical Guide to On-Premises AKS
Want to run cloud-native apps in your own datacenter or edge location? Azure Local lets you do just that. In this guide, we’ll explore how to run Kubernetes workloads on Azure Local. Once we have our cluster up and running, we’ll look at the benefits of running AKS on Azure Local.
What is Azure Local?
Azure Local brings Azure’s cloud features right to your datacenter or location. Think of it as having a piece of the Azure cloud running on your own servers. You can run many of the same services you’d use in Azure, but they live in your location instead of Microsoft’s datacenters.
Understanding AKS on Azure Local
When you run Azure Kubernetes Service (AKS) on Azure Local, you get a fully managed Azure Kubernetes cluster on your local hardware. The cluster is Azure Arc enabled, so you keep the familiar Azure management experience, including the portal, CLI, and APIs, while the workloads themselves run on servers you control.
Deployment
Let’s take a look at how we can deploy an AKS cluster using Azure Local.
First, we need our Azure Local instance to be set up, along with a few additional required resources, all living in a single resource group.
Explaining each of these resources in detail is beyond the scope of this article. In short, we have an Azure Local instance with an associated machine, which is the actual physical server we will run our cluster on. Lastly, we have a Custom Location.
Simply explained, a Custom Location is your own private Azure region. When deploying resources, instead of setting the location to “West Europe”, you can target this custom location to deploy your AKS cluster. And that’s exactly what we’ll do.
In a future article, I will explain how I deploy AKS clusters to our custom location using Bicep. For now, let’s focus on deploying it through the portal.
We start by navigating to our Azure Local resource. From there we can easily deploy Kubernetes clusters or virtual machines.
When creating the cluster, we are met with the same interface that we get when creating an AKS cluster on Azure. The key difference is that we select our Custom Location instead of an Azure region.
Another difference is that we create our cluster with a Logical Network instead of the usual Virtual Network.
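The same deployment can also be scripted with the Azure CLI. Here is a minimal sketch, assuming the aksarc CLI extension and using placeholder names and resource IDs; check the current extension documentation before relying on exact flags:

# Install the AKS Arc CLI extension
az extension add --name aksarc

# Create the cluster in the custom location, attached to a logical network.
# All names and IDs below are illustrative placeholders; depending on your
# setup, additional flags (such as an Entra admin group ID) may be required.
az aksarc create \
  --resource-group my-rg \
  --name my-local-aks \
  --custom-location my-custom-location \
  --vnet-ids /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.AzureStackHCI/logicalNetworks/my-logical-network \
  --node-count 2 \
  --generate-ssh-keys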
Note that this cluster is created on the local network of the datacenter the server is located in. If we want to expose the cluster beyond that network, we can use the MetalLB extension.
However, this is beyond the scope of this article and will be covered in our future blogs. Stay tuned for that!
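As a quick preview, MetalLB is enabled through an Arc cluster extension. A rough sketch, assuming the extension name and type used at the time of writing (these may change between releases):

# Install the Arc networking (MetalLB) extension on the connected cluster
az k8s-extension create \
  --resource-group my-rg \
  --cluster-name my-local-aks \
  --cluster-type connectedClusters \
  --name arcnetworking \
  --extension-type microsoft.arcnetworking

After that, you define a load balancer with a range of IP addresses from the local network, which MetalLB hands out to Kubernetes services of type LoadBalancer.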
When the deployment is finished, I can review my cluster resource from the portal.
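Connecting works much like it does with regular AKS. A sketch, again assuming the aksarc extension and placeholder names:

# Retrieve kubeconfig credentials for the local cluster
az aksarc get-credentials --resource-group my-rg --name my-local-aks

# Verify connectivity
kubectl get nodes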
After connecting to our cluster, everything is very similar to our familiar AKS. However, one key difference is the storage classes.
In Azure, we have the following:
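On a cloud AKS cluster, kubectl typically reports a whole family of built-in classes backed by Azure Disks and Azure Files; the exact names vary by AKS version, but the list usually looks something like this:

kubectl get storageclass
# NAME                    PROVISIONER
# azurefile               file.csi.azure.com
# azurefile-csi           file.csi.azure.com
# azurefile-csi-premium   file.csi.azure.com
# azurefile-premium       file.csi.azure.com
# default (default)       disk.csi.azure.com
# managed                 disk.csi.azure.com
# managed-csi             disk.csi.azure.com
# managed-csi-premium     disk.csi.azure.com
# managed-premium         disk.csi.azure.com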
But in our AKS cluster on Azure Local, we only have one by default:
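The same command on the local cluster typically returns a single class, backed by the local disk CSI driver:

kubectl get storageclass
# NAME                PROVISIONER
# default (default)   disk.csi.akshci.com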
Why the stark difference? On Azure AKS, storage is provisioned on Azure Disks or Azure Files shares. On our Azure Local AKS cluster, however, we use the disks that are directly attached to the server. This means we use a different storage class, backed in this case by the 'disk.csi.akshci.com' CSI provisioner.
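Consuming that storage looks the same as anywhere else in Kubernetes. A minimal sketch of a PersistentVolumeClaim against the default class (the claim name and size are placeholders):

# pvc.yaml: claim 10 GiB from the local disk-backed storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 10Gi

kubectl apply -f pvc.yaml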
By using this storage class we get direct access to the disks available in the Azure Local instance, which leads to much higher IOPS; we will benchmark and share the numbers in a future blog post.
While the technical differences between cloud and local AKS deployments might seem subtle, like the variation in storage classes we just explored, these distinctions have significant real-world implications. The direct access to local storage not only provides higher IOPS but also illustrates a broader point: running AKS locally gives you direct control over your infrastructure while maintaining the familiar Azure management experience.
This balance of control and convenience is precisely why many organizations are choosing to run Kubernetes workloads on their own hardware. Let's explore these compelling benefits in detail.
Key Benefits of Running AKS Locally
Now that we've deployed our local AKS cluster, let's explore why organizations choose to run Kubernetes on their own hardware instead of using the cloud. While cloud computing offers great flexibility, there are compelling reasons to keep your containers running on local infrastructure.
Predictable Performance
Running AKS locally gives you dedicated hardware resources that aren't shared with other companies. This means your applications get consistent, reliable performance without the variability that sometimes comes with cloud environments. It's like having your own dedicated highway instead of sharing the public roads - you can count on getting there at the same speed every time.
This becomes especially important when working with GPU-accelerated workloads. In the cloud, GPU-enabled virtual machines are expensive and sometimes not even available. With local AKS, you own the hardware, ensuring your GPU-dependent applications always have the resources they need.
Data and Network Efficiency
Moving large amounts of data to the cloud can create significant latency and bandwidth costs. However, running AKS locally allows you to process the data right where it's generated. This is particularly important when dealing with data-intensive operations like video streams, IoT sensor data, or real-time manufacturing systems.
For example, a manufacturing facility using computer vision for quality control can process video feeds locally, sending only important alerts or results to the cloud. This approach cuts down on bandwidth costs while maintaining the quick response times needed for real-time quality control decisions.
Enhanced Reliability
Local AKS deployments continue running even during internet outages, providing crucial autonomy for your operations. This independence is essential for environments that cannot tolerate downtime, such as factory floors, hospitals, and remote sites with unreliable connectivity.
When your applications run locally, network issues between you and the cloud provider don't impact your core operations. Your systems keep running smoothly, processing data and serving users without interruption.
Regulatory Compliance
Many industries face strict regulations about data location and sovereignty. Running AKS locally provides complete control over data residency while still leveraging modern container orchestration. This makes compliance straightforward for healthcare, financial, and government organizations that must maintain data within specific geographical or organizational boundaries.
AI and ML Workload Optimization
Local AKS clusters provide an ideal environment for AI and ML workloads, including large language models (LLMs). Using the KAITO operator, we can deploy machine learning models alongside our application containers, creating an efficient inference environment; a deployment sketch follows below. This setup offers several key advantages over cloud-based ML deployments.
First, having LLMs run locally eliminates the network latency that comes with cloud-based inference endpoints. Your applications can get AI-powered insights in near real-time, which is crucial for time-sensitive decisions.
Second, local deployment significantly reduces data transfer costs. Consider a retail chain using AI for inventory management: each store can process its camera feeds locally, only sending summarized insights to central systems instead of streaming raw video to the cloud.
Finally, local AKS ensures consistent resource availability for ML workloads. You maintain control over your ML infrastructure, guaranteeing that GPU resources are available when needed, without competing with other cloud users or dealing with regional capacity constraints.
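To make the KAITO setup concrete, deploying a model boils down to applying a Workspace resource once the operator is installed. A minimal sketch; the preset name and GPU instance type below are illustrative, so check the KAITO documentation for the presets and hardware supported in your environment:

# workspace.yaml: ask KAITO to provision GPU capacity and serve a preset model
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  instanceType: Standard_NC12s_v3
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: falcon-7b

kubectl apply -f workspace.yaml

Once the workspace is ready, KAITO exposes an inference endpoint inside the cluster that your application containers can call directly.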
Running AKS locally essentially gives you the benefits of a private cloud: modern container orchestration with the performance, control, and reliability of on-premises infrastructure.
Conclusion
In this article we’ve seen how easy it is to deploy Arc-enabled AKS clusters on local hardware through Azure Local. The main benefits of running AKS on your own hardware are related to availability, latency, and data control. This is especially useful for use cases where speed and uptime are of the utmost importance and where the data needs to remain on-site.
If you are interested in starting your Azure Local journey, head over to Splitbrain at https://splitbrain.com/contact/ and get in touch with us. We specialize in setting up cloud-native environments right where you need them.