Azure Kubernetes Service
Azure Kubernetes Service (AKS) is Azure's managed Kubernetes offering, which many industries are benefiting from. In this article, we discuss what AKS is, how it works, and how industries benefit from it.
Before jumping to the use cases, let's understand what Azure and Kubernetes are.
What is Azure?
Azure is a public cloud platform that provides more than 200 services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
In the Azure public cloud, you request whatever infrastructure you need without worrying about management; Azure handles all the backend activities for you.
With the public cloud you also pay no upfront cost for infrastructure; you pay only for what you use.
What is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for container orchestration. It allows developers and engineers to manage containerized workloads and services through both declarative configuration and automation.
Basic benefits of Kubernetes include:
- Run distributed systems resiliently
- Automatically mount a storage system
- Automated rollouts and rollbacks
- Self-healing
- Secret and configuration management
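As a concrete illustration of the declarative model behind these benefits, here is a minimal sketch of a Kubernetes Deployment. You describe the desired state (three replicas of a web server), and Kubernetes works to maintain it, restarting or rescheduling containers that fail. The resource name and image are placeholders chosen for this example.

```shell
# Declare the desired state: 3 replicas of an nginx web server.
# Kubernetes will self-heal toward this state if containers fail.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web        # hypothetical name for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF

# A rolling update is just another declarative change:
kubectl set image deployment/hello-web web=nginx:1.26

# An automated rollback is one command:
kubectl rollout undo deployment/hello-web
```

These commands assume a reachable cluster and a configured kubectl context.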
Key Terms
API Server: Exposes the underlying Kubernetes API. This is how various management tools interact with the Kubernetes cluster.
Controller Manager: Watches the state of the cluster through the API server and when necessary makes changes attempting to move the current state towards the desired state.
Etcd: Highly available key-value store which maintains the Kubernetes cluster state.
Scheduler: Schedules unassigned pods to nodes and determines the optimal node to run your pod.
Node: A physical or virtual machine which is where Kubernetes runs your containers.
Kube-proxy: A network proxy that proxies requests to Kubernetes services and their backend pods
Pods: One or more containers logically grouped together, usually because they need to share the same resources.
Kubelet: Agent that processes orchestration requests and handles starting pods that have been assigned to its node by the scheduler.
Why Use Kubernetes?
When running containers in a production environment, they need to be managed to ensure they operate as expected and experience no downtime.
Container Orchestration:
Without container orchestration, if a container were to go down and stop working, an engineer would need to notice that the container had failed and manually start a new one. Wouldn't it be better if this were handled automatically by its own system? Kubernetes provides a robust declarative framework to run your containerized applications and services resiliently.
Cloud Agnostic: Kubernetes has been designed and built to be used anywhere (public/private/hybrid clouds)
Prevents Vendor Lock-In: Your containerized application and Kubernetes manifests will run the same way on any platform with minimal changes
Increase Developer Agility and Faster Time-to-Market: Spend less time scripting deployment workflows and more time developing. Kubernetes provides a declarative configuration that allows engineers to define how their service should run; Kubernetes then ensures the state of the application is maintained.
Cloud Aware: Kubernetes understands and supports a number of clouds such as Google Cloud, Azure, and AWS. This allows Kubernetes to instantiate various public cloud-based resources, such as instances, VMs, load balancers, public IPs, storage, etc.
Basics of Azure Kubernetes Services
Azure Kubernetes Service (AKS) is a fully-managed service that allows you to run Kubernetes in Azure without having to manage your own Kubernetes clusters. Azure manages all the complex parts of running Kubernetes, and you can focus on your containers.
Basic features include:
- Pay only for the nodes (VMs)
- Easier cluster upgrades
- Integrated with various Azure and OSS tools and services
- Kubernetes RBAC and Azure Active Directory Integration
- Enforce rules defined in Azure Policy across multiple clusters
- Scale your nodes automatically using the cluster autoscaler
- Expand your scale even further by scheduling your containers on Azure Container Instances
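A hedged sketch of what standing up such a cluster looks like with the Azure CLI; the resource group, cluster name, region, and autoscaler bounds below are placeholders, not a prescribed configuration.

```shell
# Create a resource group and an AKS cluster with the cluster
# autoscaler enabled (names and sizes are illustrative).
az group create --name myResourceGroup --location eastus

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5 \
  --generate-ssh-keys

# Fetch credentials so kubectl can talk to the cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```

Note that you pay only for the nodes created here; the control plane is managed by Azure.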
Azure Kubernetes Best Practices
Cluster Multi-Tenancy
- Logically isolate clusters to separate teams and projects, minimizing the number of physical AKS clusters you deploy
- A namespace allows you to isolate workloads inside a single Kubernetes cluster
- Apply the same best practices as a hub-spoke model, but within the Kubernetes cluster itself
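Namespace-based isolation is a one-liner in practice; "team-a" below is a placeholder for a team or project name.

```shell
# One physical cluster, one namespace per team or project
kubectl create namespace team-a

# Scope deployments and queries to that namespace only
kubectl -n team-a get pods
```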
Scheduling and Resource Quotas
- Enforce resource quotas – Plan out and apply resource quotas at the namespace level
- Plan for availability
- Define pod disruption budgets
- Limit resource-intensive applications – Apply taints and tolerations to constrain resource-intensive applications to specific nodes
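The controls above map onto standard Kubernetes objects. The following sketches are illustrative only: the namespace, quota limits, label selectors, and node name are all hypothetical.

```shell
# 1. A ResourceQuota capping CPU/memory requested in the "team-a" namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF

# 2. A PodDisruptionBudget keeping at least 2 replicas up during maintenance
kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: hello-web-pdb
  namespace: team-a
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: hello-web
EOF

# 3. Taint a node so only pods with a matching toleration land on it
kubectl taint nodes aks-gpupool-12345678-0 workload=gpu:NoSchedule
```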
Cluster Security
- Azure AD and Kubernetes RBAC integration
- Bind your Kubernetes RBAC roles with Azure AD Users/Groups
- Grant your Azure AD users or groups access to Kubernetes resources within a namespace or across a cluster
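Binding an Azure AD group to a Kubernetes role can be sketched as a standard RoleBinding whose subject is the group's object ID; the namespace and the all-zeros object ID below are placeholders.

```shell
# Grant an Azure AD group read-only access ("view" ClusterRole)
# within a single namespace.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-view
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "00000000-0000-0000-0000-000000000000"  # Azure AD group object ID (placeholder)
EOF
```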
Kubernetes Cluster Updates
- Kubernetes releases updates at a quicker pace than more traditional infrastructure platforms. These updates usually include new features, and bug or security fixes.
- AKS supports four minor versions of Kubernetes
- Upgrading an AKS cluster is as simple as executing an Azure CLI command. AKS handles a graceful upgrade by safely cordoning and draining old nodes to minimize disruption to running applications. Once the new nodes are up and containers are running, the old nodes are deleted by AKS.
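The upgrade workflow just described looks roughly like this with the Azure CLI; the resource names and target version are placeholders you would take from your own environment.

```shell
# List the Kubernetes versions the cluster can upgrade to
az aks get-upgrades \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --output table

# Upgrade to a version reported by the command above;
# AKS cordons/drains old nodes and replaces them gracefully.
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.27.3   # placeholder target version
```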
Node Patching
- Kubernetes runs your workload by placing containers into Pods that run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.
- Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have just one.
- The components on a node include the kubelet, a container runtime, and the kube-proxy.
Linux
- AKS automatically checks for kernel and security updates on a nightly basis and, if any are available, installs them on Linux nodes. If a reboot is required, AKS will not automatically reboot the node. A best practice for patching Linux nodes is to leverage kured (the Kubernetes reboot daemon), which looks for the existence of the /var/run/reboot-required file (created when a reboot is required) and automatically reboots the node during a predefined scheduled window.
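To make the mechanism concrete: the sentinel file check and a kured installation might look like the following. The Helm repository URL follows the kured project's published chart location; treat it as an assumption to verify against the project docs.

```shell
# The reboot signal on a Linux node: updates that need a reboot
# create this sentinel file, which kured watches for.
test -f /var/run/reboot-required && echo "Reboot required"

# Installing kured via its Helm chart (repo per the kured project)
helm repo add kubereboot https://kubereboot.github.io/charts
helm install kured kubereboot/kured --namespace kube-system
```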
Windows
- The process for patching Windows nodes is slightly different: patches aren't applied on a daily basis as they are for Linux nodes. Windows nodes must be updated by performing an AKS upgrade, which creates new nodes on the latest base Windows Server image and patches.
Pod Identities
- If your containers require access to the ARM API, there is no need to provide fixed credentials that must be rotated periodically. Azure's pod identities solution can be deployed to your cluster, allowing your containers to dynamically acquire access to Azure APIs and services through the use of Managed Identities.
Limit container access
- Avoid creating applications and containers that require escalated privileges or root access.
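This guidance translates directly into a pod securityContext; the sketch below uses standard Kubernetes fields, while the pod name and image are placeholders.

```shell
# A hardened pod spec: non-root user, no privilege escalation,
# read-only root filesystem.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app           # placeholder name
spec:
  containers:
  - name: app
    image: myregistry/app:1.0  # placeholder image
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
EOF
```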
Monitoring
- As AKS is already integrated with other Azure services, you can use Azure Monitor to monitor containers in AKS.
- Toggled based implementation, can be enabled after the fact or enforced via Azure Policy
- Multi and Cluster specific views
- Integrates with Log Analytics
- Ability to query historic data
- Analyze your Cluster, Nodes, Controllers, and Containers
- Alert on Cluster & Container performance by writing customizable Log Analytics search queries
- Integrate Application logging and exception handling with Application Insights
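As a sketch of the toggle-based enablement and the kind of Log Analytics query mentioned above: resource names are placeholders, and the query assumes the ContainerLog table that the monitoring addon populates.

```shell
# Enable Azure Monitor for containers on an existing cluster
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons monitoring

# Example Log Analytics (KQL) query for recent container error logs:
#   ContainerLog
#   | where TimeGenerated > ago(1h)
#   | where LogEntry contains "error"
#   | project TimeGenerated, Name, LogEntry
```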
Azure Kubernetes Service Use Cases
We'll take a look at different use cases where AKS can be used.
Migration of existing applications: You can easily migrate existing apps to containers and run them with Azure Kubernetes Service. You can also control access via Azure AD integration and SLA-based Azure Services like Azure Database using Open Service Broker for Azure (OSBA).
Simplifying the configuration and management of microservices-based Apps: You can also simplify the development and management of microservices-based apps as well as streamline load balancing, horizontal scaling, self-healing, and secret management with AKS.
Bringing DevOps and Kubernetes together: AKS is also a reliable resource to bring Kubernetes and DevOps together for securing DevOps implementation with Kubernetes. Bringing both together, it improves the security and speed of the development process with Continuous Integration and Continuous Delivery (CI/CD) with dynamic policy controls.
Ease of scaling: AKS can also be applied in many other use cases, such as ease of scaling by using Azure Container Instances (ACI) together with AKS. You can use the AKS virtual node to provision pods inside Azure Container Instances (ACI) that start within a few seconds, enabling AKS to run with the required resources. If your AKS cluster runs out of resources, it will scale out additional pods automatically without any additional servers to manage in the Kubernetes environment.
Data streaming: AKS can also be used to ingest and process real-time data streams with data points via sensors and perform quick analysis.
Case Study on BOSCH
When Robert Bosch GmbH set out to solve the problem of drivers going the wrong way on highways, the goal was to save lives. Other services like this existed in Germany, but precision and speed cannot be compromised. Could Bosch get precise enough location data—in real time—to do this? The company knew it had to try.
The result is the wrong-way driver warning (WDW) service and software development kit (SDK). Designed for use by app developers and original equipment manufacturers (OEMs), the architecture pivots on an innovative map-matching algorithm and the scalability of Microsoft Azure Kubernetes Service (AKS) in tandem with Azure HDInsight tools that integrate with the Apache Kafka streaming platform.
The right way to solve the wrong-way problem
The Bosch team had to solve two major issues: first, to get the last piece of information out of the noisy sensor data; and second, to develop a highly scalable and ultra-flexible service to process the data in near real time. The question was how to build a real-time data ingestion and processing pipeline capable of returning notifications to drivers within seconds.
The problem was speed. The team assumed that devices emitting location information, such as smartphone apps and automotive head units, could eventually send thousands of data points to the solution per second, from all over Europe and eventually other countries. Bosch needed lightning fast compute capable of filtering events and pushing a notification back to an end device within 10 seconds—the time estimated to make the solution viable.
A team of Microsoft cloud solution architects worked closely with Bosch engineers, who provided valuable feedback to Azure product teams. Microsoft continues to work with Bosch teams around the world. Working together, they devised a solution that produced the speed Bosch needed.
The key was orchestration. By orchestrating the deployment of containers using AKS, Bosch would get repeatable, manageable clusters of containers. Bosch already had a continuous integration (CI) and continuous deployment (CD) process to use in producing the container images and orchestration. The result: increased speed and reliability of deployments.
AKS provides the elastic provisioning that Bosch wanted, without the need to manage its own environment. The developers can deploy self-managed AKS clusters as needed, and they get the benefit of running their services within a secured network environment.
How the solution works
The wrong-way driver warning solution runs as a service on Azure and provides an SDK. Service providers, such as smartphone app developers and OEM partners, can install the WDW SDK to make use of the service within their products. The SDK maintains a list of hotspots within which GPS data is collected anonymously. These hotspots include specific locations, such as segments of divided highways and on-ramps. Every time a driver enters a hotspot, the client generates a new ID, so the service remains anonymous.
When a driver using a WDW-configured app or in-car system enters a hotspot, the WDW SDK begins to collect GPS signals and sensor events, such as acceleration and rotational data and heading information. These data points are packaged as observations and sent at a frequency of 1 hertz (Hz), one event per second, via HTTP to the WDW service on Azure, either directly or to the service provider's back end and then on to Azure. The SDK supports both routes so that service providers stay in charge of the data that is sent to the WDW system.
If the WDW service determines that the driver is going the wrong way within a hotspot, it sends a notification to the originating device and to other drivers in the vicinity who are also running an app with the WDW SDK.
Getting accuracy from GPS Data
The team’s biggest technical challenge was to improve the reliability of the incoming GPS data. Bosch developed a custom sensor data-fusion and map-matching algorithm to verify a driver’s location and driving direction. Then the algorithm filters all suspicious trips and forwards them to the alert validator app. This multistep classification approach was used to reduce the computational complexity required for a cost-effective solution architecture.
Additional Azure services
Bosch also used the following services:
- Azure API Management provides the gateway to the back end, through which observations from client devices are pushed, currently serving about 6 million requests per day.
- Azure App Service was used to build and host multiple internal front ends used by the team for debugging and monitoring. For example, a real-time dashboard shows all the drivers currently passing a hotspot. App Service supports both Windows and Linux and works with the team’s automated deployment pipeline.
- Azure Content Delivery Network (CDN) uses the closest point of presence (POP) server to cache static objects locally, thus reducing load times, saving bandwidth, and speeding responsiveness of the WDW service.
- Azure Databricks is an Apache Spark–based analytics platform designed to support team collaboration. It enables Bosch data scientists, data engineers, and business analysts to make the most of the WDW service’s big data pipeline.
Thanks for taking the time to read this article. I hope you got something extra from it. #KEEPLEARNING__KEEPSHARING