USE CASES FOR KUBERNETES
Introduction
Kubernetes is a powerful open-source system, initially developed by Google, for managing containerized applications in a clustered environment. It aims to provide better ways of managing related, distributed components and services across varied infrastructure.
In this guide, we’ll discuss some of Kubernetes’ basic concepts. We will talk about the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling.
What is Kubernetes?
Kubernetes, at its basic level, is a system for running and coordinating containerized applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability.
As a Kubernetes user, you can define how your applications should run and the ways they should be able to interact with other applications or the outside world. You can scale your services up or down, perform graceful rolling updates, and switch traffic between different versions of your applications to test features or rollback problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and manage your applications with high degrees of flexibility, power, and reliability.
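To make that declarative model concrete, here is a minimal sketch using the official Kubernetes Python client: it scales an existing Deployment and then triggers a rolling update by changing its container image. The deployment name "web", the namespace "default", and the image "example.com/web:v2" are illustrative assumptions, not details from this text.

    # Minimal sketch, assuming a Deployment named "web" already exists in
    # namespace "default" and that cluster credentials are in ~/.kube/config.
    from kubernetes import client, config

    def scale_and_update(name="web", namespace="default"):
        config.load_kube_config()          # load local kubeconfig credentials
        apps = client.AppsV1Api()

        # Scale up: declare the desired replica count; Kubernetes reconciles.
        apps.patch_namespaced_deployment(
            name, namespace, {"spec": {"replicas": 10}})

        # Rolling update: change the container image; the Deployment controller
        # replaces Pods gradually so the service stays available throughout.
        apps.patch_namespaced_deployment(
            name, namespace,
            {"spec": {"template": {"spec": {"containers": [
                {"name": "web", "image": "example.com/web:v2"}]}}}})

    if __name__ == "__main__":
        scale_and_update()

The same two operations are available from the command line with kubectl scale and kubectl set image; the point is that you state the desired end state and the platform works out how to reach it.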
Why Kubernetes?
Docker adoption continues to grow rapidly as more and more companies run it in production. At that scale, an orchestration platform becomes essential for deploying, scaling, and managing containers.
Imagine a situation where you have been using Docker for a little while and have deployed it on a few different servers. Your application starts receiving massive traffic and you need to scale up fast: how will you go from 3 servers to the 40 you may need? How will you decide which container should run where? How will you monitor all of these containers and make sure they are restarted if they die? This is where Kubernetes comes in.
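As a rough illustration of how Kubernetes answers those questions, the sketch below (again the Kubernetes Python client, with a hypothetical "shop" application whose name and image are assumptions) declares 40 replicas and a liveness check in a single Deployment object. The scheduler decides which node each container lands on, and the kubelet restarts any container that fails its check, so no manual monitoring loop is needed.

    # Hedged sketch: one Deployment object expressing "run 40 healthy copies".
    from kubernetes import client, config

    def create_deployment():
        config.load_kube_config()
        apps = client.AppsV1Api()

        container = client.V1Container(
            name="shop",
            image="example.com/shop:1.0",
            # If this HTTP check fails, the kubelet restarts the container.
            liveness_probe=client.V1Probe(
                http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
                period_seconds=10))

        deployment = client.V1Deployment(
            metadata=client.V1ObjectMeta(name="shop"),
            spec=client.V1DeploymentSpec(
                replicas=40,  # going from 3 to 40 is a one-field change
                selector=client.V1LabelSelector(match_labels={"app": "shop"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "shop"}),
                    spec=client.V1PodSpec(containers=[container]))))

        apps.create_namespaced_deployment(namespace="default", body=deployment)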
"Kubernetes has the opportunity to be the new cloud platform. The amount of innovation that's going to come from being able to standardize on Kubernetes as a platform is incredibly exciting - more exciting than anything I've seen in the last 10 years of working on the cloud. "
--BOX
Kubernetes Use Cases
Spotify: An Early Adopter of Containers, Spotify Is Migrating from Homegrown Orchestration to Kubernetes
Challenge
Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. "Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today—and hopefully the consumers we'll have in the future," says Jai Chakrabarti, Director of Engineering, Infrastructure, and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that "having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community," he says.
Solution
"We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that," says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, "we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools." At the same time, the team wanted to contribute its expertise and influence to the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because "Kubernetes fit very nicely as a compliment and now as a replacement to Helios," says Chakrabarti.
Impact
The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. "A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we've heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify," says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, "Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes." In addition, with Kubernetes's bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
"We saw the amazing community that's grown up around Kubernetes, and we wanted to be part of that. We wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools."
— JAI CHAKRABARTI, DIRECTOR OF ENGINEERING, INFRASTRUCTURE AND OPERATIONS, SPOTIFY
Nokia: Enabling 5G and DevOps at a Telecom Company with Kubernetes
Challenge
Nokia's core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. "As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators have a bit different infrastructure," says Gergely Csatari, Senior Open Source Engineer. "There are operators who are running on bare metal. There are operators who are running on virtual machines. There are operators who are running on VMware Cloud and OpenStack Cloud. We want to run the same product on all of these different infrastructures without changing the product itself."
Solution
The company decided that moving to cloud-native technologies would allow teams to have infrastructure-agnostic behavior in their products. Teams at Nokia began experimenting with Kubernetes in pre-1.0 versions. "The simplicity of the label-based scheduling of Kubernetes was a sign that showed us this architecture will scale, will be stable, and will be good for our purposes," says Csatari. The first Kubernetes-based product, the Nokia Telephony Application Server, went live in early 2018. "Now, all the products are doing some kind of re-architecture work, and they're moving to Kubernetes."
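As a brief, hypothetical illustration of the label-based scheduling Csatari refers to, the sketch below (Kubernetes Python client; the names, label keys, and image are assumptions) builds a Pod whose node selector restricts it to nodes carrying a matching label. The same workload definition can then target bare metal, VMware, or OpenStack-backed nodes simply by labeling those nodes, without changing the product itself.

    # Hedged sketch of label-based placement: the Pod declares the labels a
    # node must carry, and the scheduler matches Pods to labeled nodes.
    from kubernetes import client

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="telco-app", labels={"app": "telco"}),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="telco", image="example.com/telco:1.0")],
            # Only nodes labeled infra=bare-metal are eligible; relabel the
            # nodes and the identical spec runs on a different infrastructure.
            node_selector={"infra": "bare-metal"}))

    # The object could then be submitted with:
    # client.CoreV1Api().create_namespaced_pod("default", pod)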
Impact
Kubernetes has enabled Nokia's foray into 5G. "When you develop something that is part of the operator's infrastructure, you have to develop it for the future, and Kubernetes and containers are the forward-looking technologies," says Csatari. The teams using Kubernetes are already seeing clear benefits. "By separating the infrastructure and the application layer, we have fewer dependencies in the system, which means that it's easier to implement features in the application layer," says Csatari. And because teams can test the exact same binary artifact independently of the target execution environment, "we find more errors in early phases of the testing, and we do not need to run the same tests on different target environments, like VMware, OpenStack, or bare metal," he adds. As a result, "we save several hundred hours in every release."
"When people are picking up their phones and making a call on Nokia networks, they are creating containers in the background with Kubernetes."
— GERGELY CSATARI, SENIOR OPEN SOURCE ENGINEER, NOKIA
Booking.com: After Learning the Ropes with a Kubernetes Distribution, Booking.com Built a Platform of Its Own
Challenge
In 2016, Booking.com migrated to an OpenShift platform, which gave product developers faster access to infrastructure. But because Kubernetes was abstracted away from the developers, the infrastructure team became a "knowledge bottleneck" when challenges arose. Trying to scale that support wasn't sustainable.
Solution
After a year operating OpenShift, the platform team decided to build its own vanilla Kubernetes platform—and ask developers to learn some Kubernetes in order to use it. "This is not a magical platform," says Ben Tyler, Principal Developer, B Platform Track. "We're not claiming that you can just use it with your eyes closed. Developers need to do some learning, and we're going to do everything we can to make sure they have access to that knowledge."
Impact
Despite the learning curve, there's been a great uptick in the adoption of the new Kubernetes platform. Before containers, creating a new service could take a couple of days if the developers understood Puppet, or weeks if they didn't. On the new platform, it can take as few as 10 minutes. About 500 new services were built on the platform in the first 8 months.
"As our users learn Kubernetes and become more sophisticated Kubernetes users, they put pressure on us to provide a better, more native Kubernetes experience, which is great. It's a super healthy dynamic."
— BEN TYLER, PRINCIPAL DEVELOPER, B PLATFORM TRACK AT BOOKING.COM