What Is Kubernetes and Industry Use Cases of Kubernetes

WHAT IS KUBERNETES?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

THE NEED FOR KUBERNETES

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

What can you do with Kubernetes?

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You can describe the desired state for your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
  • Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks and tell it how much CPU and memory (RAM) each container needs. Kubernetes fits containers onto your nodes to make the best use of your resources.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration.
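Several of the features above show up directly as fields in a Deployment manifest. A minimal sketch — the name, image, and numbers are purely illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # illustrative name
spec:
  replicas: 3                  # Kubernetes keeps three Pods running (self-healing)
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate        # automated rollout at a controlled rate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # any container image works here
        resources:
          requests:            # used for automatic bin packing
            cpu: "250m"
            memory: "128Mi"
        readinessProbe:        # traffic is withheld until this check passes
          httpGet:
            path: /
            port: 80
```

Applying this with `kubectl apply -f deployment.yaml` declares the desired state; the control plane then reconciles the cluster toward it.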



Some Kubernetes terminology

As is the case with most technologies, language specific to Kubernetes can act as a barrier to entry. Let's break down some of the more common terms to help you better understand Kubernetes.

Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.

Nodes: These machines perform the requested tasks assigned by the control plane.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage from the underlying container. This lets you move containers around the cluster more easily.
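As a rough illustration of that shared context, a Pod with two containers can be declared directly — in practice Pods are usually created through higher-level controllers, and the names and images here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:1.25        # both containers share the Pod's IP and hostname
  - name: log-sidecar
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]   # placeholder sidecar process
```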

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.
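The replication controller concept is largely superseded by ReplicaSets (usually managed through Deployments), but the idea is the same: a desired replica count plus a label selector. A sketch with illustrative names:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3              # the controller keeps exactly three matching Pods alive
  selector:
    matchLabels:
      app: web
  template:                # Pod template used to create replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```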

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod—no matter where it moves in the cluster or even if it’s been replaced.
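That decoupling is expressed with a label selector rather than Pod names. A minimal sketch (service name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # clients reach the Pods via this stable name
spec:
  selector:
    app: web               # matches Pods by label, wherever they are scheduled
  ports:
  - port: 80               # port the Service exposes
    targetPort: 8080       # port the container actually listens on
```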

Kubelet: This service runs on nodes, reads the container manifests, and ensures the defined containers are started and running.

kubectl: The command-line configuration tool for Kubernetes.



Some use cases :

Kubernetes stands on the shoulders of giants, so to speak. Some key stepping stones and enablers that make Kubernetes possible and popular today include:

  • DevOps – a culture shift and automation tools built on the idea that you can increase speed AND service quality.
  • Virtualization – VMs abstract applications from infrastructure.
  • Infrastructure as code – configuration tools that help maintain the desired state.
  • Cloud computing – infrastructure services for rent, called via API.
  • Software-defined datacenter – compute, storage, and network via API in an on-premises infrastructure.
  • Containers – immutable images that bundle an application and all of its dependencies.

CI/CD Platform with Kubernetes

Kubernetes’ open API brings many advantages to developers. The level of control means developers can integrate Kubernetes into their automated CI/CD workflow effortlessly. So even while Kubernetes doesn’t provide any CI/CD features out of the box, it’s very easy to add Kubernetes to a CI/CD pipeline.

Kubernetes is an ideal platform for running CI/CD platforms, as it has plenty of features that make it easy to do so. How many CI/CD platforms can run on Kubernetes, you ask? As long as they can be packaged in a container, Kubernetes can run them. A few options work more closely with Kubernetes and are worth mentioning:

  • Jenkins: Jenkins is the most popular and most stable CI/CD platform. It’s used by thousands of enterprises around the world thanks to its vast ecosystem and extensibility. If you plan to use it with Kubernetes, it’s recommended to install the official plugin. Jenkins X is a version of Jenkins suited specifically to the cloud-native world; it operates more harmoniously with Kubernetes and offers integration features like GitOps, automated CI/CD, and preview environments.

Registry and Package Management — Helm/Terraform

A private registry server is an important function in that it stores Docker images securely. The registry enables image management workflow, with image signing, security, LDAP integration, and more. Package managers, such as Helm, provide a template (called a “chart” in Helm) to define, install, and upgrade Kubernetes-based applications.
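As a sketch of how Helm ties this together — the registry path, chart layout, and tag here are assumptions, not a specific product's defaults — the chart's templates pull values from a values file, and the pipeline overrides the image tag on each release:

```yaml
# values.yaml (illustrative) — defaults the chart templates read
image:
  repository: registry.example.com/myapp   # private registry path (assumption)
  tag: "1.4.2"                             # overridden by the pipeline per build
replicaCount: 3

# In templates/deployment.yaml, the image line would then read:
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

A pipeline step such as `helm upgrade --install myapp ./chart --set image.tag=$BUILD_TAG` then rolls the freshly pushed image out to the target pods.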

Once developers build their code successfully, they ideally use the registry to generate a Docker image, which is ultimately deployed using a Helm chart to a set of target pods.

This streamlines the CI/CD pipeline and release processes of Kubernetes-based applications. Developers can more easily collaborate on their applications, version code changes, ensure deployment and configuration consistency, ensure compliance and security, and roll back to a previous version if needed. Private Registry along with package management ensure that the right images are deployed into the right containers, and that security is integrated into the process as well.

Cluster Provisioning and Load Balancing

Production-grade Kubernetes infrastructure typically requires the creation of highly available, multi-master, multi-etcd Kubernetes clusters that can span availability zones in your private or public cloud environment. Provisioning these clusters usually involves tools such as Ansible or Terraform.

Once clusters have been set up and pods created for running applications, these pods are fronted by load balancers, which route traffic to the service. Load balancing is not a native capability of the open-source Kubernetes project, so you need to integrate products like the NGINX Ingress controller, HAProxy, or ELB (on an AWS VPC), or other tools that extend the Ingress plugin in Kubernetes to provide load balancing.
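With an Ingress controller such as NGINX installed in the cluster, that routing can be declared as an Ingress resource. The hostname and backend service name below are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx      # assumes the NGINX Ingress controller is deployed
  rules:
  - host: app.example.com      # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc      # traffic is routed to this Service's Pods
            port:
              number: 80
```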

Security

It goes without saying that security is a critical part of cloud-native applications and needs to be considered and designed for from the very start. Security is a constant throughout the container lifecycle, and it affects the design, development, DevOps practices, and infrastructure choices for your container-based application. A range of technology choices is available to cover areas such as application-level security and the security of the container and infrastructure itself. These range from role-based access control, multi-factor authentication (MFA), and authentication and authorization using protocols such as OAuth, OpenID, and SSO, to tools that provide certification and security for what goes inside the container itself (such as image registries, image signing, and packaging), CVE scans, and more.
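Role-based access control, mentioned above, is expressed in Kubernetes as Roles and RoleBindings. A minimal sketch granting a hypothetical user read-only access to Pods in one namespace (the namespace and user name are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]                  # "" selects the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                       # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```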



Companies using Kubernetes

Some of the biggest and most intensively used online platforms in the world — Shopify, Slack, eBay, Spotify, and the wildly popular Pokemon Go game — are powered by Kubernetes. Beyond internet businesses, diverse companies across the world such as China Unicom, Ant Financial, Comcast, Huawei, Blackrock, SAP, The New York Times, and Philips are all using Kubernetes. Even banks (usually the most cautious and stringent adopters of new technology) like Goldman Sachs and ING are using it.

SPOTIFY


Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. “Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community,” he says.

“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.

The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. “A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we’ve heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.

If you want to read more industry use cases of Kubernetes and see how Kubernetes is helping businesses grow, follow the link below.



https://kubernetes.io/case-studies/

I hope this article helps you understand the need for Kubernetes and its use cases in industry.

Thank you.

-by Manmohan

