Tips and Best Practices for Load Testing Using Kubernetes

Load Testing Using Kubernetes

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available. (Source: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/)

Containers offer a lightweight alternative to running full virtual machine instances for applications, which makes them well-suited for rapid scaling of simulated clients. They are an excellent abstraction for running test clients because they are simple to deploy, immediately available, and suited to singular tasks.



When working with Kubernetes, you have to become familiar with concepts such as pods, services, and replication controllers. If you're not already familiar with these concepts, there are some excellent resources available to get up to speed. The Kubernetes documentation is a great place to start, since it has several guides for beginners.


Container clusters (nodes)


A container cluster is a group of Compute Engine instances that provides the foundation for your entire application. The Kubernetes Engine and Kubernetes documentation refer to these instances as nodes. A cluster comprises a single master node and one or more worker nodes. The master and workers all run Kubernetes, which is why container clusters are sometimes called Kubernetes clusters.


Pods/Slaves


A pod is a tightly coupled group of containers that should be deployed together. Some pods contain only a single container; in this solution, for example, each of the load testing containers runs in its own pod. Often, however, pods contain multiple containers that work together in some way.

For example, Kubernetes can use a pod with multiple containers to provide DNS services.
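As a concrete sketch, a two-container pod manifest might look like the following (the pod name, container names, and images are placeholders, not taken from the original solution):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-pod                # hypothetical pod name
      labels:
        role: dns
    spec:
      containers:
        - name: dns-server         # container answering DNS queries
          image: example/dns-server:latest    # placeholder image
          ports:
            - containerPort: 53
              protocol: UDP
        - name: dns-healthz        # sidecar performing periodic health checks
          image: example/dns-healthz:latest   # placeholder image

Both containers share the pod's network namespace, so the health-check sidecar can probe the DNS server on localhost.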




Replication controllers


A replication controller ensures that a specified number of pod "replicas" are running at any one time. If there are too many, the replication controller kills some pods; if there are too few, it starts more. For example, this solution uses three replication controllers: one ensures the existence of a single DNS server pod; another maintains a single master pod; and a third keeps exactly 10 worker pods running.


Services


A particular pod can disappear for a variety of reasons, including node failure or intentional node disruption for updates or maintenance. This means that the IP address of a pod does not provide a reliable interface for that pod. A more reliable approach would use an abstract representation of that interface that never changes, even if the underlying pod disappears and is replaced by a new pod with a different IP address. A Kubernetes Engine service provides this type of abstract interface by defining a logical set of pods and a policy for accessing them. In this solution, there are several services that represent pods or sets of pods. For example, there is a service for the DNS server pod, another service for the master pod, and a service that represents all 10 worker pods.
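As a sketch, a Service for the master pod could be declared like this (the name and selector labels mirror the labels described in the deployment section below; the port is an assumption):

    apiVersion: v1
    kind: Service
    metadata:
      name: load-master
    spec:
      selector:                 # the Service targets any pod carrying these labels
        name: load-master
        role: master
      ports:
        - name: web
          port: 8089            # assumed web interface port
          targetPort: 8089

Clients address the stable Service name and cluster IP; if the master pod is replaced, the Service automatically routes to the new pod.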


Deploying load testing tasks


To deploy the load testing tasks, you first deploy a load testing master and then deploy a group of ten load testing workers. With this many load testing workers, you can create a substantial amount of traffic for testing purposes. Keep in mind, however, that generating excessive amounts of traffic to external systems can resemble a denial-of-service (DoS) attack.


The load testing master


The first component of the deployment is the master, which is the entry point for executing the load testing tasks described above. The master is deployed as a replication controller with a single replica because we need only one master. A replication controller is useful even when deploying a single pod because it ensures the pod is recreated if it fails.


The configuration for the replication controller specifies several elements, including the name of the controller (Load-master), labels for organization (name: Load-master, role: master), and the ports* that need to be exposed by the container (for example 80, 3000, 1099, 8089, 8083, 8086, 9376, 5557, and 5558, used for communicating with workers, for load balancers, and for other bi-directional TCP/UDP/SCTP traffic). This information is later used to configure the workers controller.
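A sketch of such a controller follows, under a few assumptions: the image is a placeholder, only a few of the listed ports are shown, and the lowercase name load-master is used because Kubernetes object names must be lowercase DNS labels:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: load-master            # Kubernetes object names must be lowercase
      labels:
        name: load-master
        role: master
    spec:
      replicas: 1                  # only one master is needed
      selector:
        name: load-master
        role: master
      template:
        metadata:
          labels:
            name: load-master
            role: master
        spec:
          containers:
            - name: load-master
              image: example/load-master:latest   # placeholder image
              ports:
                - containerPort: 8089   # assumed web interface port
                - containerPort: 5557   # assumed worker communication port
                - containerPort: 5558   # assumed worker communication port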



*Multi-Port Services



Many Services need to expose more than one port. For this case, Kubernetes supports multiple port definitions on a Service object. When using multiple ports you must give all of your ports names, so that endpoints can be disambiguated.
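For example, a multi-port Service with named ports looks like this (adapted from the Kubernetes documentation; the app label and port numbers are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - name: http          # each port must be named when there is more than one
          protocol: TCP
          port: 80
          targetPort: 9376
        - name: https
          protocol: TCP
          port: 443
          targetPort: 9377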


Why not use round-robin DNS?


A question that pops up every now and then is why we do all this stuff with virtual IPs rather than just use standard round-robin DNS. There are a few reasons:


  • There is a long history of DNS libraries not respecting DNS TTLs and caching the results of name lookups.
  • Many apps do DNS lookups once and cache the results.
  • Even if apps and libraries did proper re-resolution, the load of every client re-resolving DNS over and over would be difficult to manage.



Choosing your own IP address



You can specify your own cluster IP address as part of a Service creation request.
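This is done through the spec.clusterIP field; the chosen address must fall within the cluster's configured service IP range, or the API server rejects the request. A minimal sketch (the address shown is an arbitrary example):

    apiVersion: v1
    kind: Service
    metadata:
      name: load-master
    spec:
      clusterIP: 10.0.171.239   # must lie within the service-cluster-ip-range
      selector:
        role: master
      ports:
        - port: 8089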



Discovering services



Kubernetes supports two primary modes of finding a Service: environment variables and DNS (for example, a Service named load-master in the default namespace resolves at load-master.default.svc.cluster.local).


Environment variables


When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service.
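The variables follow the documented {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT naming convention, with dashes converted to underscores and names uppercased. For a Service named load-master, for example (the values shown are illustrative):

    LOAD_MASTER_SERVICE_HOST=10.0.171.239
    LOAD_MASTER_SERVICE_PORT=8089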


Proxy-mode: ipvs


In ipvs mode, kube-proxy provides more options for the load-balancing algorithm (a sample configuration follows the list), such as:


  • rr: round-robin
  • lc: least connection
  • dh: destination hashing
  • sh: source hashing
  • sed: shortest expected delay
  • nq: never queue
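A sketch of selecting one of these schedulers in the kube-proxy configuration, here round-robin:

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      scheduler: "rr"      # any of the algorithms above: rr, lc, dh, sh, sed, nq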




The load testing workers


The next component of the deployment includes the Load workers, which execute the load testing tasks described above. The Load workers are deployed by a single replication controller that creates ten pods, spread out across the Kubernetes cluster. Each pod uses environment variables to control important configuration information, such as the hostname of the system under test and the hostname of the Load master. The configuration contains the name of the controller (Load-worker), labels for organization (name: Load-worker, role: worker), and the previously described environment variables; a sketch follows.
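In the sketch below, the image is a placeholder and the environment variable names (LOAD_MASTER_HOST, TARGET_HOST) are hypothetical, not taken from the original configuration:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: load-worker
      labels:
        name: load-worker
        role: worker
    spec:
      replicas: 10               # ten worker pods spread across the cluster
      selector:
        name: load-worker
        role: worker
      template:
        metadata:
          labels:
            name: load-worker
            role: worker
        spec:
          containers:
            - name: load-worker
              image: example/load-worker:latest   # placeholder image
              env:
                - name: LOAD_MASTER_HOST          # hypothetical variable name
                  value: load-master              # resolves via the master Service
                - name: TARGET_HOST               # hypothetical variable name
                  value: http://system-under-test.example.com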


For the Load workers, no additional service needs to be deployed because the worker pods themselves do not need to support any inbound communication—they connect directly to the Load master pod.


After the replication controller deploys the Load workers, you can return to the Load master web interface and see that the number of slaves corresponds to the number of deployed workers.


Scaling clients


Scaling up the number of simulated users requires an increase in the number of Load worker pods. As specified in the Load worker controller, the replication controller deploys 10 Load worker pods. To increase that number, Kubernetes offers the ability to resize controllers without redeploying them.
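Assuming the controller is named load-worker, a single kubectl command resizes it in place, for example from 10 to 20 pods:

    kubectl scale replicationcontroller load-worker --replicas=20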




Metrics

Key metrics to watch during a load test include:

  • API server metrics
  • API responsiveness
  • Pod start-up times
  • Server request latencies
  • IOPS
  • DNS server CPU usage
  • DNS service memory usage
  • Container CPU usage
  • Container memory usage
  • Cron jobs


Node resource utilization – there are many metrics in this area: network bandwidth, disk utilization, and CPU and memory utilization are examples. Using these metrics, you can determine whether to increase or decrease the number and size of nodes in the cluster.


The number of nodes – the number of nodes available is an important metric to follow. This allows you to figure out what you are paying for (if you are using cloud providers), and to discover what the cluster is being used for.


Running pods – the number of pods running will show you if the number of nodes available is sufficient and if they will be able to handle the entire workload in case a node fails.
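These checks map to simple kubectl commands (note that kubectl top requires a metrics pipeline such as Heapster or metrics-server to be running in the cluster):

    kubectl get nodes                    # how many nodes the cluster has
    kubectl top nodes                    # CPU and memory utilization per node
    kubectl get pods --all-namespaces    # how many pods are running, and where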




Capacity – describes the resources available on the node: CPU, memory, and the maximum number of pods that can be scheduled onto the node.
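These values appear in each node's status and can be inspected with kubectl get node <node-name> -o yaml; the figures below are illustrative:

    status:
      capacity:
        cpu: "4"
        memory: 16384Mi    # hypothetical 16 GiB node
        pods: "110"        # default maximum pods per node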


Pod Monitoring


The act of monitoring a pod can be separated into three categories: (1) Kubernetes metrics, (2) container metrics, and (3) application metrics.


Using Kubernetes metrics, we can monitor how a specific pod and its deployment are being handled by the orchestrator. The following information can be monitored: the number of instances a pod currently has versus the number expected (if the number is low, your cluster may be out of resources), how an in-progress deployment is going (how many instances were changed from an older version to a new one), health checks, and some network data available through network services.


Container metrics are available mostly through cAdvisor and exposed by Heapster, which queries every node about the running containers. In this case, metrics like CPU, network, and memory usage compared with the maximum allowed are the highlights.


Finally, there are the application-specific metrics. These metrics are developed by the application itself and relate to the business rules it addresses. For example, a database application will probably expose metrics related to the state of its indices and statistics concerning tables and relationships. An e-commerce application might expose data concerning the number of users online and how much money the software made in the last hour.

Metrics of this last type are commonly exposed directly by the application; if you want to keep closer track of them, you should connect the application to a monitoring system.

Methods for Monitoring Kubernetes

I’d like to mention two main approaches to collecting metrics from your cluster and exporting them to an external endpoint. As a guiding rule, the metric collection should be handled consistently over the entire cluster. Even if the system has nodes deployed in several places all over the world or in a hybrid cloud, the system should handle the metrics collection in the same way, with the same reliability.

Method 1 – Using DaemonSets


One approach to monitoring all cluster nodes is to use a special kind of Kubernetes workload called a DaemonSet. Kubernetes ensures that every node runs a copy of the DaemonSet's pod, which effectively enables one deployment to watch each machine in the cluster. As nodes are destroyed, their pods are also terminated. Many monitoring solutions use the DaemonSet structure to deploy an agent on every cluster node. In this case, there is no general solution; each tool has its own software for cluster monitoring.
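A minimal sketch of a monitoring-agent DaemonSet (the agent name and image are placeholders):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: monitoring-agent
    spec:
      selector:
        matchLabels:
          name: monitoring-agent
      template:
        metadata:
          labels:
            name: monitoring-agent
        spec:
          containers:
            - name: agent
              image: example/monitoring-agent:latest   # placeholder image
              resources:
                limits:
                  memory: 200Mi   # keep the agent lightweight on every node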


Method 2 – Using Heapster


Heapster, on the other hand, is a uniform platform adopted by Kubernetes for sending monitoring metrics to a storage system. Heapster acts as a bridge between a cluster and a storage backend designed to collect metrics. The supported storage backends are listed in the Heapster documentation.


Unlike DaemonSets, Heapster acts as a normal pod and discovers every cluster node via the Kubernetes API. Using Kubelet (a tool that enables master-node communications) and cAdvisor (a container monitoring tool that collects metrics for each running container), the bridge can store all relevant information about the cluster and its containers.
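As an illustration of how Heapster was typically wired up, its container was started with a --source flag pointing at the cluster and a --sink flag pointing at the storage backend; the image tag and InfluxDB sink URL below are assumptions, not taken from the original article:

    containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.5.4   # assumed image and tag
        command:
          - /heapster
          - --source=kubernetes:https://kubernetes.default
          - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086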


A cluster can consist of thousands of nodes and an even greater number of pods. It is virtually impossible to observe each one individually, so it is important to create meaningful labels for each deployment. For example, labeling database-intensive pods enables the operator to identify whether there is a problem with the database service.
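For instance (the label key and value here are hypothetical), a label applied in the pod template can later be used as a selector:

    metadata:
      labels:
        tier: database   # hypothetical label marking database-intensive pods

    # List only the labeled pods:
    #   kubectl get pods -l tier=database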



Did you try using a Deployment in your setup? The Deployment resource, which creates a ReplicaSet, is now the recommended way to achieve replication.

Chandra shekar S

Performance Specialist at NTT DATA Services

6y

Thanks for publishing, worthy read
