KUBERNETES CLUSTER HEALTH CHECKS USING ROBOT DATA DRIVEN LIBRARY

SHORT INTRODUCTION TO ROBOT FRAMEWORK

Robot Framework is a generic open-source test automation framework suitable for both end-to-end acceptance testing and acceptance test-driven development (ATDD). Robot Framework originated at Nokia Networks and has been open source since 2008. When creating test cases from test designs or from user-story acceptance criteria, a Robot Framework test script combines the test case documentation and the automated script in one place. Because scripts are written in plain text, they are well suited to being kept in your version control system. After a run, a test report is generated automatically, which serves as the test evidence.

The official Robot Framework website (https://robotframework.org) hosts the project documentation and user guide.

Data-Driven Testing

Data-driven testing is a test design and execution method in which test scripts read their data from sources like XLS, XML, and CSV files rather than using hard-coded values. This strategy lets automation engineers implement a single test script that runs once for every row of test data in the table.

For example, to validate the user authentication to a website, the below scenarios may be tested:

a) Valid email, invalid password

b) Valid email, blank password

c) Unregistered email, valid password

d) Valid email, valid password

The above scenarios would otherwise require a tester to create four different test cases and run each one individually. Since all the tests effectively follow the same procedure, that is, submitting after providing the email and password, we can instead loop through a series of data values in the same test case and keep track of which rows pass and which fail, as sketched below.
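As a sketch, with hypothetical column names and credentials, the four scenarios collapse into one data table that a single templated test consumes:

${email},${password},${expected}
valid@example.com,wrongpass,failure
valid@example.com,,failure
unregistered@example.com,somepass,failure
valid@example.com,correctpass,success

Each row becomes one generated test case, and the ${expected} column lets the same keyword assert both the negative and the positive outcomes.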

Set-up for Data Driven Testing in Robot Framework

DataDriver is a "library listener" and does not provide keywords of its own. Because DataDriver is a listener and a library at the same time, it registers itself as a listener automatically when the library is imported into a test suite.

Install the DataDriver library with the pip command below:

pip install robotframework-datadriver

To use it, import it as a Library in your suite. The first (optional) argument sets the name or path of the data file. Without any options set, it loads a .csv file that has the same name and path as the test suite's .robot file.
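For example, assuming a suite file named login_tests.robot, the two import styles would look like this:

*** Settings ***
# With no options, DataDriver looks for login_tests.csv next to the suite file
Library    DataDriver
# Alternatively, point it at an explicit data file:
# Library    DataDriver    file=testdata/login_data.csv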

How Data-Driver Works

When DataDriver is used in a test suite, it is activated before the suite starts. It uses Robot Framework's Listener Interface Version 3 to read and modify the test specification objects. After activation it searches for the Test Template keyword and analyzes the [Arguments] it takes. As a second step, it loads the data from the specified data source. Based on the Test Template keyword, DataDriver creates as many test cases as there are data sets in the data source.

With the default CSV data source, DataDriver reads the values for the Test Template keyword's arguments from the CSV columns whose names match the [Arguments]. One test case is created for each line of the CSV data table. It is also possible to specify test case names, tags, and documentation for each test case in the suite-specific CSV file, as shown below.
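For instance, DataDriver's CSV format reserves columns for the test case name, tags, and documentation alongside the argument columns. A file for a hypothetical login template taking ${email} and ${password} might look like:

*** Test Cases ***,${email},${password},[Tags],[Documentation]
Valid email and invalid password,valid@example.com,wrongpass,negative,Password does not match
Valid email and blank password,valid@example.com,,negative,Password field left empty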

KubeLibrary

KubeLibrary is a wrapper around the official Python Kubernetes client. It enables you to assert the status of various objects in your Kubernetes clusters. As the library can be integrated with any Robot Framework test suite, it is ideal for verifying the readiness of your system under test by asserting the status of your nodes, deployments, pods, configmaps, and other Kubernetes objects before running any end-to-end tests.

Install KubeLibrary with the pip command below:

pip install robotframework-kubelibrary
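As a minimal sketch of the library in use (it relies on the two pod keywords that also appear later in this article; the kube-system namespace and the .* name pattern are just illustrative choices):

*** Settings ***
Library    KubeLibrary

*** Test Cases ***
All kube-system Pods Are Running
    # Match every pod name (.*) in the kube-system namespace
    @{pods}=    Get Pod Names In Namespace    .*    kube-system
    FOR    ${pod}    IN    @{pods}
        ${status}=    Get Pod Status In Namespace    ${pod}    kube-system
        Should Be Equal As Strings    ${status}    Running
    END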

Integrating the data-driven approach for K8s health checks

As mentioned in the earlier section, KubeLibrary is essentially used to validate the status of K8s components inside a cluster. We can also use the Process library in Robot Framework to execute custom Unix shell or Python scripts for specific validations. We can combine either approach with the DataDriver library to extend these validations across the cluster for multiple use cases.

Below are two such examples:

Validating Pod Status across namespaces in a K8s cluster

In a production K8s cluster, multiple custom microservice applications are typically deployed as pods in a single cluster. These pods usually interact with one another by design, and the overall functionality of the use case depends on all of them. It is therefore critical, as part of a sanity check, to make sure these pods are up and running. Any error must be reported to the development/integration team.

Let's look at an example of how to achieve this using DataDriver and KubeLibrary. Assume we have four namespaces with the following applications, captured in a data file named pod_validation.csv:

${namespace},${pod_pattern},${microservice}
<ns-1>,<app1>,Application1 for NS1
<ns-1>,<app2>,Application2 for NS1
<ns-1>,<app3>,Application3 for NS1
<ns-2>,<app4>,Application1 for NS2
<ns-2>,<app5>,Application2 for NS2
<ns-3>,<app6>,Application1 for NS3
<ns-4>,<app7>,Application1 for NS4

Let's place this file under a directory named testdata; it acts as the DataDriver input for the Robot script. We can now create the Robot script (VALIDATE_PODS_IN_A_CLUSTER.robot) that uses this file.

*** Settings ***
Library           OperatingSystem
Library           Collections
Library           KubeLibrary    context=${cluster}
Library           String
Library           DataDriver    file=testdata/pod_validation.csv    dialect=unix
Test Template     validate_pods_in_running_state


*** Test Cases ***
Validate pods for ${microservice} in ${cluster}
    [Tags]    pod_validation
    ${pod_pattern}    ${namespace}    ${microservice}


*** Keywords ***
validate_pods_in_running_state
    [Arguments]    ${pod_pattern}    ${namespace}    ${microservice}
    @{namespace_pods}=    Get Pod Names In Namespace    ${pod_pattern}    ${namespace}
    ${num_of_pods}=    Get Length    ${namespace_pods}
    Should Be True    ${num_of_pods} >= 1    Number of pod instances found for "${microservice}" is 0
    FOR    ${pod}    IN    @{namespace_pods}
        ${status}=    Get Pod Status In Namespace    ${pod}    ${namespace}
        Should Be True    '${status}' == 'Running'    Current pod status is :: ${status}
    END
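Because the suite imports KubeLibrary with context=${cluster}, the target context must be supplied on the command line (cluster1 below is a placeholder context name):

robot --variable cluster:cluster1 VALIDATE_PODS_IN_A_CLUSTER.robot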



The Robot script executes one test case for each pod pattern, and the overall result is written to an HTML test report.

[Screenshot: generated HTML test report, showing one failed test case]

The failed test case indicates that Application 3 in namespace 1 has no running pods. This should be reported to the development team for immediate action, as it may hamper the overall functionality.

Validating HTTP-Proxy status in a K8s cluster

KubeLibrary does not yet provide full support for every component in the Kubernetes architecture. To validate such components, we can use the "Run Process" keyword from the Process library to execute custom shell or Python scripts. Any logic that is unavailable in KubeLibrary or any other Robot library can be implemented through such custom scripting.

Below is an example of validating httpproxies in a K8s cluster.

Similar to the above use case, we first define a data set with two columns: the httpproxy pattern and the microservice name.

Let's name the data file validate_httpproxies.csv and place it under testdata.

${check_pattern},${microservice}
<pattern1>,<Application1>
<pattern2>,<Application2>
<pattern3>,<Application3>
<pattern4>,<Application4>        

Next, we create a shell script that checks the status of each httpproxy and prints an error if it is invalid. The script is placed under a scripts directory and named "validate_httpproxies_clusterwise.sh".

#!/bin/sh

# Arguments: httpproxy name pattern, microservice name, cluster context
check_pattern=$1
microservice=$2
cluster_name=$3

# Switch kubectl to the target cluster context
kubectl config use-context "$cluster_name" > /dev/null

# Grab the STATUS column (5th field) for the matching httpproxy
proxy_status=$(kubectl get httpproxy -A | grep "$check_pattern" | awk '{ print $5 }')

if [ "$proxy_status" = "valid" ]; then
    echo "Status for httpproxy of $microservice is valid"
else
    echo "Error: Status for httpproxy of $microservice is invalid"
fi
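Before wiring the script into the suite, it can be sanity-checked on its own; with placeholder values the call would be:

sh scripts/validate_httpproxies_clusterwise.sh <pattern1> Application1 cluster1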

Now we create a Robot file (VALIDATE_HTTPPROXY_IN_A_CLUSTER.robot) that runs the script for every data row. A test case passes if the shell script does not print any error in its output.

The Robot script would be invoked as shown below:

robot --variable cluster:cluster1 VALIDATE_HTTPPROXY_IN_A_CLUSTER.robot

*** Settings ***
Library           OperatingSystem
Library           Collections
Library           KubeLibrary
Library           String
Library           Process
Library           DataDriver    file=testdata/validate_httpproxies.csv    dialect=unix
Test Template     validate_use_cases


*** Test Cases ***
Validate httpproxy for ${microservice} in ${cluster}
    ${check_pattern}    ${microservice}


*** Keywords ***
validate_use_cases
    [Arguments]    ${check_pattern}    ${microservice}
    # ${cluster} is a suite-level variable set with --variable on the command line
    ${result}=    Run Process    ${CURDIR}/scripts/validate_httpproxies_clusterwise.sh    ${check_pattern}    ${microservice}    ${cluster}
    # Set Test Message    \n${result.stdout}    console=True
    Should Not Contain    ${result.stdout}    Error    ignore_case=True

This suite, too, executes one test case for each of the patterns, and the overall result is written to an HTML test report.

[Screenshot: generated HTML test report for the httpproxy validation suite]

Conclusion

Data-driven testing is a great alternative approach if your enterprise has large volumes of data to run against the same scripts. The set-up may take a bit of time, but an efficient design makes the development of automated test cases much faster.
