Comprehensive Guide to Develop Reliable Tests - Part One
Gabriel Martins


Hi everyone.

I hope you are having a good week!

Recently, I started developing a repository to practice my skills and to increase its complexity as they grow. I’m happy to say that the first part is ready to be shared. I hope it will be useful to help you start testing and give you some insights. If you have experience with testing, feel free to dive into this article; it may take some time, but it is worth it. If you are not familiar with testing but would like to know how it works, you can check out the images. They’ll give you a basic idea of how everything works and how it is structured.

Feel free to send me a message on LinkedIn; I would love to help you.

GitHub: https://github.com/gabrielhenriquemartins

Linkedin: https://www.dhirubhai.net/in/gabriel-henrique-martins-298b23189

Website under test: https://opensource-demo.orangehrmlive.com/web/index.php


Requirements

  • Docker Desktop
  • Latest Python version
  • Libraries: robot.api (Robot Framework), influxdb_client


Overview

Before I start detailing each part, I'd like to provide a broader perspective of the results. In this section, I will present a question first and then explain the solution I used to address the problem. This will help you understand the reason behind my decisions and my thought process.

With that in mind, let's get started!

What is the first step?

The first step to becoming a great tester is to understand and master a testing framework; everything begins with reliable testing code. Several tools are available, such as Cypress, Karate, TestCafe, and Robot Framework. In this case, we will use Robot Framework to develop our tests.

How can I document and search for new tests in my project?

Documentation is crucial for maintaining a well-organized repository and helps new testers quickly understand the codebase. To ensure clarity, each keyword developed for our tests should include a documentation section that explains its functionality and usage. Once the documentation is complete, we will use a “libdoc” command to generate the keyword documentation.

Robot Documentation

How can we enhance the visibility of the test status?

At this stage, the approach depends on the project management tool in use. We will integrate Jira with Xray for this purpose. First, you will need to create a Jira account and install the Xray plugin. We will then authenticate with Jira using the REST API and automate the creation of tests based on specific conditions defined in our code. Once tests are executed, we will import the results into Jira as a new "Test Execution" issue, linking it to our test plan.

Jira Test Executions

How can we verify the execution over time?

Jira and Xray provide test execution details (it is possible to add additional configurations, but these are out of the scope of this project). To better visualize test results over time, we will create a dashboard that aggregates some metrics, such as the total number of tests run each month, pass ratios, total execution time and etc. For this purpose, we will use InfluxDB and Grafana. InfluxDB will store the data, while Grafana will be used to build and display the dashboard.

Grafana Dashboard

How can I set up the whole project and its dependencies on another machine?

Don't worry about dependencies: they can be managed effectively by building a Docker image and running each application in a separate container. The Docker image includes all necessary dependencies, ensuring that the container environment remains consistent every time it is run.

PowerShell and Docker containers
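
As a rough illustration of the idea only (the repository's actual setup may rely on a Dockerfile or docker-compose; the image tags, container names and ports below are the public defaults, not taken from the repo), the two dashboard services could be started from Python like this:

# Sketch only: start Grafana and InfluxDB containers on their default ports.
# Container names and image tags are illustrative, not from the repository.
import subprocess

def start_container(name: str, image: str, port: str) -> None:
    # Equivalent to: docker run -d --name <name> -p <port> <image>
    subprocess.run(
        ["docker", "run", "-d", "--name", name, "-p", port, image],
        check=True,
    )

start_container("grafana", "grafana/grafana:latest", "3000:3000")
start_container("influxdb", "influxdb:2.7", "8086:8086")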

That’s it! So, let’s get started!


Robot Framework

First things first, let’s detail the Robot Framework code. Robot Framework is a Python-based tool whose files are organized into four main sections: Settings, Variables, Test Cases and Keywords. Here’s a brief overview of each section (a minimal runnable example follows the list):

  • Settings: This section declares all libraries and resource folders used by the current file;
  • Variables: This section defines all variables;
  • Test Cases: This section outlines the test scenarios, including the test name and scope;
  • Keywords: This section contains individual actions. Typically, a set of keywords forms a test case.
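
To make the four sections concrete, here is a minimal, self-contained sketch. It writes a tiny .robot file and runs it through robot.run; the file, variable and keyword names are illustrative, and it assumes the robotframework and robotframework-browser packages are installed (with rfbrowser init already executed).

# Minimal sketch: a small .robot file showing the four sections, executed via robot.run.
# All names here are illustrative, not taken from the repository.
from pathlib import Path
from robot import run

SAMPLE_SUITE = """\
*** Settings ***
Library    Browser

*** Variables ***
${URL}    https://opensource-demo.orangehrmlive.com/web/index.php

*** Test Cases ***
Open Login Page
    [Documentation]    Opens the OrangeHRM demo login page.
    Open Demo Site

*** Keywords ***
Open Demo Site
    [Documentation]    Starts a headless Chromium browser and navigates to ${URL}.
    New Browser    chromium    headless=True
    New Page    ${URL}
"""

Path("sample.robot").write_text(SAMPLE_SUITE)
run("sample.robot", outputdir="Results")   # generates log.html, report.html and output.xml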


To keep our project organized, we should follow these three key rules:

  • Separate Keywords from Test Cases: This ensures that keywords can be reused across different test scenarios;
  • Isolate Variables and Libraries: Place them in separate files to simplify maintenance and avoid duplication. This way, there is a single place to update existing entries or declare new ones. There are other ways to organize your variables, such as the Page Object Model, but they will not be covered in this article;
  • Document Each Keyword: Adding documentation for each keyword enhances clarity and usability.

Project Structure

Robot Framework supports various libraries, such as the Excel Library for handling XLSX files, the Requests Library for API interactions, the Selenium and Browser Libraries for UI testing, and many others. In this document, we will primarily use the Requests Library to interface with the Jira and Xray APIs and the Browser Library for web testing.

The Browser Library offers several advantages over Selenium, including video recording, headless mode execution (no need for a WebDriver), and other configurable parameters.

Browser parameters

I won't go into the details of command-line execution; you can find all the necessary information in the official Robot Framework documentation.

When executing tests, Robot Framework generates a log and a video file in the folder specified as the output directory. To prevent overwriting previous logs, a new folder is created based on the current date: a basic shell script creates an empty folder named with the current date, and that date is passed to Robot Framework as part of the output path. This ensures that the last log from each day is preserved.

Script folder creation

Because the generated video files often have long, cryptic names, another shell script renames them.
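
The repository uses shell scripts for this; as a sketch of the same idea in Python (the folder layout and the .webm extension used for the Browser library's recordings are assumptions):

# Sketch: create a dated results folder and give the recorded videos readable names.
import datetime
from pathlib import Path

today = datetime.date.today().isoformat()            # e.g. 2024-05-31
results_dir = Path("Results") / today                # hypothetical layout
results_dir.mkdir(parents=True, exist_ok=True)

# Robot Framework would then be executed with --outputdir Results/<today>.
# Afterwards, rename the auto-generated video files to something readable.
video_dir = results_dir / "browser" / "video"        # assumed video folder under the output directory
if video_dir.exists():
    for index, video in enumerate(sorted(video_dir.glob("*.webm")), start=1):
        video.rename(video_dir / f"test_run_{today}_{index}.webm")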

With these processes in place, we can effectively compare results from different executions. Below is an example of the log output from an execution:

Robot Results

LIBDOC

LibDoc is a tool for documenting keywords in Robot Framework. It generates an HTML file that includes descriptions of all defined keywords and their arguments. For more information, refer to the official Libdoc documentation.

A basic documentation command (using the libtoc helper, which builds a table of contents on top of Libdoc output) looks like this:

libtoc --output_dir /opt/robotframework/Docs/Library --toc_file output.html /opt/robotframework/Resource        

It will generate an HTML file like the one below.

LibDoc Keyword documentation
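
For reference, the same documentation can also be generated programmatically with Robot Framework's own robot.libdoc module, one HTML file per resource file. The paths below mirror the command above but are illustrative; adjust the glob pattern if your resources use the .robot extension.

# Sketch: generate one Libdoc HTML file per resource file (paths are illustrative).
from pathlib import Path
from robot.libdoc import libdoc

resource_dir = Path("/opt/robotframework/Resource")
output_dir = Path("/opt/robotframework/Docs/Library")
output_dir.mkdir(parents=True, exist_ok=True)

for resource in resource_dir.glob("*.resource"):
    libdoc(str(resource), str(output_dir / f"{resource.stem}.html"))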

Jira and Xray

Jira with Xray provides a great platform for project management and test control. We’ll use a basic setup in Jira to simulate a project with a test plan and link our test executions to this plan.

Here are the steps to configure Jira and Xray:

  • Create a Jira Account: Sign up for a Jira account at Atlassian;
  • Install the Xray Plugin: Once logged into Jira, install the Xray plugin. You can find it at Xray;
  • Create a Project: Set up a new project and initiate the first sprint;
  • Create a New Story: Add a new story to the project;
  • Create a Test Plan: Define a test plan and associate it with the newly created story;
  • Generate API Credentials:

  1. Generate an API token for authentication in Jira;
  2. Generate a Client ID and Client Secret in Xray.

Jira and Xray Tokens

Although these configurations might seem extensive, they are essential for simulating a real-world scenario. Let’s move on!

Next, you need to update your Python scripts with this information. There are two files to modify:

Update Utils/import_test_to_jira.py:

  • Project_key
  • Test_plan
  • Jira URL
  • JIRA API TOKEN
  • User email

Update Utils/import_results_to_jira.py:

  • Project_key
  • Test_plan
  • Jira URL
  • JIRA API TOKEN
  • User email
  • Client_id
  • Client_secret


Fields to configure in import_test_to_jira.py

Ensure that new test cases include the line “log to console Jira issue: OTS-XXX” and a clear test case name. The Python script searches for “OTS-XXX” in your robot files to identify new test cases. If the identified test case does not exist yet, the script will create it in Jira, replacing “OTS-XXX” in your robot file with the newly created issue key and appending “[Automated]” to the test case name. Maintain this structure to ensure the correct creation of new tests. If a test case lacks the “OTS-XXX” label, it will not be considered for creation.
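
The real logic lives in Utils/import_test_to_jira.py; the sketch below only illustrates the idea. The create-issue endpoint and field names are the standard Jira Cloud ones and the "Test" issue type is the one Xray provides, but the URL, credentials, project key and test folder are placeholders, and deriving the summary from the file name is a simplification.

# Sketch: find "OTS-XXX" placeholders in .robot files and create a Jira "Test" issue for each.
import re
from pathlib import Path

import requests

JIRA_URL = "https://your-domain.atlassian.net"    # placeholder
AUTH = ("user@example.com", "JIRA_API_TOKEN")     # placeholder e-mail + API token
PROJECT_KEY = "OTS"                               # placeholder project key

def create_test_issue(summary: str) -> str:
    """Create an Xray 'Test' issue in Jira Cloud and return its key (e.g. OTS-123)."""
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "summary": f"{summary} [Automated]",
            "issuetype": {"name": "Test"},
        }
    }
    response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    response.raise_for_status()
    return response.json()["key"]

for robot_file in Path("Tests").glob("**/*.robot"):   # hypothetical test folder
    content = robot_file.read_text()
    if "OTS-XXX" in content:
        key = create_test_issue(robot_file.stem)
        robot_file.write_text(content.replace("OTS-XXX", key, 1))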

Once the test cases are created, Jira will link them to the test plan.

Before and after running the "import test to jira" script

The import script checks the most recent Robot Framework execution on the current date, creates an output.json based on the output.xml, and sends the JSON file to Jira. The results are associated with a new “Test Execution” issue in Jira. You can view all details of the test run under the test plan in the “Test Execution” section.

Test Execution
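
As a rough sketch of what such an import looks like against the Xray Cloud REST API v2 (the authenticate and generic import endpoints shown are the documented ones, but the client ID/secret and file path are placeholders, and the repository's script may build its JSON payload differently):

# Sketch: authenticate against Xray Cloud and import a result file as a new Test Execution.
import json

import requests

XRAY_BASE = "https://xray.cloud.getxray.app/api/v2"
CLIENT_ID = "YOUR_CLIENT_ID"          # placeholder
CLIENT_SECRET = "YOUR_CLIENT_SECRET"  # placeholder

# 1. Exchange the client ID/secret for a bearer token.
token = requests.post(
    f"{XRAY_BASE}/authenticate",
    json={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
).json()

# 2. Send the prepared Xray JSON (built from output.xml) as a new Test Execution.
with open("output.json", encoding="utf-8") as handle:
    results = json.load(handle)

response = requests.post(
    f"{XRAY_BASE}/import/execution",
    json=results,
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
print("Created Test Execution:", response.json().get("key"))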

InfluxDB and Grafana

To ensure software quality, we need to compare multiple executions of our Robot Framework automation across different days and check their stability. This allows us to verify that the software behaves consistently over time and through repeated runs, identify any significant fluctuations in execution time, and confirm the stability of the software during the analyzed period.

For this reason, we have configured InfluxDB and Grafana to provide a clear overview of the last 30 days of executions. The created dashboard includes several valuable metrics, such as the total number of tests, pass ratio, and execution time, covering some of the most critical aspects of our test execution.

The Python script used to send data to InfluxDB can be found in Utils/server.py. To use it, just specify the path to your grafana/influxdb backup folder.
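
Utils/server.py contains the real implementation; the sketch below only shows the core idea of reading totals from output.xml with robot.api and writing a point with influxdb_client. It assumes Robot Framework 4+ (for the skipped counter), and the URL, token, org, bucket and measurement names are placeholders.

# Sketch: push the totals of one Robot Framework run into InfluxDB.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
from robot.api import ExecutionResult

result = ExecutionResult("Results/output.xml")        # path is illustrative
totals = result.statistics.total                      # passed / failed / skipped counters

point = (
    Point("robot_execution")                          # measurement name is a placeholder
    .tag("suite", result.suite.name)
    .field("passed", totals.passed)
    .field("failed", totals.failed)
    .field("skipped", totals.skipped)
    .field("elapsed_ms", result.suite.elapsedtime)
)

client = InfluxDBClient(url="http://localhost:8086", token="INFLUX_TOKEN", org="my-org")
client.write_api(write_options=SYNCHRONOUS).write(bucket="robot_metrics", record=point)
client.close()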

The Grafana container is created from scratch on every run. To load the dashboard configuration, timezone settings, and queries for visualizing all the data, a script that restores the entire Grafana volume has been developed. This script can be found in /Metrics/backups/restore_grafana_and_influxdb.py. It points to a previously created backup and restores all information from that specific date.

Since I've mentioned the restore script, here's a brief explanation of the backup script. The backup script, located at /Metrics/backups/backup_grafana_and_influxdb.py, creates a copy of the current Grafana and InfluxDB volumes and stores the data in two separate folders, /Metrics/Backup/Grafana and /Metrics/Backup/InfluxDB, each identified by the current date. This Python script makes it possible to generate a backup at the end of each execution.

Backup and Restore Grafana and InfluxDB
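
The actual backup and restore scripts live under /Metrics/backups; the sketch below shows the common pattern they are likely built on, archiving a named Docker volume into a dated tar file via a throwaway helper container. The volume and folder names are placeholders.

# Sketch: back up a named Docker volume to Metrics/Backup/<name>/<date>.tar.gz
# and restore it again, using a temporary alpine container. Names are placeholders.
import datetime
import subprocess
from pathlib import Path

def backup_volume(volume: str, target_dir: Path) -> Path:
    target_dir.mkdir(parents=True, exist_ok=True)
    archive = f"{datetime.date.today().isoformat()}.tar.gz"
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{volume}:/volume",
         "-v", f"{target_dir.resolve()}:/backup",
         "alpine", "tar", "czf", f"/backup/{archive}", "-C", "/volume", "."],
        check=True,
    )
    return target_dir / archive

def restore_volume(volume: str, archive: Path) -> None:
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{volume}:/volume",
         "-v", f"{archive.parent.resolve()}:/backup",
         "alpine", "tar", "xzf", f"/backup/{archive.name}", "-C", "/volume"],
        check=True,
    )

grafana_backup = backup_volume("grafana_data", Path("Metrics/Backup/Grafana"))
restore_volume("grafana_data", grafana_backup)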

To access the InfluxDB and Grafana dashboards locally on your machine, use the following ports:

  • Grafana: https://localhost:3000
  • InfluxDB: https://localhost:8086

Note: Telegraf is currently configured to collect machine performance data into InfluxDB. Although this data is not being used at the moment, it is set up for future dashboards.

Grafana Dashboard

PowerShell

As you can see, several scripts need to be orchestrated during execution. To streamline this process, a PowerShell script was developed to automate everything in sequence, eliminating the need to handle command lines manually.

The execution scripts can be found in /Utils/execution. Three scripts are available:

  • windows_execution.bat: Executes the script without updating Jira.
  • windows_execution_update_jira.bat: Executes the script, creating and updating test cases in Jira.
  • windows_scheduler.bat: Redirects all logs to a .txt file. This script is currently configured to work with windows_execution.bat. If needed, you can modify it to include Jira imports by editing the file.

In this project, we will only use windows_execution_update_jira.bat. Feel free to improve the scheduler or make any modifications. To configure the script, change the following two variables:

  • DIRECTORY: Specify the path to your test repository.
  • DockerRepo: Specify your Docker Desktop folder.


Once configured, simply execute the .bat file. The script will:

  1. Open Docker Desktop if it is not already open.
  2. Destroy all existing containers and volumes.
  3. Recreate the containers.
  4. Execute the Robot Framework script.
  5. Generate the library documentation.
  6. Create and update test cases in Jira, if any.
  7. Create a test execution in Jira and update it with the Robot Framework results.
  8. Restore the Grafana and InfluxDB databases.
  9. Send the data of all executions from the last 30 days to InfluxDB.
  10. Create a backup file.

Powershell Script

Conclusion - Part One

In this first part of the project, we have achieved the following goals:

  • Development of a Robot Framework UI automation;
  • Creation of a Grafana dashboard to visualize all executions side by side;
  • Automated creation of test cases in Jira;
  • Importing test results into Jira;
  • Backups of the dashboard;
  • Code documentation.


Next steps - Part Two

The next steps will be developed in a separate repository, starting from this one. The plan includes:

  • Incorporate a Jenkins container for orchestration;
  • Backend tests using Robot Framework;
  • Introduction of Cypress to Orange HRM;
  • Jira validations using the Jira API, followed by a new dashboard with:

  • Most used test cases;
  • Obsolete tests;
  • Empty tests;
  • Number of tests by project and more...


Future Backlog:

  • Gatling for performance tests;
  • Postman with Newman;
  • A new Gatling Dashboard;
  • A new dashboard displaying machine performance and internet stability during execution;
  • Kubernetes (K8s) integration.


I hope this helps. Let's keep improving every day! Cheers!!!        
Marciel de Lima Oliveira

Staff Engineer at Nokia | Optical/IP Networks | Network Test Engineer | R&D Systems Testing | Enthusiast of IT, Telecom, and Entrepreneurship

7 months ago

Hi Gabriel, great job! Lib Browser and Lib Doc were my tips, right? Joking apart, one of the biggest challenges in the automation process is defect analysis and root cause after regression. It is a time-consuming job, and it is still a manual process. What do you think about adding to your backlog something like an AI algorithm that could let you predict a standard deviation for defects and, in a second moment, perform an automated root cause analysis? Keep thinking outside the box!

Fabiano Louzada

Senior Software Quality Assurance Engineer

7 months ago

Waiting for part two, huh :)
