"Regression Testing and Test Automation": building a mini-framework in Java for test automation.
We have reached the fourth episode of our journey into the world of software testing. This article will focus on “Regression Testing” and “Test Automation”. We will also look at a mini framework I built to automate the testing of some parts of the software of a treadmill for cardiovascular exercise.
So let's start with a regression testing definition.
“Regression testing” is the selective testing phase intended to detect defects introduced during the modification of a system or a system component, in order to verify that the modifications have not caused unintended adverse effects and that the modified system or component still meets its specified requirements (IEEE definition). In simpler terms, regression testing tries to answer questions such as: “Does it still work after fixing it?”, “Does it still work after adding this feature?”, or even “Are we sure that, in adding this new functionality, the programmer did not break other parts of the system?”
For effective regression testing, it is crucial to plan it in parallel with the software development schedule. This ensures that testing keeps pace with the ongoing modifications and helps in maintaining the integrity of the software.
Do you remember the software development life cycle picture that we saw in the second article of this series (5.)? For every change or addition of new features to existing software, the bug trend shows peaks; these peaks are smoothed out by the collaborative effort of testers and developers during regression testing.
So regression testing provides overall stability to the software application by keeping a check on the functionality of existing features. It becomes an inevitable step after each new code change, ensuring that the system can withstand extensions and changes.
In the realm of software, particularly when there is an architectural deficit, the smallest change to the code can cause a domino effect capable of altering the core functionality of a product. Therefore, regression testing plays a key role in investigating product architecture, which is critical to determine the root cause of both product success and failure.
In this regard, a famous dilemma posed by the great computer scientist Robert Martin comes to mind: "THE GREATER VALUE".
The dilemma goes like this (1.):
Function or architecture? Which of these two provides the greater value? Is it more important for the software system to work, or is it more important for the software system to be easy to change?
If you ask the business managers, they’ll often say that it’s more important for the software system to work. Developers, in turn, often go along with this attitude. But it’s the wrong attitude. I can prove that it is wrong with the simple logical tool of examining the extremes.
You may not find this argument convincing. After all, there’s no such thing as a program that is impossible to change. However, there are systems that are practically impossible to change, because the cost of change exceeds the benefit of change. Many systems reach that point in some of their features or configurations.
If you ask the business managers if they want to be able to make changes, they’ll say that of course they do, but may then qualify their answer by noting that the current functionality is more important than any later flexibility. In contrast, if the business managers ask you for a change, and your estimated costs for that change are unsustainably high, the business managers will likely be furious that you allowed the system to get to the point where the change was impractical.
(by Robert Martin “Clean Architecture” page 42).
Sorry, with these considerations by Robert Martin I have perhaps drifted a little from the main topic, but as an engineer who has been involved in software development for more than thirty years, I am far too sensitive to discussions of software structure to resist the temptation.
So now let's go back to our main topic and see how to perform regression testing.
Consider an example where a software development company is working on releasing a new health-and-fitness app for mobile devices. The primary requirement is to release the first build with only the core features. Before product release, a black-box test is conducted with 1000 test cases to ensure the basic functionality. The initial build is ready to hit the market if it passes these tests successfully.
However, with the success of the first product, the business team comes back with a requirement to add diet management and other new premium features. The product team develops those and adds them to the existing app, but with the addition of new features, a regression test is required. Hence, they write 100 new test cases to verify the functionality of the new features. However, they also have to rerun the 1000 old test cases to ensure that the essential functions haven't been broken.
In the software world today there is a strong push towards agility. There is an emphasis on adopting an iterative process, push new code often and break things if necessary. Regression testing ensures that with frequent pushes, developers do not break things that already work. The regression testing example shown below emphasizes its importance.
In the image we see that when we add the F2 function to V1 version we must also repeat the test of F1 function before releasing V2 version, when we add F3 function to V2 version we must repeat the test of F1 and F2 functions before releasing V3 version, and so on.
Regression testing works this way and may sound tedious, but it is an effective method of finding occasional issues that don't always appear.
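The cumulative pattern described above can be sketched in plain Java (the `RegressionSuite` class and its method names are illustrative choices of mine, not part of any real framework):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Illustrative sketch: each version adds tests for its new feature,
// and a regression run re-executes every test accumulated so far.
public class RegressionSuite {
    private final List<String> names = new ArrayList<>();
    private final List<BooleanSupplier> tests = new ArrayList<>();

    public void add(String name, BooleanSupplier test) {
        names.add(name);
        tests.add(test);
    }

    // Runs every accumulated test and returns the number of failures.
    public int runAll() {
        int failures = 0;
        for (int i = 0; i < tests.size(); i++) {
            boolean ok = tests.get(i).getAsBoolean();
            System.out.println(names.get(i) + ": " + (ok ? "PASS" : "FAIL"));
            if (!ok) failures++;
        }
        return failures;
    }

    public static void main(String[] args) {
        RegressionSuite suite = new RegressionSuite();
        suite.add("F1 (from V1)", () -> 1 + 1 == 2);             // V1 ships with F1's test
        suite.add("F2 (added in V2)", () -> "ab".length() == 2); // V2 adds F2; F1 is rerun too
        suite.add("F3 (added in V3)", () -> true);               // V3 adds F3; F1 and F2 rerun
        System.out.println("failures: " + suite.runAll());
    }
}
```

Each new version only appends tests, and `runAll` always re-executes the whole accumulated suite: exactly the F1/F2/F3 pattern described above.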
Regression testing strategies vary for each organization (2.). However, some fundamental steps are always the same:
Regression testing involves the implementation of three basic techniques: retesting everything, selecting specific regression test cases, or prioritizing test cases.
Let’s understand each technique:
Understanding and implementing these regression testing techniques are vital steps in ensuring the continued stability and reliability of a software application. Each method offers unique advantages based on the specific requirements and changes introduced, providing flexibility and efficiency in the testing process.
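As a rough illustration of the "select" and "prioritize" techniques, the following sketch keeps only the test cases that touch changed modules and orders them by risk. The names (`module`, `risk`, `RegressionPlanner`) are my own assumptions for the example, not part of any real tool:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Illustrative sketch: filter test cases by the modules touched by the
// last change (selection), then run the riskiest ones first (prioritization).
public class RegressionPlanner {
    public record TestCase(String name, String module, int risk) {}

    public static List<String> planRun(List<TestCase> all, Set<String> changedModules) {
        return all.stream()
            .filter(t -> changedModules.contains(t.module()))           // selection
            .sorted(Comparator.comparingInt(TestCase::risk).reversed()) // prioritization
            .map(TestCase::name)
            .toList();
    }

    public static void main(String[] args) {
        List<TestCase> all = List.of(
            new TestCase("speed_ramp", "motor", 9),
            new TestCase("ui_colors", "display", 2),
            new TestCase("emergency_stop", "motor", 10));
        // Only "motor" changed: ui_colors is skipped, emergency_stop runs first.
        System.out.println(planRun(all, Set.of("motor")));
    }
}
```

The full "retest everything" technique would simply skip the `filter` step and run all cases.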
As we have already mentioned regression testing can significantly benefit from the incorporation of test automation. Automating the testing process improves efficiency, accelerates the identification of potential issues, and ensures a more complete evaluation of the software's functionality after changes.
The automation of repetitive test cases allows for faster feedback on the impact of changes, contributing to a more agile and reliable software development lifecycle.
Test automation covers two distinct activities: the automatic generation of test cases and the automatic execution of tests.
The automatic generation of test cases implies having a complete and structured requirements specification in documented use cases, and we will address this topic in the next article.
Instead the automation of test execution is the topic of this article, so let's start by looking for an answer to the following question:
Why opt for automation?
There are several persuasive reasons (3.), (4.). Automation allows for the easy repetition of tests, making it extremely useful for tasks like regression testing and stress testing. It becomes particularly advantageous when testing involves highly repetitive tasks with different values but the same procedure. Test cases, once created, hold significant value, as they are reusable across multiple products. Additionally, automation proves beneficial when testing time is limited, as it efficiently addresses the scenarios mentioned above.
Furthermore, test automation helps create a continuous integration environment where, after each code push, the test suite automatically runs against the new build. Using continuous integration and continuous delivery tools like Jenkins, we can create jobs that run test cases after a build is deployed and send the test results to stakeholders.
However, achieving complete automated execution is a complex process. It involves putting the system into the proper state, providing inputs, running the test case, collecting results, and ideally, verifying the results. This complexity represents a challenge in the automation process.
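Those steps can be made concrete with a minimal sketch (the `AutomatedTest` class and its shape are illustrative names of mine): the runner puts the system into a known state, feeds an input, collects and verifies the result, and always restores the environment.

```java
import java.util.function.Function;

// Illustrative sketch of the steps of automated execution:
// setup -> provide input and run -> collect result -> verify -> teardown.
public class AutomatedTest<I, O> {
    private final Runnable setup;        // put the system into the proper state
    private final Function<I, O> run;    // provide inputs and run the test case
    private final Runnable teardown;     // restore normal conditions

    public AutomatedTest(Runnable setup, Function<I, O> run, Runnable teardown) {
        this.setup = setup;
        this.run = run;
        this.teardown = teardown;
    }

    // Returns true when the collected result matches the expected one.
    public boolean execute(I input, O expected) {
        setup.run();
        try {
            O actual = run.apply(input);     // collect the result
            return expected.equals(actual);  // verify the result
        } finally {
            teardown.run();                  // always clean up, even on failure
        }
    }

    public static void main(String[] args) {
        AutomatedTest<Integer, Integer> doubler =
            new AutomatedTest<>(() -> {}, x -> x * 2, () -> {});
        System.out.println(doubler.execute(3, 6));
    }
}
```

In a real system each of these lambdas hides most of the complexity: putting a physical machine into a known state and reading back a result are usually the hard parts, as we will see with the treadmill.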
Despite its advantages, automation comes with associated costs. The cost of automation could exceed that of manual testing. Moreover, utilizing automation tools requires a non-trivial understanding of the tools themselves. Automated tests also have a finite lifetime, sometimes relatively short.
Observations in the field of test automation reveal that automated tests are 3 to 30 times more expensive to create and maintain than manual ones, and they can break easily when the system under test undergoes changes.
Sometimes it is better to develop a specialized framework for automating tests of your product rather than using general tools like Selenium, Ranorex Studio, etc.
As an example of Test Automation in software testing let's consider a scenario where test automation is applied to the testing of a treadmill for cardiovascular exercise. In this example, we'll focus on key functionalities like speed control, incline adjustments, and safety features.
1. Speed Control:
2. Incline Adjustments:
3. Emergency Stop Functionality:
4. Heart Rate Monitoring:
5. Pre-set Workout Programs:
These are the Benefits of Test Automation for Treadmill Testing:
By incorporating test automation in treadmill testing, manufacturers can enhance the reliability, safety, and overall quality of their cardiovascular exercise equipment.
Let's now explore how to automate the testing of a treadmill. A couple of years ago, I developed a framework to automate the testing of one of the most critical software components of a treadmill responsible for acceleration/deceleration ramps, speeds, and safety mechanisms such as STOP and emergency features.
To provide context, let's delve into the general architecture of treadmill software. The software is essentially divided into two parts:
The two parts are connected via a fast USB channel, enabling command transmission from the Android part to the real-time part and event transmission in the reverse direction.
As you can understand, the real-time part is the most delicate and challenging to test, since it cannot be stimulated through the GUI. Imagine what happens when changes are made to the motor inverter component, or when a new inverter needs to be managed: it becomes terrifying, requiring the retesting of all behaviors related to the motor and machine safety. Covering all the behaviors of this software part solely through the user interface and physical devices such as the STOP and emergency buttons or joysticks is very complex.
So, together with the test engineers, we decided to list all the test cases to be covered and then transform them into easily implementable scripts. Once the testers agreed on the script grammar, I developed the framework using the open-source BeanShell interpreter. BeanShell is a Java source interpreter with scripting language features.
I adapted the interpreter to the agreed commands and created the automatic execution engine for the test suite. Below, we can see a script for a specific test case in the test suite.
// Setup
eq.Start().Wait(1);
// Test Body
eq.Set( MaxSpeed, 8 ).Set( Speed, 5 ).
Wait(30).TestEqual( Speed, 5 ).Set( Speed, 10 ).
Wait(30).TestEqual( Speed, 8 ).Set( Speed, 500 ).
Wait(15).TestEqual( Speed, 8 ).Set(Acceleration, 1).
Set( Speed, 3).Wait(5).TestEqual( Speed, 3 );
// Teardown
eq.Stop().Wait(5);
The script is divided into an initial part, a body, and a final part. The initial part (setup) is used to set up the execution environment under the conditions needed for the test. The body is the actual test. The final part (teardown) is used to restore the environment to normal conditions so that subsequent tests can be executed.
Let's analyze the script's grammar. Through "eq," we access the machine functionalities. "Set" allows us to configure properties such as Speed, MaxSpeed, and Acceleration, etc. "TestEqual" enables us to evaluate predicates on the properties. "Wait" is used for waiting, and "Start" and "Stop" are employed to activate or deactivate the motor.
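A minimal plain-Java sketch of how such a fluent grammar can be built follows. The class and enum names here are illustrative, not the framework's real ones, and I assume for the example that a Speed request above MaxSpeed is clamped, which is the behaviour the test script checks:

```java
import java.util.EnumMap;
import java.util.Map;

// Illustrative sketch of the script's fluent grammar: every method
// returns `this`, so calls chain exactly as in the BeanShell script.
public class Equipment {
    public enum Prop { Speed, MaxSpeed, Acceleration }

    private final Map<Prop, Double> props = new EnumMap<>(Prop.class);
    private boolean running;

    public Equipment Start() { running = true; return this; }
    public Equipment Stop()  { running = false; return this; }

    public Equipment Set(Prop p, double value) {
        // Assumption for this sketch: a Speed above MaxSpeed is clamped.
        if (p == Prop.Speed) {
            value = Math.min(value, props.getOrDefault(Prop.MaxSpeed, Double.MAX_VALUE));
        }
        props.put(p, value);
        return this;
    }

    public Equipment Wait(int seconds) {
        // Stubbed here; the real framework waits for ramps to complete.
        return this;
    }

    public Equipment TestEqual(Prop p, double expected) {
        double actual = get(p);
        System.out.println(p + ": " + (actual == expected ? "OK" : "FAIL")
                + " (expected " + expected + ", got " + actual + ")");
        return this;
    }

    public double get(Prop p) { return props.getOrDefault(p, 0.0); }

    public static void main(String[] args) {
        new Equipment().Start()
            .Set(Prop.MaxSpeed, 8).Set(Prop.Speed, 5).TestEqual(Prop.Speed, 5)
            .Set(Prop.Speed, 10).TestEqual(Prop.Speed, 8)   // clamped to MaxSpeed
            .Stop();
    }
}
```

The chaining style is what makes the scripts readable for test engineers: each line of a script reads as a sequence of actions and checks on the machine.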
This script performs the following operations:
Below, we see the first page of the test suite list, comprising over 50 test cases developed by our test engineers.
Each test case has its script, which can be launched individually. Running the script above gives the following result:
The same result is also saved in a file. The environment allows for launching the entire test suite.
Recently, I developed an extension to integrate actions performed on the display into the tests. The need arose from a series of issues related to the uninitialized state of a third-party hardware-software component that is crucial for a precise analysis of walking and running. The problem occurred very rarely: we had to start the exercise about a hundred times to see it, and each attempt required at least a minute of exercise before we could check whether the device was working correctly. Clearly, verifying the fix manually was too expensive, time-consuming, and extremely tedious. The verification test for this issue was conducted using the script below:
// Test Body
("for( int i = 1; i < 500; ++ i )
{ display.Touch(START);
eq.Set( Speed, 10 ).Wait(30);
optogait.CheckAvailable() ;
display.Touch(STOP).Wait(3).
Touch(STOP_POPUP).Wait(3).Touch(HOME).Wait(3);
}", "for loop");
This script repeats the following operations 500 times:
Running the script gives the following result (I only show the first two iterations):
script: "for loop"
executing script body
touch event at 600 720
setting Speed to 10.0
waiting 30 second[s]
optogait is available
touch event at 600 720
waiting 3 second[s]
touch event at 600 600
waiting 3 second[s]
touch event at 100 720
waiting 3 second[s]
touch event at 600 720
setting Speed to 10.0
waiting 30 second[s]
optogait is available
touch event at 600 720
waiting 3 second[s]
touch event at 600 600
waiting 3 second[s]
touch event at 100 720
waiting 3 second[s]
I believe it is interesting to take a look at the code of the TestEngine class, where, in the executeSingleTest method, I specialized the BeanShell interpreter to understand the concepts of the mini-language for test automation scripts (e.g., "eq", "Speed", "MaxSpeed", "Acceleration", "TestEqual", "Wait", "Start", "Stop"). Once the interpreter has been taught the grammar, I call its eval method to execute the script. See Snippet 1.
Following that, I've included a redefinition of the executeSingleTest method that also handles the display and the optogait device for gait and running analysis. I extended the test automation mini-language to include concepts related to "display", "touch", and "optogait", and I listed several areas of the GUI relevant for interaction with training programs (START, STOP, STOP_POPUP, and HOME). See Snippet 2.
For those of you who are interested, I recommend visiting my GitHub link, where you can download the source code of the test automation framework. Obviously, I had to stub out all the parts that interact directly with the equipment. The sources under the "asm" and "bsh" packages are code I borrowed from the BeanShell interpreter, for which I have provided the link; this is only the portion of the original interpreter code needed for developing the framework.
The code under the "framework" package, instead, constitutes the test automation framework itself.
A special thanks to Carlo Pescio , with whom I collaborated in the implementation of this project.
I would like to conclude this article by debunking some myths about automated testing.
That's all about Regression Testing and Test Automation.
See you in the next episode.
I remind you of my newsletter "Sw Design & Clean Architecture": https://lnkd.in/eUzYBuEX where you can find my previous articles and where you can register, if you have not already done so, to be notified when I publish new articles.
Thanks for reading my article; I hope you found the topic useful.
Feel free to leave any feedback; it is very much appreciated.
Thanks again.
Stefano
References:
1. Robert Martin, "Clean Architecture", Prentice Hall (September 2017), pp. 40-45.
2. Ian Sommerville, "Software Engineering", Ninth Edition, Addison Wesley (2011), pp. 205-233, chapter 8.
3. Marnie L. Hutcheson, "Software Testing Fundamentals: Methods and Metrics", John Wiley & Sons (2003), chapter 8.
4. Elfriede Dustin, "Effective Software Testing: 50 Specific Ways to Improve Your Testing", Addison Wesley (2002), chapters 7-8.