"Regression Testing and Test Automation": building a mini-framework in Java for test automation.

We have reached the fourth episode of our journey into the world of software testing. This article will focus on “Regression Testing” and “Test Automation”. We will also look at a mini framework I built to automate the testing of some parts of the software of a treadmill for cardiovascular exercise.

So let's start with a regression testing definition.

"Regression testing" is selective retesting intended to detect defects introduced during modification of a system or a system component, in order to verify that the modifications have not caused unintended adverse effects and that the modified system or component still meets its specified requirements (IEEE definition). In simpler terms, regression testing tries to answer the question: "Does it still work after fixing it?", "Does it still work after adding this feature?", or even "Are we sure that the programmer, in adding this new functionality, did not break other parts of the system?"

For effective regression testing, it is crucial to plan it in parallel with the software development schedule. This ensures that testing keeps pace with the ongoing modifications and helps in maintaining the integrity of the software.

Do you remember the software development life cycle picture that we saw in the second article of this series (5.)? For every change or addition of new features to existing software, the bug trend shows peaks. These peaks need to be smoothed out through the collaborative effort of testers and developers during regression testing.

Failure Curve for Software.

So regression testing provides overall stability to the software application by keeping a check on the functionality of existing features. It becomes an inevitable step after each new code change, ensuring that the system can withstand extensions and changes.

In the realm of software, particularly when there is an architectural deficit, the smallest change to the code can cause a domino effect capable of altering the core functionality of a product. Therefore, regression testing plays a key role in investigating product architecture, which is critical to determine the root cause of both product success and failure.

In this regard, a famous dilemma posed by the great computer scientist Robert Martin comes to mind: "THE GREATER VALUE".

The dilemma goes like this (1.):

Function or architecture? Which of these two provides the greater value? Is it more important for the software system to work, or is it more important for the software system to be easy to change?

If you ask the business managers, they’ll often say that it’s more important for the software system to work. Developers, in turn, often go along with this attitude. But it’s the wrong attitude. I can prove that it is wrong with the simple logical tool of examining the extremes.

  • If you give me a program that works perfectly but is impossible to change, then it won’t work when the requirements change, and I won’t be able to make it work. Therefore the program will become useless.
  • If you give me a program that does not work but is easy to change, then I can make it work, and keep it working as requirements change. Therefore the program will remain continually useful.

You may not find this argument convincing. After all, there’s no such thing as a program that is impossible to change. However, there are systems that are practically impossible to change, because the cost of change exceeds the benefit of change. Many systems reach that point in some of their features or configurations.

If you ask the business managers if they want to be able to make changes, they’ll say that of course they do, but may then qualify their answer by noting that the current functionality is more important than any later flexibility. In contrast, if the business managers ask you for a change, and your estimated costs for that change are unsustainably high, the business managers will likely be furious that you allowed the system to get to the point where the change was impractical.

(by Robert Martin “Clean Architecture” page 42).

Sorry, perhaps with these considerations by Robert Martin I have gone a little off the main topic, but being an engineer who has been involved in software development for more than thirty years, I am way too sensitive to discussions on software structure to resist the temptation to fall into it.

So now let's go back to our main topic and see how to perform regression testing.

Consider an example where a software development company is working on releasing a new "Health and Fitness" app for mobile devices. The primary requirement is to release their first build with only the core features. Before product release, a black-box test is conducted with 1000 test cases to verify the basic functionalities. The initial build is ready to hit the market if it passes the tests successfully.

However, with the success of the first product, the business team comes back with a requirement to add diet management and other new premium features. The product team develops those and adds them to the existing app, but with the addition of new features, a regression test is required. Hence, they write 100 new test cases to verify the functionality of those new features. However, they will also have to rerun the 1000 old test cases already conducted to ensure that essential functions haven't been broken.

In the software world today there is a strong push towards agility. There is an emphasis on adopting an iterative process, push new code often and break things if necessary. Regression testing ensures that with frequent pushes, developers do not break things that already work. The regression testing example shown below emphasizes its importance.

Regression Test (releases over time)

In the image we see that when we add the F2 function to V1 version we must also repeat the test of F1 function before releasing V2 version, when we add F3 function to V2 version we must repeat the test of F1 and F2 functions before releasing V3 version, and so on.

Regression testing works this way and may sound tedious, but it is an effective method of finding intermittent issues that don't always appear.


Regression testing strategies vary for each organization (2.). However, some fundamental steps are always the same:

  1. The first step is to "identify the changes" in the source code. In this step, the developer must provide the tester, who then designs the regression test, with all the information about the changes. It is necessary to detect the modified components and the impact the changes have on the existing essential characteristics of the product.
  2. The second step is to "select test cases to rerun": the test cases to rerun are selected based on the modified modules of the source code, so it is not necessary to rerun the entire test suite. The new feature may make some tests in the suite obsolete, so in this phase the test cases are classified into reusable and obsolete ones; the reusable ones are selected for regression testing, while the obsolete ones are excluded from future test cycles.
  3. If some automated tests are also available, we need a phase in which we "separate the test cases into automated and manual". Automated test cases run faster than manual ones handled by humans, and their test code can be reused multiple times. Hence, categorizing test cases is a crucial step in regression testing. We'll talk about automated testing later.
  4. The next step is to "prioritize test cases". In this step, the collected test cases are ranked by importance: high priority goes to test cases that cover essential features, lower priority to test cases that cover features which are not essential but still have a certain importance.
  5. In the final step, "each test case is executed" at an appropriate, scheduled time to verify whether the product performs as expected. Here, automated or manual testing can be employed depending on the needs and requirements. Automated tools can help speed up test case execution.
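As a sketch, steps 2 and 4 above (discarding obsolete cases and ranking the reusable ones) can be modeled in a few lines of Java. The class and priority scheme below are illustrative assumptions of mine, not part of any standard testing library:

```java
import java.util.Comparator;
import java.util.List;

// Illustrative model of steps 2 and 4: drop obsolete test cases,
// then rank the reusable ones so essential features run first.
public class RegressionSelection {

    public enum Priority { HIGH, MEDIUM, LOW }

    public record TestCase(String name, boolean obsolete, Priority priority) {}

    public static List<TestCase> selectAndPrioritize(List<TestCase> suite) {
        return suite.stream()
                .filter(tc -> !tc.obsolete())                     // step 2: keep reusable cases only
                .sorted(Comparator.comparing(TestCase::priority)) // step 4: HIGH before MEDIUM before LOW
                .toList();
    }

    public static void main(String[] args) {
        List<TestCase> suite = List.of(
                new TestCase("core speed control", false, Priority.HIGH),
                new TestCase("legacy report", true, Priority.LOW),   // obsolete: dropped
                new TestCase("diet management", false, Priority.MEDIUM));
        selectAndPrioritize(suite).forEach(tc -> System.out.println(tc.name()));
    }
}
```

In a real organization the priority and obsolescence flags would come from the impact analysis done with the developers, not be hard-coded as here.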

Regression testing involves the implementation of three basic techniques: retesting everything, selecting specific regression test cases, or prioritizing test cases.

Let’s understand each technique:

  • Retesting Everything: This method reruns every existing test suite when multiple important changes or significant restructurings occur in a software application. This technique is key to identifying and resolving all bugs; however, it is time-consuming and resource-intensive, so it is applied selectively depending on context. For instance, full regression is the preferred choice when an application is moved to a new platform or when architectural restructuring takes place.
  • Selection of Regression Test Cases: In this technique, we have the flexibility to choose the areas for regression testing. Relevant sections are selected based on the extent of impact changes may have on the application. With this approach, a limited set of test cases is applied to the related areas, reducing the effort, resources, and time required for regression testing.
  • Prioritization of Test Cases: This regression testing technique allows us to choose the test cases that should receive the first priority in the testing process. The test cases are selected based on several factors, such as the most commonly used functionalities, feature failure rates, and the business impact of certain features. Additionally, newly incorporated functionalities and customer-centric features are considered the highest-priority test cases.

Regression testing : basic techniques.

Understanding and implementing these regression testing techniques are vital steps in ensuring the continued stability and reliability of a software application. Each method offers unique advantages based on the specific requirements and changes introduced, providing flexibility and efficiency in the testing process.

As we have already mentioned regression testing can significantly benefit from the incorporation of test automation. Automating the testing process improves efficiency, accelerates the identification of potential issues, and ensures a more complete evaluation of the software's functionality after changes.

The automation of repetitive test cases allows for faster feedback on the impact of changes, contributing to a more agile and reliable software development lifecycle.

Test automation includes two distinct meanings:

  • Automation of Test Generation.
  • Automation of Test Execution.

The automatic generation of test cases implies having a complete and structured requirements specification documented in use cases; we will address this topic in the next article.

Instead the automation of test execution is the topic of this article, so let's start by looking for an answer to the following question:

Why opt for automation?

There are several persuasive reasons (3.), (4.). Automation allows for the easy repetition of tests, making it extremely useful for tasks like regression testing and stress testing. It becomes particularly advantageous when testing involves highly repetitive tasks with different values but the same procedure. Test cases, once created, hold significant value, as they are reusable across multiple products. Additionally, automation proves beneficial when testing time is limited, as it efficiently addresses the scenarios mentioned above.

Furthermore, test automation helps create a continuous integration environment where, after each code commit, the test suite automatically runs against the new build. Using continuous integration and continuous delivery tools like Jenkins, we can create jobs that run test cases after a build is deployed and send the test results to stakeholders.

However, achieving complete automated execution is a complex process. It involves putting the system into the proper state, providing inputs, running the test case, collecting results, and ideally, verifying the results. This complexity represents a challenge in the automation process.

Despite its advantages, automation comes with associated costs. The cost of automation could exceed that of manual testing. Moreover, utilizing automation tools requires a non-trivial understanding of the tools themselves. Automated tests also have a finite lifetime, sometimes relatively short.

Observations in the field of test automation reveal that automated tests are 3 to 30 times more expensive to create and maintain than manual ones, and they can break easily when the system under test undergoes changes.

Sometimes it is better to develop a specialized framework for automating tests of your product rather than using general tools like Selenium, Ranorex Studio, etc.

As an example of Test Automation in software testing let's consider a scenario where test automation is applied to the testing of a treadmill for cardiovascular exercise. In this example, we'll focus on key functionalities like speed control, incline adjustments, and safety features.

1. Speed Control:

  • Manual Testing: Testers manually adjust the treadmill speed using the control panel and observe if the belt moves at the expected pace.
  • Automated Testing: Automation engineers create a script that programmatically adjusts the treadmill speed through the control interface. The script verifies if the actual speed matches the set speed within acceptable tolerance.
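A speed-control check of this kind could be sketched as follows; the TreadmillController below is a stub I'm inventing purely for illustration, standing in for the real control interface and its speed feedback:

```java
// Hypothetical sketch of an automated speed-control check.
// TreadmillController is a stub: a real test would send commands to
// the machine's control interface and read back the measured belt speed.
public class SpeedControlCheck {

    public static class TreadmillController {
        private double beltSpeed;                        // km/h; stub in place of real hardware
        public void setSpeed(double target)  { beltSpeed = target; }
        public double readActualSpeed()      { return beltSpeed; }
    }

    // Core of the automated test: set a target speed and verify that the
    // measured speed matches it within an acceptable tolerance.
    public static boolean speedWithinTolerance(TreadmillController tm,
                                               double target, double tolerance) {
        tm.setSpeed(target);
        return Math.abs(tm.readActualSpeed() - target) <= tolerance;
    }

    public static void main(String[] args) {
        TreadmillController tm = new TreadmillController();
        System.out.println(speedWithinTolerance(tm, 8.0, 0.2));
    }
}
```

Against real hardware the check would also need to wait for the acceleration ramp to complete before sampling the speed.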

2. Incline Adjustments:

  • Manual Testing: Testers manually adjust the incline settings and observe changes in the treadmill's slope.
  • Automated Testing: Automation scripts simulate incline adjustments and validate whether the treadmill responds correctly by altering the slope according to the input.

3. Emergency Stop Functionality:

  • Manual Testing: Testers initiate the emergency stop button to ensure the treadmill halts immediately.
  • Automated Testing: Automation scripts trigger the emergency stop function and verify that the treadmill ceases operation promptly, ensuring user safety.

4. Heart Rate Monitoring:

  • Manual Testing: Testers manually check heart rate sensors on the treadmill handles and compare the readings with a pulse monitor.
  • Automated Testing: Automated scripts simulate different heart rate scenarios, validating that the treadmill accurately captures and displays heart rate data.

5. Pre-set Workout Programs:

  • Manual Testing: Testers manually select and run various pre-set workout programs to ensure they adjust speed and incline as expected.
  • Automated Testing: Automation scripts execute pre-set workout programs and verify that the treadmill follows the programmed sequences correctly.


These are the Benefits of Test Automation for Treadmill Testing:

  1. Efficiency: Automated tests can rapidly cycle through various speed, incline, and program scenarios, providing faster feedback than manual testing.
  2. Repeatability: Automated scripts ensure consistent and repeatable test scenarios, reducing the chances of overlooking defects in different test cycles.
  3. Regression Testing: Automated tests can be easily rerun whenever there's a software update or a change in the treadmill's firmware, facilitating efficient regression testing.
  4. Safety Validation: Automated testing helps validate critical safety features, such as emergency stops, ensuring the treadmill complies with safety standards.

By incorporating test automation in treadmill testing, manufacturers can enhance the reliability, safety, and overall quality of their cardiovascular exercise equipment.


Let's now explore how to automate the testing of a treadmill. A couple of years ago, I developed a framework to automate the testing of one of the most critical software components of a treadmill responsible for acceleration/deceleration ramps, speeds, and safety mechanisms such as STOP and emergency features.

To provide context, let's delve into the general architecture of treadmill software. The software is essentially divided into two parts:

  1. High-Level Part: This runs on the main processor, where we've developed a custom version of the Android AOSP platform.
  2. Low-Level and Real-Time Part: This is executed by a microcontroller running a real-time operating system (FreeRTOS). It manages the motor inverter board and other real-time devices for gait and running analysis, and also various sensors.

The two parts are connected via a fast USB channel, enabling command transmission from the Android part to the real-time part and event transmission in the reverse direction.

As you can understand, the real-time part is the most delicate and challenging to test, since it cannot be stimulated directly through the GUI. Imagine what happens when changes are made to the motor inverter component or when a new inverter needs to be supported: it becomes terrifying, requiring the retesting of all behaviors related to the motor and machine safety. Covering all the behaviors of this software part solely through the user interface and physical devices like the STOP and emergency buttons or joysticks is very complex.

So, together with the test engineers, we decided to list all the test cases to be run and then transform them into easily implementable scripts. Once the testers agreed on the script grammar, I developed the framework using the open-source BeanShell interpreter. BeanShell is a Java source interpreter with scripting language features.

I adapted the interpreter to the agreed commands and created the automatic execution engine for the test suite. Below, we can see a script for a specific test case in the test suite.

// Setup
eq.Start().Wait(1);

// Test Body
eq.Set( MaxSpeed, 8 ).Set( Speed, 5 ).
Wait(30).TestEqual( Speed, 5 ).Set( Speed, 10 ).
Wait(30).TestEqual( Speed, 8 ).Set( Speed, 500 ).
Wait(15).TestEqual( Speed, 8 ).Set(Acceleration, 1).
Set( Speed, 3).Wait(5).TestEqual( Speed, 3 );

// Teardown
eq.Stop().Wait(5);
        

The script is divided into an initial part, a body, and a final part. The initial part (setup) is used to set up the execution environment under the conditions needed for the test. The body is the actual test. The final part (teardown) is used to restore the environment to normal conditions so that subsequent tests can be executed.

Let's analyze the script's grammar. Through "eq," we access the machine functionalities. "Set" allows us to configure properties such as Speed, MaxSpeed, and Acceleration, etc. "TestEqual" enables us to evaluate predicates on the properties. "Wait" is used for waiting, and "Start" and "Stop" are employed to activate or deactivate the motor.
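This fluent grammar can be approximated in plain Java through method chaining. The sketch below is a self-contained stand-in I wrote for illustration: the speed model simply clamps targets to MaxSpeed, which is consistent with the behavior the script expects, but it is not the real firmware:

```java
import java.util.EnumMap;
import java.util.Map;

// Self-contained approximation of the script grammar: every method
// returns `this`, so calls chain exactly as in the BeanShell scripts.
public class Equipment {

    public enum Property { Speed, MaxSpeed, Acceleration }

    private final Map<Property, Double> props = new EnumMap<>(Property.class);
    private boolean running;

    public Equipment Start() { running = true;  return this; }
    public Equipment Stop()  { running = false; return this; }

    public Equipment Set(Property p, double value) {
        // Stub firmware rule: a speed target above MaxSpeed is clamped,
        // so out-of-range commands leave the speed at the threshold.
        if (p == Property.Speed && props.containsKey(Property.MaxSpeed)) {
            value = Math.min(value, props.get(Property.MaxSpeed));
        }
        props.put(p, value);
        return this;
    }

    public Equipment Wait(int seconds) {
        return this; // stub: the real framework blocks for `seconds`
    }

    public Equipment TestEqual(Property p, double expected) {
        double actual = props.getOrDefault(p, 0.0);
        if (actual != expected)
            throw new AssertionError(p + ": expected " + expected + ", got " + actual);
        return this;
    }

    public static void main(String[] args) {
        // The test case from the article, expressed against the stub.
        Equipment eq = new Equipment();
        eq.Start().Wait(1);
        eq.Set(Property.MaxSpeed, 8).Set(Property.Speed, 5)
          .Wait(30).TestEqual(Property.Speed, 5).Set(Property.Speed, 10)
          .Wait(30).TestEqual(Property.Speed, 8).Set(Property.Speed, 500)
          .Wait(15).TestEqual(Property.Speed, 8).Set(Property.Acceleration, 1)
          .Set(Property.Speed, 3).Wait(5).TestEqual(Property.Speed, 3);
        eq.Stop().Wait(5);
    }
}
```

The chaining style is what keeps the scripts readable for testers: each line reads as a sequence of machine actions and checks.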

This script performs the following operations:

  • Activates the motor.
  • Waits for 1 second.
  • Sets the lowest acceleration value (0.17 Km/h/sec) for gait rehabilitation in post-stroke patients.
  • Sets the maximum speed threshold to 8 Km/h.
  • Sets the target speed to 5 Km/h (with the previously set low acceleration, it takes approximately 27 seconds for the motor control to reach 5 Km/h).
  • Waits for 30 seconds.
  • Checks that the speed has reached the target value of 5 Km/h.
  • Sets a target speed of 10 Km/h.
  • Waits for 30 seconds.
  • Checks that the speed has reached the target value of 8 Km/h.
  • Sets a target speed value out of range (500 Km/h).
  • Waits for 15 seconds.
  • Verifies that the speed has not changed; the command must be ignored by the inverter firmware.
  • Sets the acceleration value to 1 Km/h/sec.
  • Sets the target speed to 3 Km/h.
  • Waits for 5 seconds.
  • Checks that the current speed has reduced to the set target.
  • Deactivates the motor.


Below, we see the first page of the test suite list, comprising over 50 test cases developed by our test engineers.

Each test case has its script, which can be launched individually. Running the script above gives the following result:

The same result is also saved in a file. The environment allows for launching the entire test suite.

Recently, I've developed an extension to integrate actions performed on the display into the test. This need arose from a series of issues related to the uninitialized state of a third-party hardware-software component crucial for a precise analysis of walking and running. The problem occurred very rarely: we had to start the exercise about a hundred times to see it, and we had to do at least a minute of exercise before checking whether the device was working correctly. Clearly, manually verifying the problem resolution was too expensive, time-consuming, and extremely tedious. The verification test for this issue was conducted using the script below:

// Test Body: script "for loop"
for( int i = 1; i < 500; ++i )
{
    display.Touch(START);
    eq.Set( Speed, 10 ).Wait(30);
    optogait.CheckAvailable();
    display.Touch(STOP).Wait(3).
        Touch(STOP_POPUP).Wait(3).Touch(HOME).Wait(3);
}

This script repeats the following operations 500 times:

  • Generates a touch-pressed event at the coordinates of the START button, initiating the exercise at the minimum speed and with an acceleration of 1 Km/h/sec.
  • Sets the target speed to 10 Km/h.
  • Waits for 30 seconds.
  • Checks that the Optogait device has activated (if it doesn't activate, an error has occurred).
  • Generates a touch-pressed event at the coordinates of the STOP button, prompting a confirmation popup for exercise termination.
  • Waits for 3 seconds.
  • Generates a touch-pressed event at the coordinates of the popup's confirmation button (STOP_POPUP), concluding the exercise.
  • Waits for 3 seconds.
  • Generates a touch-pressed event at the coordinates of the HOME button, returning to the main screen.
  • Waits for 3 seconds, and the cycle repeats.


Running the script gives the following result (I only show the first two iterations):

script: "for loop"
executing script body

touch event at 600 720
setting Speed to 10.0
waiting 30 second[s]
optogait is available
touch event at 600 720
waiting 3 second[s]
touch event at 600 600
waiting 3 second[s]
touch event at 100 720
waiting 3 second[s]

touch event at 600 720
setting Speed to 10.0
waiting 30 second[s]
optogait is available
touch event at 600 720
waiting 3 second[s]
touch event at 600 600
waiting 3 second[s]
touch event at 100 720
waiting 3 second[s]        
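The display and optogait vocabulary can be mimicked with the same chaining style as the rest of the mini-language. In this sketch, Display and Optogait are stubs I'm substituting for the real display driver and gait-analysis device, and the loop runs only twice instead of 500 times:

```java
// Illustrative stand-ins for the extended mini-language vocabulary:
// a Display that accepts touch events at named GUI areas, and an
// Optogait stub that reports whether the device initialized correctly.
public class ExtendedScriptDemo {

    public enum Button { START, STOP, STOP_POPUP, HOME }

    public static class Display {
        public Display Touch(Button b) {
            System.out.println("touch event: " + b);
            return this;
        }
        public Display Wait(int seconds) { return this; } // stub: no real delay
    }

    public static class Optogait {
        private boolean available = true;                 // stub: real device state
        public void CheckAvailable() {
            if (!available) throw new AssertionError("optogait not available");
        }
    }

    public static void main(String[] args) {
        Display display = new Display();
        Optogait optogait = new Optogait();
        for (int i = 1; i <= 2; ++i) {                    // the real script loops ~500 times
            display.Touch(Button.START);
            optogait.CheckAvailable();
            display.Touch(Button.STOP).Wait(3)
                   .Touch(Button.STOP_POPUP).Wait(3)
                   .Touch(Button.HOME).Wait(3);
        }
    }
}
```

In the real framework the touch events are injected at pixel coordinates, as the output above shows; the named buttons here are just a readable shorthand.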

I believe it would be interesting to take a look at the code of the TestEngine class, where, in the executeSingleTest method, I specialized the BeanShell interpreter to understand the concepts of the mini-language for automation test scripts (e.g., "eq", "Speed", "MaxSpeed", "Acceleration", "TestEqual", "Wait", "Start", "Stop"). Once the interpreter has been taught the grammar, I call the eval method to execute the script. See Snippet 1.
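As a rough sketch of that pattern, the method below seeds an interpreter with the mini-language vocabulary and then evaluates the script. FakeInterpreter is a stand-in I wrote so the example is self-contained; the real framework drives bsh.Interpreter's set() and eval() methods in the same way:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of specializing an interpreter for the test mini-language.
// FakeInterpreter only records bindings and pretends to evaluate; the
// real BeanShell interpreter parses and executes the script text.
public class TestEngineSketch {

    public static class FakeInterpreter {
        public final Map<String, Object> globals = new HashMap<>();
        public void set(String name, Object value) { globals.put(name, value); }
        public void eval(String script) { /* a real interpreter runs the script here */ }
    }

    public final FakeInterpreter interp = new FakeInterpreter();

    // Teach the interpreter the grammar, then hand it the script body.
    public void executeSingleTest(String script, String name) {
        interp.set("eq", new Object());        // facade over the machine functionalities
        interp.set("Speed", "Speed");          // property identifiers used by Set/TestEqual
        interp.set("MaxSpeed", "MaxSpeed");
        interp.set("Acceleration", "Acceleration");
        System.out.println("script: \"" + name + "\"");
        interp.eval(script);
    }

    public static void main(String[] args) {
        new TestEngineSketch().executeSingleTest("eq.Start().Wait(1);", "smoke test");
    }
}
```

The key design choice is that the scripts stay plain text: testers edit them without recompiling, and the engine binds the vocabulary at run time before each evaluation.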

Following that, I've included the redefinition of the executeSingleTest method to also handle the display and the optogait device for gait and running analysis. I extended the automation test mini-language to include concepts related to "display", "touch", and "optogait". I've also listed several areas of the GUI relevant for interaction with training programs (START, STOP, STOP_POPUP, and HOME). See Snippet 2.


Snippet 1.


Snippet 2.

For those of you who are interested, I recommend visiting my GitHub link, where you can download the source code of the test automation framework. Obviously, I had to stub all the parts that interact directly with the equipment. The sources under the "asm" and "bsh" packages are code I borrowed from the "BeanShell" interpreter, for which I have provided the link. This comprises only a portion of the original interpreter code, specifically the part needed to develop the framework.

Instead, the code under the "framework" package constitutes the code of the testing automation framework.

A special thanks to Carlo Pescio , with whom I collaborated in the implementation of this project.

I would like to conclude this article by debunking some myths about automated testing.

  • "100% Automation is Possible": Except for some very specific applications, achieving 100% automation is not feasible. Exploratory tests and usability tests are examples of test cases that cannot be automated.
  • "Automation will Replace Manual Testing Jobs": While it's true that the rise of automation testing and various tools has led to a shift towards the need for full-stack testers or those with a dual role, working on both manual and automated testing, automation will never completely eliminate the need for manual testing.
  • "Developers Make Better Automation Testers": Although a developer may have a slight advantage in coding, a tester will still be able to think from a testing perspective and try to create more robust test scripts with multi-gate verification.
  • "Automation is Expensive": If done correctly, automation can reduce overall testing effort and resource requirements, thus saving project costs in the long run.

That's all about Regression Testing and Test Automation.

See you in the next episode!


Let me remind you of my newsletter "Sw Design & Clean Architecture": https://lnkd.in/eUzYBuEX where you can find my previous articles and where you can register, if you have not already done so, to be notified when I publish new articles.

Thanks for reading my article, and I hope you have found the topic useful.

Feel free to leave any feedback.

Your feedback is very appreciated.

Thanks again.

Stefano


References:

1. Robert C. Martin, "Clean Architecture", Prentice Hall (September 2017), pp. 40-45.

2. Ian Sommerville, "Software Engineering", Ninth Edition, Addison Wesley (2011), pp. 205-233, chapter 8.

3. Marnie L. Hutcheson, "Software Testing Fundamentals: Methods and Metrics", John Wiley & Sons (2003), chapter 8.

4. Elfriede Dustin, "Effective Software Testing: 50 Specific Ways to Improve Your Testing", Addison Wesley (2002), chapters 7-8.

5. S. Santilli: "https://www.dhirubhai.net/pulse/unveiling-effectiveness-black-box-testing-software-quality-santilli-nrv8f/".
