Automated Testing and Simulation
By Kim H. Pries and Jon M. Quigley
Simulation generally refers to a model of a process or function; for example, we can simulate the general behavior of a manufacturing process, a motor vehicle, or any other object for which we have knowledge about inputs, outputs, and behavior. Wedding simulation with automated testing allows test organizations to achieve benefits such as increased testing speed (throughput), increased test coverage for both hardware and software, and the ability to test before hardware and software become available.
Both simulation and testing have specific goals. For simulation, we want to facilitate requirements generation, uncover unknown design interactions and details, and reduce development cost by spending less time chasing unproductive solutions and by building fewer physical parts. Much of this activity supports testing by quantifying the requirements, which makes testing more productive. For testing, we want to achieve defect containment, reduced product warranty cost, and some level of statistical indication of design readiness.
Automated testing
Automated testing involves the following components:
- The use of scripts to drive tests
- Hardware to support the scripts
- The use of other scripts to record results
- A theory or philosophy of testing (attitude toward risk and approach to regression testing)
- The ability to detect faults
In short, automated testing is nearly always a hardware and software proposition.
Scripting
The automated test team can use scripting languages such as Ruby, Python, Perl, or other languages, so long as they have toolboxes to help drive the hardware side of the process. We have also used Visual Basic for Applications driving a spreadsheet and hooked to one of Scilab, MATLAB, or LabVIEW as a test and documentation driver. The bottom line is that the driver must be sophisticated enough to run the tests unaided, and personnel must be appropriately skilled to design, test, and execute the code.
We can record the results of our testing using the same scripting language we used to execute the tests. These results are recorded objectively by measuring predefined outputs and known failure conditions against requirements. A sophisticated test would also account for previously unknown failure conditions by flagging any behavior outside an expected range as a fault. The scripting language also writes results into the test plan, from which the report is generated, and the script then publishes the report.
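The sketch below illustrates the idea in Python: a script steps through a small stimulus list, compares each measurement against its expected range, and writes a pass/fail report. The instrument read is a stand-in (a real rig would query a meter or data-acquisition card), and all names, limits, and file paths are illustrative assumptions rather than features of any particular tool.

```python
# Minimal sketch of a script-driven test runner (hypothetical instrument stub).
import csv
import random
import time

def read_voltage():
    """Stand-in for a real instrument read; a real rig would query a DMM here."""
    return 11.8 + random.uniform(-0.5, 0.5)

# Each step: (name, stimulus description, expected low, expected high)
TEST_STEPS = [
    ("idle_voltage", "all loads off", 11.5, 12.5),
    ("load_voltage", "headlamps on", 11.0, 12.5),
]

def run_suite(report_path="test_report.csv"):
    with open(report_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["step", "stimulus", "measured", "low", "high", "verdict"])
        for name, stimulus, low, high in TEST_STEPS:
            measured = read_voltage()            # drive hardware, then measure
            verdict = "PASS" if low <= measured <= high else "FAIL"
            writer.writerow([name, stimulus, f"{measured:.2f}", low, high, verdict])
            time.sleep(0.1)                      # pacing between steps

if __name__ == "__main__":
    run_suite()
```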
Hardware
While implementing automated testing, we use a variety of tools:
- Cameras for visual indication
- Mass storage media for data link, analog, and digital information
- Scopes and meters
- Real system hardware
- Actual product hardware
- Mechanical actuators
- Lighting
- Temperature/humidity test boxes
A shopping list like the previous one can make the hardware portion of automated testing expensive. Hence, we must always ensure that automated testing provides the untiring speed and correctness that we cannot achieve with human labor. Comparing the labor hours required to test against the material cost of the test equipment provides some indication of the return on investment. Recurring human testing costs are not inconsequential; moreover, the repeatability of human effort does not compare with that of automation.
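As a back-of-the-envelope illustration of that comparison, the snippet below computes a break-even point from assumed figures for fixture cost and cost per test run; the numbers are placeholders, not benchmarks.

```python
# Illustrative break-even estimate (all figures are assumptions, not benchmarks).
fixture_cost = 50_000.0      # one-time automated-rig investment
manual_cost_per_run = 400.0  # labor cost of one manual regression pass
auto_cost_per_run = 40.0     # upkeep/energy cost of one automated pass

breakeven_runs = fixture_cost / (manual_cost_per_run - auto_cost_per_run)
print(f"Automation pays for itself after about {breakeven_runs:.0f} regression runs")
```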
Testing theory
It is wise to establish a testing theory or approach to unify the test plans and provide a rationale for the architecture of the test suites. In general, we would expect to see an element of compliance testing, which is executed against written and derived requirements and consists of routine or expected actions. An extension of this type of testing is combinatorial testing, wherein all inputs receive stimulation and the expected response values are known. When properly designed, combinatorial testing may also elicit failures from factor interactions. Finally, we expect to see destructive testing (in the laboratory, but not in production), where we overstress the product beyond specification or design limits to characterize the design, failure, and destruction limits. Stopping at compliance testing is a less than optimal approach and can create the false belief that the product is capable.
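A minimal sketch of the combinatorial idea, assuming a handful of hypothetical input factors: the full-factorial stimulus set is enumerated with itertools.product, and each combination would then be applied by the test rig and checked against its expected response.

```python
# Sketch: enumerating a full-factorial stimulus set for combinatorial testing.
from itertools import product

# Hypothetical input factors and their levels
factors = {
    "ignition":   ["off", "accessory", "run"],
    "park_brake": ["set", "released"],
    "headlamps":  ["off", "on"],
}

names = list(factors)
for combo in product(*factors.values()):
    stimulus = dict(zip(names, combo))
    # Here the test rig would apply the stimulus and compare the response
    # against the expected value table for this combination.
    print(stimulus)
```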
Detecting faults
Automated test equipment must be able to detect faults or deviations from the expected response. We can accomplish this through clear identification of individual pass/fail criteria. In some cases, we may believe we have identified all failure modes; in other cases, anything that is not nominal should be flagged for review. To detect faults, the automated tester may need, among other capabilities, optical character recognition, calibration/movement detection against specification limits, sensing of signal limits, and color and illumination detection.
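One simple way to structure such fault detection is sketched below: a measurement is first checked against its nominal band, then against bands for known failure modes, and anything else is flagged for review. The signal, limits, and fault names are hypothetical.

```python
# Sketch: classify a measurement against spec limits and known failure signatures.
def classify(measured, nominal_low, nominal_high, known_faults):
    """known_faults maps a fault name to a (low, high) band that identifies it."""
    if nominal_low <= measured <= nominal_high:
        return "PASS"
    for fault_name, (low, high) in known_faults.items():
        if low <= measured <= high:
            return f"FAIL: {fault_name}"
    return "FLAGGED FOR REVIEW"   # off-nominal but not a recognized failure mode

# Hypothetical example: supply-voltage check
faults = {"short_to_ground": (0.0, 0.5), "over_voltage": (16.0, 99.0)}
print(classify(12.1, 11.0, 14.5, faults))   # PASS
print(classify(0.2, 11.0, 14.5, faults))    # FAIL: short_to_ground
print(classify(15.2, 11.0, 14.5, faults))   # FLAGGED FOR REVIEW
```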
Objectives of simulation
Simulations can be performed to analyze the behavior of a system. Not all simulations are used for automated testing. Regardless, general objectives consist of:
- Evaluation of various design concepts quickly without material investment
- Demonstration of system integration (peer reviews and customer feedback)
- Refinement of design concepts by predicting effects of performance parameters on system behavior
- Verification that the simulated object performs correctly across a wide range of nominal and fault scenarios
- Identification of key variables that impact the system and realization of the system implications, particularly with respect to potential interactions
- Reliability consequences
- Theoretical simulation results as a reference for practical verification
Simulation levels
Simulation is not simply hardware behavior reproduced in software. Often, we have various mixes of hardware and software. For example, constructive simulators are purely computational, with all elements, including the hardware, simulated on a computer. On the other hand, we can have virtual simulators, wherein part of the simulation runs in hardware and the remainder of the system or systems is simulated in pure software. Finally, we might use live simulation, which provides live, contrived exercises and is often used to stress system limits (e.g., aircraft and vehicle dynamics simulators).
The military use of live fire is a form of simulation, and it has analogues in other test environments. It requires a set of artificially contrived demands upon the system, which start at nominal and become progressively more severe. A civilian example would be vehicle stability testing through interaction with other vehicles and obstacles. In most cases, we are trying to get as close as possible to real-life conditions.
Simulation Activities
To begin to develop a simulation, we could go through the following process:
- Identify the simulation goals for our specific project
- Prepare for simulation by:
* Identifying parameters
* Modeling parameters
- Run the prototype simulation
- Perform the test (physical test to compare actual performance to simulation results)
- Gather data
* Compare test results with model parameters
* Identify new parameters needed
- Re-run simulation if needed and gather data again
- Analyze data
- Update models
* Determine if additional parameters are necessary
* Review the range of parameter values as a sanity check
- Design updates
Clearly, we may need several iterations before developing a simulator / simulation that provides sufficient verisimilitude to be valuable.
Figure 2 Simulation Activities
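A minimal sketch of the compare-and-iterate step in this flow, assuming we already have simulated and measured responses at the same test points: if any point disagrees with the physical test by more than a chosen tolerance, the model parameters need another pass. The data and tolerance are illustrative assumptions.

```python
# Sketch: comparing simulation output with physical test data to decide on iteration.
def needs_another_iteration(simulated, measured, tolerance=0.05):
    """Return True if any point differs from the measurement by more than the
    relative tolerance, meaning the model parameters need updating."""
    for sim, meas in zip(simulated, measured):
        if meas != 0 and abs(sim - meas) / abs(meas) > tolerance:
            return True
    return False

# Hypothetical data: simulated vs. measured response at the same test points
simulated = [1.00, 1.52, 2.10, 2.80]
measured  = [1.02, 1.49, 2.31, 2.75]
print(needs_another_iteration(simulated, measured))   # True: third point is off by ~9%
```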
Simulation models
We will discuss three kinds of simulation models:
- Discrete-event simulation
- Agent-based simulation
- Real-time simulation
Discrete-event simulation
With discrete-event simulation, events occur chronologically but not necessarily in real time. The simulator responds to events as if it were a state machine, with specific events triggering changes of state. This kind of simulator is often used for accelerated analyses of factors in the simulation model. It is also commonly used for automated testing. Examples of discrete-event simulators are the commercial manufacturing plant simulator ARENA and the open-source tool SimPy, which is Python-based.
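To make the concept concrete, here is a minimal SimPy sketch of parts queuing for a single machining station; the arrival and processing times are arbitrary illustrative values, not data from any real plant.

```python
# Minimal SimPy sketch: parts arriving at a single machining station (illustrative only).
import random
import simpy

def part(env, name, machine):
    """A part arrives, waits for the machine, is processed, and leaves."""
    arrive = env.now
    with machine.request() as req:
        yield req                                 # queue for the machine
        wait = env.now - arrive
        yield env.timeout(random.uniform(3, 6))   # processing time (minutes)
        print(f"{name}: waited {wait:.1f} min, done at t={env.now:.1f}")

def source(env, machine):
    """Generate parts with random inter-arrival times."""
    for i in range(5):
        env.process(part(env, f"part{i}", machine))
        yield env.timeout(random.expovariate(1 / 4.0))

random.seed(1)
env = simpy.Environment()
machine = simpy.Resource(env, capacity=1)
env.process(source(env, machine))
env.run(until=60)   # simulated minutes, not wall-clock time
```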
Agent-based simulation
With agent-based simulation, the focus is less on events and more on the behavior of agents, which function autonomously or semi-autonomously. We can achieve complex, even emergent, behavior from simple rules and relatively few components. Examples of agent-based approaches are ant colony optimization and swarm optimization. An open-source tool for agent-based simulation is the NetLogo language.
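NetLogo is the tool named above; purely to illustrate the idea in the same language as the other sketches, the following Python fragment shows agents that each follow one simple local rule (a small random step biased toward a goal) and nonetheless produce an aggregate clustering behavior.

```python
# Minimal agent-based sketch (illustration only; NetLogo is the tool named in the text).
import random

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(-10, 10), random.uniform(-10, 10)

    def step(self, goal=(0.0, 0.0)):
        # Rule: take a small random step, biased slightly toward the goal.
        self.x += 0.1 * (goal[0] - self.x) + random.uniform(-0.5, 0.5)
        self.y += 0.1 * (goal[1] - self.y) + random.uniform(-0.5, 0.5)

agents = [Agent() for _ in range(50)]
for t in range(100):
    for a in agents:
        a.step()

# The population clusters near the goal even though no agent coordinates with
# any other: collective behavior emerges from the individual rule.
mean_dist = sum((a.x**2 + a.y**2) ** 0.5 for a in agents) / len(agents)
print(f"mean distance to goal after 100 steps: {mean_dist:.2f}")
```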
Real-time simulation
Real-time simulators are often used for training purposes; running them faster than real time can sharpen participant reflexes. A typical example is a flight simulator used to train or refresh pilots, of which Microsoft Flight Simulator is a simple instance. In a real-time simulator, events occur in correspondence with actual conditions. Hardware-in-the-loop simulations often attempt to come as close to real-time simulation as possible.
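The essence of real-time simulation is that simulated time is paced against the wall clock. The sketch below paces a trivial plant model at a 10 ms step using ordinary sleep calls; this gives only soft real time, and a genuine hardware-in-the-loop rig would rely on a real-time operating system or dedicated timing hardware.

```python
# Sketch: pacing a simulation loop against the wall clock (soft real time only).
import time

STEP = 0.01           # 10 ms model step
state = 0.0

start = time.perf_counter()
for k in range(100):  # one simulated second
    state += STEP * (1.0 - state)          # trivial first-order plant model
    target = start + (k + 1) * STEP
    delay = target - time.perf_counter()
    if delay > 0:
        time.sleep(delay)                  # wait so simulated time tracks real time

print(f"final state {state:.3f} after {time.perf_counter() - start:.2f} s of wall-clock time")
```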
Simulation as test preparation
We can also use simulation as a tool for the preparation of all kinds of tests, including automated testing. This approach extends beyond requirement elicitation. The factors involved are the following:
- Set up test scenarios
- Set up test environment
- Identify key test measurables and instrumentation needs
- Human resource needs
- Material resource needs
- Test sequencing
- Identification of “passing” criteria
Conflict between Simulation and Test Results
Because simulators are instantiations of models, we will occasionally see a conflict between the abstraction of the model and the reality of actual product testing. On the testing side, we would review our test assets, our measurement methods, our tactics, and the operational environment. On the simulator side, we would review the model for accuracy and identify any missed parameters, as well as recheck the ranges of the previously identified parameters.
Simulation
We divide simulation per se into
- Scripting or programming to provide realistic stimuli for hardware/software
- Occasional special hardware
- Different levels
* Light simulation
* Medium simulation
* Heavy simulation
* Distributed simulation
Light Simulation
What we call ‘light’ simulation occurs in software when the software engineer ‘feeds’ data to new routines through the argument list, builds a wrapper to provide simulated data to those routines, and white box testing is permitted. White box testing occurs when the tester knows the internals of the function under test. In this instance, we would monitor the impact of the simulated information on performance within the various software routines. In black box testing, we know only the inputs and expected outputs, and we observe the behavioral changes of the outputs as the input values are changed. In this instance, we critique the simulation's effect on the outputs.
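A minimal sketch of the ‘light’ approach, with hypothetical function names: a wrapper feeds a sweep of simulated sensor values to the routine under test through its argument list and records the responses.

```python
# Sketch of 'light' simulation: a wrapper feeds simulated sensor values to the
# routine under test through its argument list (all names are hypothetical).
def coolant_warning(temperature_c, threshold_c=105.0):
    """Routine under test: returns True when the warning lamp should be lit."""
    return temperature_c >= threshold_c

def wrapper():
    """Feed a simulated temperature sweep and record the routine's response."""
    results = []
    for temp in range(90, 121, 5):           # simulated data replaces the real sensor
        results.append((temp, coolant_warning(float(temp))))
    return results

for temp, lamp in wrapper():
    print(f"{temp:>4} degC -> warning lamp {'ON' if lamp else 'off'}")
```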
Medium Simulation
With ‘medium’ simulation, we can have the hardware simulated or emulated. The stimuli will appear to come from outside the product code and/or hardware under test, and the interactions should be detectable. In some cases, we use the actual hardware for part of the test or study activity and software for the remainder.
Heavy Simulation
Under ‘heavy’ simulation, all potential input devices are simulated or present in hardware; in some cases, all devices are simulated in software. All potential input ranges are exercised on the main system under study, and there is no white box testing.
Distributed Simulation
Distributed simulators involve geographically separated components communicating across a network. In fact, some devices may not reside on the test bench or in a laboratory. We would expect to see multiple simulators that stimulate the unit under test running on different systems. Most often, these kinds of simulator systems are used in Department of Defense scenarios. Some major commercial vehicle and automotive companies use these kinds of systems to simulate multiple controllers or ECUs (electronic control units) on a data bus.
Validating Simulators
As with all test equipment, the simulator must be validated; that is, we must compare our model with reality to ensure adequate levels of authenticity. In the case of a supplier, it is wise to solicit customer input. We suggest that the behavior of existing subsystems be known thoroughly. The simulator must be good enough, but it does not have to be better than that; in short, it needs accurate signals and the ability to randomize or add noise to behavior. If we go beyond a certain point, the simulator ceases to simulate and becomes the actual device being simulated, which defeats the cost-effectiveness and malleability of using a simulator in the first place. This setup then becomes distributed systems testing.
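As a small illustration of the “accurate signals plus noise” requirement, the sketch below adds Gaussian noise to an ideal simulated signal and reports its RMS error against a recorded trace; the waveform, noise level, tolerance, and stand-in “recorded” data are assumptions for illustration only.

```python
# Sketch: adding measurement-like noise to an ideal simulated signal and checking
# it against recorded data (signal, noise level, and data are assumptions).
import math
import random

def simulated_signal(t, noise_sigma=0.02):
    ideal = math.sin(2 * math.pi * 1.0 * t)          # ideal 1 Hz model output
    return ideal + random.gauss(0.0, noise_sigma)    # injected sensor-like noise

def rms_error(sim, recorded):
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(sim, recorded)) / len(sim))

times = [i / 100.0 for i in range(100)]
recorded = [math.sin(2 * math.pi * t) + random.gauss(0.0, 0.03) for t in times]  # stand-in for bench data
sim = [simulated_signal(t) for t in times]
print(f"RMS error vs. recorded data: {rms_error(sim, recorded):.3f}")
```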
Simulation and Automated Testing Together
Finally, we use automated testers (predominantly hardware) and simulators (principally software) together to provide early warning of issues during product or service development. Simulators can save money when the hardware is difficult to acquire or expensive. We believe that the use of automated testing and simulation in tandem is a good marriage.
Figure 3 Simulation / Verification Over Project duration
Conclusion
As we have seen, automated testing, when used wisely, speeds up routine testing, confirms numerous design permutations, may be used with combinatorial testing to discover interactions, and can be executed full-time (24/7) because the machines don’t tire.
Simulation allows for requirements identification, evokes unknown interactions, provides for testing before hardware delivery, can execute “what if” scenarios, and allows complete control of stimuli.
Together, automated testing and simulation provide a powerful tool for executing tests, eliciting problems, and characterizing new products.
For additional information on testing and simulation, consult:
- Test and Evaluation Guide (Defense Acquisition University Press) January 2005
- Software Engineering Institute - CMMI