Lean Load Testing in Agile Projects
Can Lean Load Tests be incorporated into Agile projects to manage performance as software is developed? Traditional load testing projects can become complicated and expensive very quickly, so I understand the aversion to load testing within the sprints of an Agile project.
Lean Load Testing needs to focus on computational complexity in the early phases of development. Setting up such tests for computational complexity is difficult for developers - but should be 'bread and butter' for experienced Load Testers.
Managing Technical Debt
Ward Cunningham used the term "Technical Debt" back in 1992 to describe the ongoing burden that builds up when development teams take shortcuts, hacks and quick fixes. The debt analogy is very apt, as it implies that the cost of servicing that debt compounds over time - and can ultimately result in the demise of a project or even a whole organisation. A Digital Organisation cannot achieve the required rate of change in its core software with a heavy technical debt burden.
Creepy Technical Debt
Performance problems often surface in the form of technical debt, creeping out from one little shortcut or oversight and infecting other nearby functionality in a stealthy and creepy manner. Each subsequent change to the 'infected' code can both bury the problem and extend its scope. There is only one word to describe such technical debt: Creepy!
In her books on Agile Testing, Janet Gregory highlights the need for "Timely load and stress testing" as a means of checking that the infrastructure is 'up to the job'. This is a traditional and well accepted practice. However, if the focus of such testing is on the infrastructure, then it will probably not be possible until an appropriately architected environment is configured with a reasonably complete build. Unfortunately, a significant amount of 'technical performance debt' could be in the software by that point in time.
Janet also highlights the need to constantly 'evaluate the amount of technical debt dragging it down and work on reducing and preventing it'. I propose running lean tests each sprint that seek to uncover this creepy, performance-related technical debt.
Computational complexity
'Computational complexity' is the concept behind one particularly nasty flavour of Technical Debt. The complexity of a program describes how the CPU cycles it takes to run, the memory it uses and the I/O operations it performs grow as the size of the problem grows.
Obviously all software should behave 'correctly'. Functional 'correctness' often dominates technical considerations, but efficiency is also very important, and sometimes impacts 'correctness' itself. For example, in an MMOG where players run around the virtual game space and shoot at each other, collision detection is critical. If collision detection is perfect but too slow, a player may return fire BEFORE the code identifies that they have already been fatally wounded by another player. Such a problem would cause great angst among players, causing some to stop playing that particular game. The code may work fine in simple scenarios, but with a large number of concurrent players, the collision detection logic may degrade non-linearly with the number of moving objects in the game space.
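As a sketch of how that degradation arises, a naive pairwise collision check compares every object against every other object, so doubling the player count roughly quadruples the work. The code below is a hypothetical illustration, not a real game engine:

```python
import itertools
import random
import time

def naive_collisions(positions, radius=1.0):
    """Check every pair of objects for overlap: O(n^2) in object count."""
    hits = 0
    for (x1, y1), (x2, y2) in itertools.combinations(positions, 2):
        # Two circles of equal radius collide when centre distance <= 2r
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (2 * radius) ** 2:
            hits += 1
    return hits

# Doubling the number of players roughly quadruples the pairwise checks
for n in (100, 200, 400):
    pts = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(n)]
    start = time.perf_counter()
    naive_collisions(pts)
    print(n, f"{time.perf_counter() - start:.4f}s")
```

A small test with few objects runs instantly and looks 'correct'; only pushing the object count exposes the quadratic cost.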
There are basically four ways to describe complexity, and they apply to both time and memory. The best is logarithmic. The number of iterations needed to find a record in an indexed list using a binary (bisection) search is bounded by the log (base 2) of the number of items in the list. It may take 10 iterations to find 1 item in a list of 1,000, or 20 to find 1 in 1,000,000. The complexity of this type of algorithm is low, as doubling the problem size only requires one more iteration. Linear is next, where each item is searched in turn: a search through a list of 1,000,000 items will take 1,000 times more resource than a search through a list of 1,000 items. As bad as linear can be, polynomial and exponential functions are far worse: an exponential function may require 1,000 times more processing (or memory) for only a slight increase in problem size.
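The logarithmic versus linear contrast above can be sketched in a few lines of Python. The two functions are illustrative counters written for this example, not a real search library:

```python
def binary_search_steps(items, target):
    """Count the probes a binary (bisection) search makes: O(log n)."""
    lo, hi, steps = 0, len(items), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

def linear_search_steps(items, target):
    """Count the comparisons a linear scan makes: O(n)."""
    for steps, item in enumerate(items, start=1):
        if item == target:
            return steps
    return len(items)

for n in (1_000, 1_000_000):
    items = list(range(n))
    # Growing the list 1,000-fold adds roughly 10 probes to the binary
    # search, while the linear scan grows 1,000-fold with the list itself.
    print(n, binary_search_steps(items, n - 1), linear_search_steps(items, n - 1))
```

Doubling the list costs the binary search one extra probe; it costs the linear scan twice the work.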
Load and performance testing during Agile development should focus not on infrastructure, capacity or absolute performance, but on unacceptable computational complexity.
Lean Load Testing
When working in an Agile development project, there is a lot of activity. There are probably several environments with varying amounts of data in each, and activity within those environments is dependent on the workload of the individuals in the team who need to use them. This looks like a problem from a load testing perspective, but not from a computational complexity perspective.
The load tests should be based on the simplest 'happy paths' through the application that are able to push the bounds of computational complexity. For example, in an online store, the script may build shopping carts with varying numbers of items - say 1, 10, 30 and 50 - and the transactions reported from the test should distinguish between the number of items in the cart. The customers placing orders should be a mix of new customers, moderately active customers (with 10 - 20 orders of history) and very active customers with 100+ previous orders. In this way, the results of such a test may reveal performance problems caused by non-linear degradation of some of the algorithms used by the application.
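A minimal sketch of such a harness follows. The `fake_checkout` function is a deliberately quadratic stand-in for the real store transaction - the names, cart sizes and workload here are illustrative assumptions, not a real store API:

```python
import time
from collections import defaultdict

def run_cart_test(checkout, cart_sizes=(1, 10, 30, 50), repetitions=5):
    """Time a checkout action for each cart size, labelling results per
    size so any non-linear degradation stands out in the report."""
    timings = defaultdict(list)
    for size in cart_sizes:
        for _ in range(repetitions):
            start = time.perf_counter()
            checkout(size)
            timings[f"checkout_{size}_items"].append(time.perf_counter() - start)
    return timings

def fake_checkout(items):
    # Hypothetical stand-in for the application under test;
    # deliberately O(n^2) in the number of cart items.
    total = 0
    for i in range(items * 20):
        for j in range(i):
            total += 1
    return total

results = run_cart_test(fake_checkout)
for label, samples in sorted(results.items()):
    print(label, f"median={sorted(samples)[len(samples) // 2]:.4f}s")
```

Because each transaction is labelled with its cart size, the report itself shows whether a 50-item cart costs 50 times a single-item cart - or 2,500 times.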
For example, if the response times for a simple test with shopping cart sizes of 1, 10, 30 and 50 items grew much faster than the cart size itself, it would be clear that large shopping carts have a major performance problem.
The variation in response time over the day would not be a big issue, as the environment may not be controlled, but the performance degradation with an increasing number of shopping cart entries would be obvious.
Zooming in on the performance of just the single-item cart transactions highlights that the fast single-item transactions follow the same pattern over the day as the slower 50-item transactions. This suggests that the variation over time is a function of the environment rather than the application. For this reason, the portion of the day that appears least impacted by other users of the environment should be treated as 'best performance' and used for future comparisons of similar processing.
The other point that can be taken from these results is the variation within each of the transactions. This should be investigated, as it may be a function of the type of product being purchased (i.e. whether a stock lookup is required). Variability is a key measure of quality in non-IT industries: the more variable a process is, the more likely its parameters will fall outside acceptable limits. Identifying the root cause of variability can lead to a much more consistent and acceptable user experience.
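As an illustration, the coefficient of variation (standard deviation divided by the mean) is a simple way to rank transactions by variability. The sample figures below are invented for the sketch:

```python
import statistics

def coefficient_of_variation(samples):
    """Relative spread of a set of response times: stdev / mean.
    A high value flags a transaction whose behaviour varies a lot
    between runs and deserves root-cause investigation."""
    return statistics.stdev(samples) / statistics.mean(samples)

steady = [1.00, 1.02, 0.98, 1.01, 0.99]   # consistent responses
erratic = [0.50, 2.40, 0.55, 2.60, 0.45]  # stock lookup sometimes needed?

print(f"steady:  {coefficient_of_variation(steady):.2f}")
print(f"erratic: {coefficient_of_variation(erratic):.2f}")
```

A transaction with a high coefficient of variation is a good candidate for the pairing between tester and developer described below.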
Unexpected User Interface behaviour
Intermittent User Interface problems are sometimes related to the timing of backend processing. Running a load or stress test may induce such problems - ones that are not normally seen in development, but that could sometimes occur in production. For example, the data for a drop-down choice may take too long to be returned by the server, presenting users with a User Interface that stops them from proceeding. By noting this behaviour, a performance tester can 'pair up' with the developer to solve the problem and run immediate tests to validate the solution.
Reducing (Performance Related) Technical Debt
By running simple time-boxed load tests that target the computational complexity of the solution - rather than its performance requirements - many nasty performance problems that would normally only be identified during formal load and performance testing can be identified and addressed much earlier in the development process, eliminating that class of technical debt early in the project.
If the purpose of testing is to hunt down complexity-related technical problems, it is not necessary to wait until the final stages of a project, when a production-like environment becomes available, to conduct such tests.
Paul McLean is the owner of RPM Solutions, with 15 years of deep technical experience in all things Load Testing. He applies creative approaches to identify and eliminate defects that lead to performance and stability problems, through a regime of methodical testing that is in line with customer timeframe and cost constraints.