Performance is not just reliability and availability
There are no truer maxims in the world of complex IT systems than Murphy's Law and its corollary, Finagle's Law. They state:
"Anything that possibly can go wrong, does." Murphy's Law
"Anything that can go wrong, will - at the worst possible time." Finagle's Law
When you consider these in the context of IT systems, they apply not only to functional bugs and errors: out in the wild, with many active users, they very often manifest as poor performance under load. These failures aren't bugs in the conventional sense, but they frequently lead to:
- Damage to a company's reputation
- Loss of customers, both existing and potential
- Financial losses
Very often, Murphy and Finagle are only fully appreciated with the benefit of 20/20 hindsight, after these unusual point-load situations have been encountered by large numbers of real users.
To help mitigate these performance risks before they hit production users, systems must be tested under high-load conditions. Performance testing is the formal process of finding such problems, and fixing them, before users encounter them.
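In its simplest form, a load test fires concurrent requests at the system and records how response times behave as concurrency rises. The sketch below is a minimal illustration using only Python's standard library; the target URL, user count and request count are hypothetical placeholders, not recommendations.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoint and load shape -- substitute your own system's values.
TARGET_URL = "https://example.com/"
CONCURRENT_USERS = 50   # simulated simultaneous users
REQUESTS_PER_USER = 10  # sequential requests each simulated user makes

def timed_request(url: str) -> float:
    """Issue one GET request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

def simulate_user(url: str) -> list[float]:
    """One simulated user making a series of sequential requests."""
    return [timed_request(url) for _ in range(REQUESTS_PER_USER)]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(simulate_user, [TARGET_URL] * CONCURRENT_USERS)
    timings = sorted(t for user in results for t in user)
    print(f"requests: {len(timings)}")
    print(f"median:   {timings[len(timings) // 2]:.3f}s")
    print(f"p95:      {timings[int(len(timings) * 0.95)]:.3f}s")
    print(f"worst:    {timings[-1]:.3f}s")
```

Even a crude harness like this surfaces the shape of the problem: it is the median, the 95th percentile and the worst case together, not a single average, that tell you how the system behaves when Murphy and Finagle come calling.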
There are many examples of IT systems failing under load: ticketing systems being unable to cope with unexpected demand; Apple's (now superseded) MobileMe service failing due to "a lot more traffic to our servers than we anticipated"; Skype being unavailable for two days when masses of users restarting their computers at the same time, after a Windows Update patch, triggered an unknown bug in the system; and many, many more.
These peak, or extreme point-load, scenarios are hugely important and need to be planned for and mitigated, but there are other performance issues that are equally important and can badly damage business reputations, customer experiences and, ultimately, the bottom line.
Understanding and setting performance targets that meet or exceed customer expectations will vastly improve customer experience, loyalty and conversion rates. To do this, performance testers must consider the system's users and its usage patterns.
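One practical way to make such targets concrete is to express them as percentile thresholds and check measured response times against them. The thresholds below are illustrative assumptions, not universal benchmarks; real targets should come from your own users and their usage patterns.

```python
import random

# Illustrative customer-centred targets -- assumptions, not universal benchmarks.
# Key: percentile as a fraction; value: maximum acceptable response time (s).
TARGETS = {
    0.50: 0.8,  # half of all responses within 0.8s
    0.95: 2.0,  # 95% of responses within 2.0s
    0.99: 5.0,  # all but the slowest 1% within 5.0s
}

def meets_targets(timings: list[float]) -> bool:
    """Check measured response times against each percentile target."""
    ordered = sorted(timings)
    all_ok = True
    for percentile, limit in TARGETS.items():
        index = min(int(len(ordered) * percentile), len(ordered) - 1)
        observed = ordered[index]
        passed = observed <= limit
        all_ok = all_ok and passed
        print(f"p{int(percentile * 100)}: {observed:.3f}s "
              f"(target {limit:.1f}s) {'PASS' if passed else 'FAIL'}")
    return all_ok

if __name__ == "__main__":
    # Synthetic data for demonstration; in practice, feed in the
    # response times measured by your load test.
    sample = [random.uniform(0.1, 3.0) for _ in range(1000)]
    meets_targets(sample)
```

Encoding targets as pass/fail checks like this keeps the conversation anchored to what customers will actually tolerate, rather than to whatever numbers the system happens to produce.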
Performance is not just about reliability and availability. Even when a system is robust and stable, customers still intuitively measure their experience at every contact point. They compare that experience against their own expectations, not against a set of arbitrary IT performance benchmarks.
Combining knowledge of customer expectations, local customer context and extensive past experience yields a far better and truer indication of how system performance will be viewed by customers. Only then can you answer whether the system works reliably and as expected.
Testing a system's performance not only for stability, scalability and behaviour under stress, but with a full understanding of how these qualities affect real customers and their unique experience under all load conditions, yields far better results than hard-and-fast performance benchmarks.
Good performance testers test systems for all of these things, including how customers would perceive response times, even when no targets have been explicitly stated. The results help take technology from simply being functional to delivering quality levels that meet and exceed customer expectations before customers even know they have them.