Why do we never learn in IT?
I honestly can’t think of another major industry that consistently over-spends, under-delivers and repeats the same mistakes time after time. In IT we often hear statistics like this one, from the Business Council of Australia in 2012:
Large technically complex Australian projects have a poor performance record, with a failure rate of over 75%.
and, according to a 2012 study by McKinsey and Co. and the University of Oxford, which analysed 5,400 IT projects:
On average, large IT projects run 45% over budget and 7% over time while delivering 56% less value than predicted - and worse - 17% go so badly they threaten the very existence of the company itself.
Even if these studies are way off and things have improved since then - say things are only half as bad as they were - in any other industry those kinds of stats would see businesses going to the wall, lawyers suiting up, judicial enquiries and massive public outcry.
So why are these kinds of figures still part and parcel of life in the world of Information Technology?
78% of respondents reported that the business is usually or always out of sync with project requirements – Geneca, 2011
In my line of work, I’m often called on to quote clients for verifying the performance of their IT projects before they go into production. And just to be specific here, when I use the phrase “before they go into production” I’m using fancy IT jargon - in any other industry I’d say something like this-is-last-second-insanity-you’re-batshit-crazy-what-are-you-thinking?
The timeframes I have to work with are frustratingly short and last minute. More often than not, and certainly more often than I’d like, I have to (de)scope and shoehorn performance testing in just a couple of weeks out from major system changes to core business and IT processes.
[The Australian State of] Victoria’s Auditor-General report, 2016 – The Victorian public sector does not have a good track record of successfully completing ICT projects… These investments often do not meet functionality expectations nor demonstrate expected outcomes, cost much more than the planned budget, and/or are delivered much later than planned.
In a recent example, one client wanted a full range of performance tests for significant updates to key business-critical APIs used for their everyday core functions. They issued a request for quote for this testing less than three weeks out from their drop-dead implementation date – and a week of that was for them to decide whether they took a shine to the proposal or not. At this point I could drop in a few choice non-IT phrases to describe my happiness with their approach. But, as independent performance testing is my stock-in-trade, I just smile, nod and reply as best I can to meet their brief: professionally, cheerfully and just a tiny little bit – ahem – read: massively – frustrated and hamstrung.
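To put that timeframe in perspective: the test script itself is the trivial part. A minimal load-test sketch, assuming the open-source Locust tool (the host, endpoint and numbers below are hypothetical stand-ins, not the client’s actual APIs), takes minutes to write:

    # A minimal Locust load test (pip install locust).
    # The endpoint below is a hypothetical stand-in for a
    # business-critical API under test.
    from locust import HttpUser, task, between

    class ApiUser(HttpUser):
        # Each simulated user waits 1-3 seconds between requests
        wait_time = between(1, 3)

        @task
        def get_orders(self):
            # Hypothetical endpoint - swap in the real API path
            self.client.get("/api/v1/orders")

    # Run headless, e.g.:
    # locust -f loadtest.py --headless --host https://api.example.com \
    #        --users 200 --spawn-rate 10 --run-time 30m

The weeks a proper engagement actually needs go into everything around those few lines: workload modelling, production-like environments and data, instrumentation, and analysing and re-running the results. That’s what gets squeezed out when the quote lands a fortnight before go-live.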
Why do we keep shooting ourselves in the foot? Why do we never learn that this is nowhere near – not even in the same solar system as – best practice? In Testing 101 and ISTQB Foundation-level multiple-choice questions, this is the control answer that’s obviously wrong - the one that’s there to catch out the mouth-breathers.
Imagine for a second that ACME Aircraft Co. did the same. A couple of weeks out from rolling out a new aeroplane, or even an updated version of one of their existing models, ACME go ahead and place an ad in Flyer’s Weekly looking for a test pilot to check a few aeroplaney things out for them before they go live with their new airliner.
They’ve done a few of these before, but this one’s a little bit different, bigger, newer, fancier - but what the heck - they build aeroplanes for a living. They know what they’re doing. They’ve tested all sorts of wings, wheels, rudders, fuselages, engines and self-deploying explosive escape slides before. All the pieces worked last time. It’s easy. It’s just a case of deciding and planning which bits you need bolted together, assembling them and making sure everything works. Great! Now they start planning to get this thing ready to use. Seats and carpet are in, and they’ve had a few test passengers get on board and try things out a couple of times. Everyone on; everyone off; everyone on again. They fired the engines up and taxied up and down the runway a few times with people on board, waving and smiling toothy grins out the window to cheering execs as they went past. This new plane looks great and it’s ready to go. Very nearly. They’ve been building and testing for ages, months, years even. They’re soooo close to the finish line.
With a touch under two weeks before the first lucky passengers climb aboard, they’ve just got the finishing touches left. Last-minute things. They decide to go through the replies to the ad they put in Flyer’s Weekly. Here – this one looks okay – he’s tested a couple of rockets and helicopters, and specialises in airliners. Fantastic. But this guy wants to test the plane taking off multiple times, different landings, level flight, different climb rates, gliding, overloaded, overspeed, stalling, in bad weather, night landings, looping-the-loop, barrel-rolling, with engine fires, and more. Crazy!
ACME decide they don’t need any of that overkill. They’ve done this loads of times before and never had major issues. And besides, they’ve only got a bit of time and money left to make sure this thing actually holds together. They’ll see what this little beauty can do up in the air, when it’s up to speed with a couple of hundred people strapped in.
They fuel her up and start the engines, (grumbling) pilot at the pointy end, with the firm intention of getting him to give that tick in the box for a successful take-off, level flight and landing. What else could possibly go wrong? Two weeks is plenty, right? They know what they’re doing. These things are pretty much all the same - lift, thrust, drag; wings, engines, wheels; easy, right? Wrong!
Thankfully, ACME Aircraft Co. would never operate this way; car manufacturers don’t build cars this way; house builders don’t work this way; restaurateurs don’t trade this way; power plants aren’t built and powered up this way. There are processes, procedures, guidelines, standards, certifications, quality inspections and (thankfully) legislation to stop things like this from happening.
Unfortunately though - after decades of bad experiences, thousands of lessons not learned, multiple very costly disasters, and a never-ending stream of IT performance failures across all industry sectors - many IT projects still test critical performance at that same last-second-insanity-you’re-batshit-crazy-what-are-you-thinking? point.
IT is seemingly the only industry that allows this. When will we ever learn?
Albert Einstein is often credited with coining the phrase:
“Insanity is doing the same thing over and over again and expecting different results.”
Whether he actually said this is debatable, but one thing is certain, and you don’t have to be Einstein to understand it: we’ve been making these same mistakes around performance testing for years and getting the same results. What does that make us in IT? What are your thoughts?