Beware Techno-hubris
As a 33-year veteran of high technology, I am obviously a huge believer in the benefits that tech brings to individuals and society in general. The internet was the most important advance in human society in centuries. Nevertheless, there is a disturbing and growing trend in technology that I want to call out:
Techno-hubris.
The tragic recent example which motivated me to write this article was the case of the 737 Max. As soon as I heard of the second accident I said "this sounds like a software bug". It turns out that it was worse than a bug...it was techno-hubris.
The people responsible for this software must feel terrible. None of us would want to live with that guilt. I am not going to heap blame on people who were honestly trying to do their best, probably under trying circumstances and with unrealistic deadlines. However, I think we can all agree that the world would have been better if these accidents did not occur, and we should search for ways to prevent repeating the mistake.
There are many examples every year. I will discuss a few more later. First let me define my new term.
Techno-hubris: noun. The simplistic belief that any problem can be fixed by technology alone, and that more and newer technology always makes a system perform better.
Here are a few more examples:
"Smart" locks
I love that my little Mazda has remote unlock and push-button start. Great feature. However, a recent study showed that all of the currently available remote car locks are easily hackable. Someone can literally steal your car in under 10 seconds if they have the right equipment. The only car deemed to have acceptable security in this study was a dirt-cheap compact with an old-fashioned metal key.
My favourite smart lock story is the one where hackers remotely locked all of the locks in a small hotel. Guests were either locked in their rooms or locked out. The hoteliers had to pay the ransom really quickly to minimize the damage to their reputation. They subsequently switched back to low-tech locks. (see this article for example)
It seems like these security devices were designed without the input of security experts. Wow.
Cryptocurrency
It took centuries of innovation to come up with 20th century banking technology. There are intricate trust webs in place. There is deposit insurance. There is confidence (outside of Venezuela) that our fiat currency will be worth something tomorrow, and we can actually use it to buy something concrete.
In the 21st century, some people decided all this can be replaced by some whizzy technology with very little effort. Guess how it is working out?
It is difficult to keep track of the number of ways in which people have been ripped off by the various cryptocurrency scams, never mind the number of victims or the vast quantity of money lost. Well, the money isn't "lost" of course: it just changed hands from its rightful owner into that of a variety of hackers, ponzi artists, and plain old thieves. The number of exploits will probably never be known, but it appears endless right now.
Sure, a bank can be robbed, but industry wide government mandated insurance protects you from loss in that case. With cryptocurrency, you're at the mercy of some anonymous people, untried technology, and weak financial reasoning.
Self-checkout and check-in
Genius managers at airlines and stores have decided to "save money" by installing annoying point-of-sale equipment that they expect their customers to fight with in order to give them money. Let's all admit it is a really crappy user experience. Sure, I realize that after I've done it at a particular store several times, I'll eventually learn all the annoying UX flaws in their system. However, I'd rather just show up at a store where the cashiers handle this.
Walmart and Westjet are not paying for my time. Why should I do the work so they're more profitable? Furthermore, the "cost savings" are very dubious. The last time I was at Walmart there were zero cashiers on duty, so I had to use their frustration generation station. It worked perfectly, frustrating me extremely efficiently. I bought one item; it took me 5 minutes of swearing at the machine. Three Walmart employees were standing around to help old men like me. The person who helped me was very nice. The machine was crap. The same person could have checked me through in 1/10th the time, and I would have left the store happy instead of annoyed.
Part of the reason the UX of these machines is terrible is that they really need to protect against shoplifting. The metrics show that stores with self-checkout lose a fortune to theft. Since their margins are tiny, even a small loss to theft is a big problem. That is part of the reason they need to pay people to stand around (instead of being cashiers): they're watching to make sure you're not a thief. The Walmart machine thought I was trying to steal my own reusable shopping bag. The same warning kept popping up and preventing my checkout until the helpful and friendly person stepped in to defeat the machine.
I will resume shopping at Walmart when they get some cashiers again. If they need to save money, the salary of the manager who decided to install the machines would be a great way to cut back. They could probably hire 3 cashiers for that person's salary. Excellent trade-off.
Solving the techno-hubris problem
So how do we address this problem? As with any problem, the first step is awareness. Identifying the existence of the problem in ourselves, our teams, and our projects is most of the battle. Don't be embarrassed if you find it in yourself; just admit it and address it.
The concept is a bit abstract, so looking for concrete warning signs is a useful approach to identification. Here are some symptoms of the disease:
Re-inventing the wheel
All of the examples I've cited above feature this warning sign. They are taking a solved problem, and adding technology just because it is possible, or "cool". This is usually driven by fetishization of the latest technology. Just because something is cool and new doesn't make it automatically better.
The smart locks are stupider than the old locks based on ancient technology. Sure, a person can walk up to my car or house and spend a few minutes looking really suspicious, using tools that are illegal even to possess, while they break in. With the "smart" locks, bad guys can control your lock from around the world, or walk up to your car and drive away just like they own it! Smart.
Not considering the humans as part of the overall system design
Engineers and computer scientists focus on what they can control: the technology and software. They tend to ignore the people interacting with the system because they can't control them. That is exactly why they need to pay more attention to them. People's behaviour will not change readily to adapt to the technology. The technology must adapt to the people.
When I worked in telecom, there was a bizarre obsession with making sure dial-tone was always delivered when a landline receiver was picked up. The idea was that it was some kind of a catastrophic failure if this didn't occur. They ignored the fact that every single user will behave in the obvious and rational way when this occurs: they'll simply hang up and pick up the receiver again. Whew, catastrophe averted.
In the example of the 737 Max, ignoring the user produced the opposite problem: a catastrophe was created where none needed to exist. The system deliberately ignored the input from a pilot and co-pilot who had years of flying experience and fully functioning faculties, because obviously the computer knows better than a person how to fly a plane.
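To make the principle concrete, here is a deliberately simplified sketch in plain Python. It is a hypothetical illustration of my own, not real avionics code and not how MCAS actually worked: an automatic correction that defers to sustained opposing input from the human operator instead of fighting it.

```python
def trim_command(pitch_error: float, pilot_input: float, engaged: bool):
    """Toy illustration: an automatic trim correction that defers to the pilot.

    pitch_error: signed error reported by a sensor (positive = nose too high).
    pilot_input: signed control input from the pilot (positive = nose up).
    engaged:     whether automatic correction is currently active.
    Returns (correction, engaged). All numbers here are made up.
    """
    MAX_CORRECTION = 0.05           # bound every automatic correction
    PILOT_OVERRIDE_THRESHOLD = 0.2  # sustained input beyond this wins

    # The automation wants to push in the direction that cancels the error.
    desired = -pitch_error

    # If the pilot is clearly commanding the opposite of what the automation
    # wants, treat the human input as authoritative and disengage.
    if abs(pilot_input) > PILOT_OVERRIDE_THRESHOLD and pilot_input * desired < 0:
        return 0.0, False

    if not engaged:
        return 0.0, False

    # Apply a small, bounded correction; never an unbounded series of them.
    correction = max(-MAX_CORRECTION, min(MAX_CORRECTION, 0.1 * desired))
    return correction, engaged
```

The point is not the specific numbers, which are invented, but the shape of the logic: the human's input is treated as authoritative, and every automatic correction is bounded.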
Don't get me wrong: I believe fully automated flight can one day be safer than flying is now. However, the airline safety statistics show it is already incredibly safe. It is hard to improve. In contrast, most human automobile drivers are not great, and road accident deaths are all too common. Autonomous vehicles have a low bar to exceed to improve on driver safety. Just following the speed limit dramatically improves safety. That is pretty easy to program.
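As a rough illustration of just how simple that core rule is, here is a toy sketch (again my own example, ignoring all the genuinely hard perception and control problems a real autonomous vehicle must solve):

```python
def limit_target_speed(desired_speed_kmh: float, posted_limit_kmh: float) -> float:
    """Toy illustration: never command a speed above the posted limit.

    The hard parts of autonomous driving (perception, prediction, control)
    are not shown; this only captures the 'obey the limit' rule.
    """
    return min(desired_speed_kmh, posted_limit_kmh)


# Example: the planner wants 72 km/h in a 50 km/h zone.
assert limit_target_speed(72.0, 50.0) == 50.0
```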
Lack of respect for the problem domain
Technology professionals bring valuable knowledge to the problems they address. However, there is no substitute for deep and broad knowledge of the actual problem being solved.
How does somebody working on a lock system not understand that its primary function is to keep the locked objects safe for their rightful owner? Techno-hubris, that is how.
How can a Walmart I.T. manager actually think that customers will be happier wasting their time figuring out their POS system than having a brief and pleasant interaction with a cashier? Techno-hubris.
How can an aeronautics engineer think that their code can out-fly an experienced pilot? Techno-hubris.
How can a blockchain nerd think that applying a simple algorithm will provide a level of security equivalent to or better than the centuries-old banking system? Techno-hubris.
You cannot solve a problem adequately without understanding the problem domain very thoroughly. Even experts in the problem domain have taken many iterations of improvement to get to where the current solutions are. You can't throw away all this knowledge at the start of your project and expect to exceed the performance of the system you are replacing. What could be more obvious?
A preference for complexity over simplicity
People and teams who suffer from techno-hubris always want to cram as much technology as possible into a solution. Complex solutions inherently have more failure modes than simple ones, and those failure modes can be really difficult to uncover by testing.
Never forget Occam's Razor (https://en.wikipedia.org/wiki/Occam%27s_razor) when you are designing. If you can make the system so simple that you can verify it by inspection, that is ideal. Usually that is unachievable, but it should always be the target.
Conclusion
As a technology professional, you don't want to be in the position of working really hard to make something that already works worse. You really don't want to be responsible for your users losing their money, their patience, or their lives.
Keep your eye out for techno-hubris in your professional life. If you spot it, fixing it is simply a matter of doing the opposite of what I described in the warning signs above:
- Don't re-invent the wheel. If a problem is adequately solved, leave it alone. Focus on the areas where there is real opportunity to exceed the performance of the existing system.
- Respect your fellow humans. Ultimately, if you're trying to improve the lives of end-users, you need to take end-users' behaviour into account. It is entirely predictable that a grumpy old man will be unhappy having to fight with a checkout machine instead of being greeted by the smiling face of a cashier. The old man may never shop at Walmart again...probably not what Walmart management intended.
- Respect the problem domain. Spend days or weeks consulting and re-consulting experts in the field at every stage. Do you think a pilot would tell an engineer they want the plane to override their flight control input? The question was not asked, or the answer was ignored.
- Keep designs simple. Everything you add to a design adds new bugs, security vulnerabilities, and unforeseen failure modes. Never add something to the design unless it is actually necessary.
Good design practices can help, too. But I hope it is clear after this article that the problem is more one of philosophy than of technical ability or execution. We need to approach technology with the right attitude. If we do, it can benefit us. If we do not, it can really make a mess of things.