Automation and The Substitution Myth

The underlying, and often unexamined, assumption behind the benefits of automation is the notion that computers and machines are better at some tasks, while humans are better at a different, non-overlapping set of tasks. Historically this has been characterized as HABA-MABA (Humans Are Better At; Machines Are Better At), or the Fitts list, based on the work of Paul Fitts, a psychologist and researcher at the Ohio State University in the early-to-mid twentieth century.

More recently, this has been described in human factors research as “functional allocation by substitution” (Hollnagel, 1999; Dekker and Woods, 2002), promoting the idea that we can simply substitute machines for people. This myth relies on the belief that people and computers have fixed strengths and weaknesses, and therefore all we need to do is give separate tasks to each according to their strengths.

The Fitts List from Human Engineering for an Effective Air Navigation and Traffic Control System, National Academy of Sciences, Washington, D.C., 1951.

We often proceed with designing automated systems (e.g. CI/CD deployment pipelines, load balancing) without examining a number of consequences of this approach.

Where automation is introduced, new human roles emerge.

Dekker and Woods identify a number of consequences of this approach in their 2002 paper:

  1. Designers of automation tend to imagine that the desired outcomes of automation (e.g. lower workload, higher accuracy), and only those desired outcomes, will occur. In software systems, you’ll see examples of this where we expect something like CI/CD pipeline automation to “reduce the number of errors that can take place in the many repetitive steps of CI and CD.” But everyone has experienced an erroneous config getting pushed out via that very same pipeline and havoc ensuing.
  2. Automation increases the need for humans to deal with the fact that it does not have access to all of the real-world parameters needed for accurate problem solving in every context, and it may in fact make it harder for humans to directly affect the system in the ways they want when they need to solve problems or troubleshoot incidents (Billings, 1996; Sarter and Woods, 1997). We don’t tend to have app-specific dashboards that surface the internal state of programs: what does the automated process think the state of the world is, what is it trying to do, and so on (a minimal sketch of what exposing that state could look like follows this list). This kind of view into the system is particularly important when dealing with automation that acts independently, running autonomously all the time (load balancers, orchestration systems), as opposed to automation that is triggered by humans to do a thing at a specific time, such as logging observations and generating reports.
  3. Allocating aspects of the system to automation doesn’t necessarily absorb that function without other unintended consequences; instead, it creates new functions that humans must take on, such as figuring out where to find information about what the automation is doing. Automation transforms people’s work and forces them to adapt in novel and unexpected ways. If you’ve ever not known which dashboard to check in order to understand what’s happening, or had to dig through logs vainly hoping they will give you a clue, you’re familiar with this phenomenon.
  4. Lastly, taking advantage of computers’ strengths for automation does not necessarily replace human weaknesses. It often creates new human weaknesses or requires the development of new strengths, frequently in unanticipated ways (Bainbridge, 1983). This can lead to pockets or silos of expertise, such that when Amy the Kafka wizard isn’t around, no one else knows how to figure out what’s going wrong or why.

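To make the second and third points concrete, here is a minimal sketch of an automated control loop that exposes its own beliefs and intentions to the people supervising it. This is hypothetical, not taken from the cited papers or from any specific tool; the names (DeployAutomation, StatusReport, reconcile) are invented for illustration. The idea is simply that “what does the automation think the world looks like, and what is it trying to do?” becomes something operators can query directly, rather than something they must reconstruct from logs.

```python
"""Hypothetical sketch: automation that exposes its believed state and intent.

The classes and fields here are illustrative only; real systems would expose
this kind of status through a dashboard, API endpoint, or CLI command.
"""
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class StatusReport:
    """A human-readable snapshot of the automation's internal state."""
    believed_world_state: dict
    current_goal: str
    last_decision: str
    last_updated: str


@dataclass
class DeployAutomation:
    desired_replicas: int
    observed_replicas: int = 0
    history: list = field(default_factory=list)

    def reconcile(self) -> None:
        """One pass of the control loop: compare desired vs. observed state
        and record the decision, rather than acting silently."""
        if self.observed_replicas < self.desired_replicas:
            decision = f"scale up by {self.desired_replicas - self.observed_replicas}"
        elif self.observed_replicas > self.desired_replicas:
            decision = f"scale down by {self.observed_replicas - self.desired_replicas}"
        else:
            decision = "no change needed"
        self.history.append((datetime.now(timezone.utc).isoformat(), decision))

    def status(self) -> StatusReport:
        """Answer 'what does the automation think the world looks like, and
        what is it trying to do?' as a first-class query for operators."""
        return StatusReport(
            believed_world_state={"observed_replicas": self.observed_replicas},
            current_goal=f"maintain {self.desired_replicas} replicas",
            last_decision=self.history[-1][1] if self.history else "none yet",
            last_updated=self.history[-1][0] if self.history else "never",
        )


if __name__ == "__main__":
    automation = DeployAutomation(desired_replicas=3, observed_replicas=1)
    automation.reconcile()
    print(automation.status())
```

Very little real-world automation offers this kind of view by default, which is exactly why the new human work of hunting down the automation’s state and intent tends to appear.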
The last of these points is one of the key “Ironies of Automation” identified by Lisanne Bainbridge in her influential 1983 paper, which I'll cover in my next post.

(If this kind of material intrigues you, I recommend checking out the newly-formed Resilience in Software Foundation. It notably has a very active Slack group of folks discussing just these kinds of topics.)

References

Bainbridge, L. (1983). Ironies of Automation. Automatica, 19(6), 775–779.

Hollnagel, E. (1999). From function allocation to function congruence. In Dekker, S., & Hollnagel, E. (Eds.), Coping with Computers in the Cockpit. Ashgate, London.

Sarter, N., Woods, D., & Billings, C. (1997). Automation surprises. In Handbook of Human Factors and Ergonomics (pp. 1926–1943). John Wiley and Sons.

Dekker, S., & Woods, D. (2002). MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination. Cognition, Technology & Work, 4, 240–244.
