Design Rules Based on Analyses of Human Error

This older paper (1983) from Don Norman is pretty interesting - exploring design approaches to performance variability.

There’s much newer and more comprehensive work on the topic (including his book and later papers), but it’s still a decent read.

Caution is advised because of the paper’s age (particularly with the psychological elements), although you may find the principles themselves haven’t changed much.

Background

Errors can sometimes be serious and are often frustrating; nevertheless, Norman argues that “There is little need for most of these errors” and that “Most are system induced, a result of inappropriate system design” (emphasis added).

Many errors “spring from insensitivity on the part of the designer to the needs and functions of users of all abilities, from novice to expert (and, despite their protests, themselves)”.

Intentions are described simply as the highest-level specification of a desired action, which may arise from conscious or unconscious processes. This high-level specification starts a chain of processing that normally accomplishes the intention.

An error in forming the intention is called a mistake, whereas an error in executing the intention is a slip. Both are important, with different underlying principles and solutions. This paper focuses primarily on slips.

Different classifications of slips exist, but at the basic level, using a very simplified model of human activity, it’s assumed that an intention activates a number of cognitive schemas, each with an activation value and a set of triggering conditions.

A schema controls behaviour “whenever the combination of its activation value and the goodness of match of its trigger conditions reaches an appropriate level”.
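
To make that selection rule concrete, here’s a minimal sketch (my own illustration in TypeScript, not code from the paper; the names Schema, selectSchema and the threshold value are assumptions) of a schema taking control when its activation and trigger match together cross a threshold.

```typescript
// Illustrative only: a toy version of the schema-selection rule described above.
// Names (Schema, selectSchema, threshold) are my own, not from the paper.

interface Schema {
  name: string;
  activation: number;   // how "primed" the schema is (0..1)
  triggerMatch: number; // goodness of match to current conditions (0..1)
}

// A schema takes control when activation combined with trigger match
// reaches an appropriate level; here, a simple product against a threshold.
function selectSchema(schemas: Schema[], threshold = 0.5): Schema | undefined {
  return schemas
    .filter(s => s.activation * s.triggerMatch >= threshold)
    .sort((a, b) => b.activation * b.triggerMatch - a.activation * a.triggerMatch)[0];
}

const candidates: Schema[] = [
  { name: "save file", activation: 0.9, triggerMatch: 0.7 },  // frequent, well-practised
  { name: "save as...", activation: 0.3, triggerMatch: 0.9 }, // intended, but less activated
];

// The frequent schema can win control even when another was intended.
console.log(selectSchema(candidates)?.name); // "save file"
```

The example also hints at how a frequently used schema can win out over the intended one, which is the mechanism behind the capture errors discussed later.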

Preventable errors in human-computer interaction are said to stem primarily from the slip types in the paper’s classification table, including mode, description, capture and activation errors.

Mode errors suggest the need for better feedback

It’s said that modes “will be with us as long as we wish to be able to do more operations than we have special keys”. A mode, simply put, is a distinct setting within a machine or program in which the same user input produces different results than it would in other settings (other modes).

Most complex devices, such as digital watches, aircraft automatic pilots and software, are said to have modes. Don says that “A large class of errors is the mode error: doing the operation appropriate for one mode when in fact you are in another”.


These occur when people believe the system is in one mode when it is actually in another, leading to inappropriate performance or actions.

Importantly, “Mode errors occur frequently in systems that do not provide clear feedback of their current state”.

Various systems provide the opportunity for mode errors: the paper refers to text editors (perhaps less relevant now, though insert/overwrite settings still catch people out), digital watches, and aircraft autopilot systems (the latter still relevant, though maybe in different ways).

There are “obvious ways” to minimise mode errors:

1. Don’t have modes

2. Ensure that modes are distinctively marked

3. Make the commands required by different modes different, so that a command in the wrong mode won’t lead to difficulty.

These suggestions can’t always be followed, and even if they are, they can also lead to other classes of difficulties. Hence, the designer “must balance the trade-offs associated with one procedure against those of another”.
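
As an illustration of suggestions 2 and 3 above, here’s a minimal sketch (my own example in TypeScript; ModedEditor, statusLine and the command names are assumptions, not from the paper) of marking the current mode distinctly and refusing commands that belong to a different mode rather than silently reinterpreting them.

```typescript
// Illustrative only: mark the current mode clearly, and reject commands
// that do not belong to it instead of silently doing something else.

type Mode = "insert" | "command";

class ModedEditor {
  private mode: Mode = "command";

  // Suggestion 2: make the current state unmistakable in the UI.
  statusLine(): string {
    return this.mode === "insert" ? "-- INSERT --" : "-- COMMAND --";
  }

  setMode(mode: Mode): void {
    this.mode = mode;
  }

  // Suggestion 3: commands valid in one mode are not quietly accepted in another.
  run(command: string): string {
    const allowed: Record<Mode, string[]> = {
      insert: ["type", "newline"],
      command: ["delete-line", "search", "quit"],
    };
    if (!allowed[this.mode].includes(command)) {
      return `"${command}" is not available in ${this.mode} mode (${this.statusLine()})`;
    }
    return `executed "${command}" in ${this.mode} mode`;
  }
}

const editor = new ModedEditor();
console.log(editor.run("delete-line")); // executed in command mode
editor.setMode("insert");
console.log(editor.run("delete-line")); // refused, with the current mode shown
```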

Some systems use rows of lights to identify the system state/mode. It’s observed that while these rows of lights are clearly well-intentioned, “they lead to another class of errors: description errors”.

[** Dr Who’s Tardis or any retro futuristic show usually have terrible HF/E interface designs…naughty futuristic time lord designers.]

Rows of identical switches or displays “invite errors in which you do the right operation on the wrong item”, and the middle lights of a row are easily confused, especially when a user is rushing or viewing them in peripheral vision.

Don suggests that it’s possible to minimise the number of system modes by appropriate design procedures, but “for complex systems, I do not believe it is possible to eliminate modes, and, moreover, the partial elimination usually creates new problems for users--usually a penalty for the expert user”.

Description errors suggest the need for better system configuration

A description error is when there’s insufficient specification of an action, and the resultant ambiguity leads to an unintended act. The unintended act is said to be often closely related to the desired one.

Often they can be amusing, like somebody fishing in a boat who, after cleaning the fish, throws the cleaned fish back into the water instead of the entrails.

Description errors also happen operationally, where they can be serious. Description errors are said to be common in the use of switches or controls, especially when the operations are similar.

The situation “is especially bad in the design of nuclear power plant control rooms, where switches and controls are laid out in neat, logical, nice-looking rows”, which results in “clear potential for confusion, for reading the wrong instruments, and for operating the wrong controls”. [Again, remembering these are 1983 nuclear plant control rooms.]

Some approaches to counter this problem:

1. Arrange instruments and controls in functional patterns, for example in the form of a flow chart of the system

2. Use of shape coding to make controls and instruments look and feel different from one another

3. Make it difficult to do the wrong or dangerous things, or things that are not reversible

For computers, these principles are presented as (a small sketch follows the list):

1. Organise screen displays and menus functionally

2. Design menu display headings to be distinct from one another

3. Make it difficult to do the wrong or dangerous action
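
As flagged above, here’s a minimal sketch (my own example; the menu labels, MenuAction and invoke are assumptions, not from the paper) of those computer-oriented principles: routine and destructive actions grouped under distinct headings, with the dangerous, hard-to-reverse item requiring a deliberate extra step.

```typescript
// Illustrative only: functionally grouped menus with distinct headings,
// and a dangerous action that cannot be triggered by a casual click.

interface MenuAction {
  label: string;
  dangerous?: boolean;
  run: () => void;
}

// Principle 1: functional grouping. Principle 2: distinct headings.
const fileMenu: Record<string, MenuAction[]> = {
  "File: routine": [
    { label: "Open", run: () => console.log("opened") },
    { label: "Save", run: () => console.log("saved") },
  ],
  "File: destructive": [
    { label: "Delete all documents", dangerous: true, run: () => console.log("deleted") },
  ],
};

// Principle 3: the wrong/dangerous thing needs a deliberate extra step.
function invoke(action: MenuAction, typedConfirmation?: string): void {
  if (action.dangerous && typedConfirmation !== action.label) {
    console.log(`Type "${action.label}" to confirm this irreversible action.`);
    return;
  }
  action.run();
}

invoke(fileMenu["File: routine"][1]);                               // saved
invoke(fileMenu["File: destructive"][0]);                           // asks for confirmation
invoke(fileMenu["File: destructive"][0], "Delete all documents");   // deleted
```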


Lack of consistency leads to errors

Another class of errors is when “a person attempts to rederive an action sequence and does so improperly, forming a sequence appropriate for an action different from the one intended”.

For example, people apply a different schema from the one that was needed, due to commonalities or overlaps between the functions.

This can occur primarily through a lack of consistency in the system structure. Similar situations occur in the interpretation of signals, like instruments. Don argues that “The basic concept involved here is that when people lack knowledge about the proper operation of some aspect of a machine, they are apt to derive the operation by analogy with other, similar aspects of the device”.

The derivation people adopt may be unconscious or conscious, and it can influence behaviour without people realising it. Forming conclusions from the relationships of one system to another is a “common and powerful method of human thought, but it can lead to error if the mapping from one domain onto the other is not consistent”.

He notes that there are instances where a lack of consistency is desirable and is deliberately designed into a system with careful thought, usually when the normal sequence is tedious and performed frequently.

The inconsistencies can also introduce error traps and make learning more difficult.

For solutions (a small sketch follows this list):

1. One could make the command structure, instruments, etc. more consistent, “even at the cost of a little efficiency”

2. Although “A better solution would be to redesign the entire system to yield both consistency and ease of operation”.
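
As a small sketch of the first solution (my own illustration; the verb–target grammar and the names here are assumptions, not from the paper), a single consistent command pattern lets a user correctly derive unfamiliar operations by analogy with familiar ones.

```typescript
// Illustrative only: one consistent command grammar (verb then target) across
// the whole system, so analogy from one part still works in another.

type Verb = "open" | "close" | "show" | "hide";
type Target = "file" | "panel" | "console";

// Every command follows the same verb–target pattern; there are no special
// cases that a user would have to memorise separately.
function command(verb: Verb, target: Target): string {
  return `${verb} ${target}`;
}

// A user who learns "open file" can correctly guess "open panel" and
// "open console"; the mapping stays consistent across the system.
(["file", "panel", "console"] as Target[]).forEach(t =>
  console.log(command("open", t))
);
```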

Capture errors imply the need to avoid overlapping command sequences

A capture error “occurs when there is overlap in the sequence required for the performance of two different actions, especially when one is done considerably more frequently than the other”.

While somebody attempts the infrequent action, the more common act gets done.

For solutions:

1. One way is to minimise overlapping sequences, although this may not be possible

2. Another way is to try and catch the issue where it occurs; for instance, “If the system knows what the intention of the user is (perhaps by requiring the user to indicate the overall intention), it could be designed so that at the critical choice point the proper path was flagged or in some other way brought to the attention of the operator”

Building on the above, sufficient feedback about the system’s state should also be provided.
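
Here’s a minimal sketch (my own example; the printing scenario, sequences and nextStep are assumptions, not from the paper) of that second suggestion: once the user’s overall intention is known, the point where the frequent and infrequent sequences diverge is flagged so the common path doesn’t capture the action.

```typescript
// Illustrative only: flag the critical choice point where a declared,
// infrequent intention diverges from the frequent, habitual sequence.

type Intention = "print draft" | "print final";

// The two sequences share their opening steps and diverge at "choose quality".
const sequences: Record<Intention, string[]> = {
  "print draft": ["open dialog", "choose printer", "choose quality: draft", "print"],
  "print final": ["open dialog", "choose printer", "choose quality: high", "print"],
};

function nextStep(intention: Intention, stepsDone: number): string {
  const step = sequences[intention][stepsDone];
  const other: Intention = intention === "print draft" ? "print final" : "print draft";
  // At the point where the sequences diverge, draw attention to the proper path.
  const diverges = sequences[other][stepsDone] !== step;
  return diverges ? `>>> ${step} <<< (this is where "${other}" would differ)` : step;
}

// The infrequent intention gets its divergence point highlighted.
sequences["print final"].forEach((_, i) => console.log(nextStep("print final", i)));
```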

Activation issues suggest the importance of memory reminders

Activation errors involve two classes:

1. inappropriate actions are performed: inappropriate action sequences are activated either by being related to a desired sequence or through prompts from the world. It’s often driven by a memory failure

2. appropriate actions aren’t performed

Memory aids may be important for preventing the latter class (appropriate actions not performed), whereas the former “form of activation error may very well not be preventable”. Since the former may not be entirely preventable, it’s suggested that “the system should be designed to be tolerant of them”.

It’s argued that if a set of operations is interrupted with other activities, steps in the original sequence may be forgotten; and a “good system design will not let this happen”, but will “redisplay uncompleted sequences (or unanswered questions) whenever there is a chance that they are no longer visible to the user”.
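
A minimal sketch of that reminder idea (my own illustration; TaskTracker and its method names are assumptions, not from the paper): track steps that were started, and re-surface anything left incomplete whenever it might no longer be visible.

```typescript
// Illustrative only: keep interrupted steps visible so they are not forgotten.

interface PendingStep {
  description: string;
  completed: boolean;
}

class TaskTracker {
  private steps: PendingStep[] = [];

  start(description: string): void {
    this.steps.push({ description, completed: false });
  }

  complete(description: string): void {
    const step = this.steps.find(s => s.description === description);
    if (step) step.completed = true;
  }

  // Redisplay uncompleted steps whenever they may have scrolled out of view.
  remindIfPending(): void {
    const pending = this.steps.filter(s => !s.completed);
    if (pending.length > 0) {
      console.log("Still waiting on:", pending.map(s => s.description).join(", "));
    }
  }
}

const tracker = new TaskTracker();
tracker.start("attach file to message");
tracker.start("enter recipient address");
tracker.complete("enter recipient address");
tracker.remindIfPending(); // Still waiting on: attach file to message
```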

People will err, so make the system insensitive to them

Wrapping up the findings, it’s argued that:

1. People will err even in the best designed systems

2. In managing the effects of error, systems should allow actions to be reversible as far as possible (see the sketch after this list)

3. Not everything is reversible (e.g. ejecting from a military plane), so safety checks/confirmations for these irreversible actions should be designed in

4. Irreversible actions could require “considerable mental force”, and this goes beyond just asking somebody to confirm something; if confirmation is routinely requested, people will naturally tick and flick. Hence, “if the command is given in error, it is likely to have the confirmation invoked as part of the same error; in our experience, the confirmation is as apt to be in error as much as the original command”.
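
As mentioned in point 2, here’s a minimal sketch (my own illustration; ActionHistory, performIrreversible and the scenario are assumptions, not from the paper) of keeping routine actions reversible via an undo stack, while irreversible actions demand something that can’t be given on autopilot, such as re-typing the name of the thing being destroyed.

```typescript
// Illustrative only: an undo stack for routine actions, plus a deliberate
// extra step for the few actions that cannot be undone.

interface ReversibleAction {
  apply: () => void;
  undo: () => void;
}

class ActionHistory {
  private done: ReversibleAction[] = [];

  perform(action: ReversibleAction): void {
    action.apply();
    this.done.push(action);
  }

  undoLast(): void {
    const last = this.done.pop();
    last?.undo();
  }
}

// A routine yes/no confirmation is likely to be given as part of the same slip,
// so demand something that cannot be done reflexively: re-type the target's name.
function performIrreversible(targetName: string, typedName: string, action: () => void): void {
  if (typedName !== targetName) {
    console.log(`To proceed, type the exact name "${targetName}".`);
    return;
  }
  action();
}

const history = new ActionHistory();
let text = "hello";
history.perform({ apply: () => (text = text.toUpperCase()), undo: () => (text = "hello") });
history.undoLast(); // text is "hello" again

performIrreversible("project-archive", "project-archve", () => console.log("deleted"));
// prompts for the exact name instead of deleting
```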

Providing a summary of lessons:

1. Provide feedback, so that the state of the system is clearly available to the user in a form that’s unambiguous and makes the set of options readily available, avoiding mode errors

2. Different classes of actions should have dissimilar command sequences/menu patterns to avoid capture/description errors

3. Actions should be reversible as much as possible, and if irreversible and of high consequence, make it difficult to do

4. Maintain consistency throughout the system’s structure and design, minimising memory problems

5. Start with psychological mechanisms, using knowledge of processing mechanisms to derive important constraints

6. Base the design around people’s mental models; people form mental models of the world and of systems and how they interact, and use them to predict system behaviour and guide action. However, “People's mental models ... have interesting properties, sometimes being derived from idiosyncratic interpretations of the system”.

Mental models “must operate within the constraints of the human processing system”, and thus studying mental models provides an important tool for understanding human-system interaction.

7. Draw on human performance in a variety of situations to construct an analysis of the appropriate form of human-machine interfaces.

Link in comments.

Reference: Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26(4), 254-258.

William S. Brown

Erstwhile Human Factors Scientist at Brookhaven National Laboratory

4 months ago

“…you may find the principles themselves haven't changed a lot.” Indeed. One would expect that the principles would not have changed at all - permanence being their defining quality. Thanks for the post. I’ll see your 1983 and raise you 1981.

Mike Gobbo

Director of Operations, EHS & Security

4 months ago

Design of everyday things should be on every safety pro’s book shelf

Ben Hutchinson

HSE Leader / PhD Candidate

4 months ago

Study link: https://dl.acm.org/doi/pdf/10.1145/2163.358092 | My site with more reviews: https://safety177496371.wordpress.com
