Failure to See: The Major Cause of Accidents

This article presents the science of accident causation that I used to develop and prove my Lenses, which negate the blind spots caused by pillars, mirrors, vehicle bodies and spectacles. Accident prevention saves lives. There are many things that can cause accidents, and this article explains them well.

"Carriages without horses shall go,

And accidents fill the world with woe."

- Mother Shipton (c. 1530)

A comprehensive study of road safety (Treat et al., 1977) found that human error was the sole cause in 57% of all accidents and was a contributing factor in over 90%. In contrast, only 2.4% were due solely to mechanical fault and 4.7% were caused only by environmental factors. Other studies have reported similar results.

Why do humans make so many driving errors? The answer is that they don't. Such studies are highly misleading. As Rumar (1982) notes:

"they tend to use human factors as a scrap box. Every accident behind which we do not find any technical error tends to be explained by the human factor."Humans have limited information processing abilities and must rely on three fallible mental functions: perception, attention and memory. When a driver fails to avoid an accident because the situation exceeds these limitations, it is often called "human error." In reality, it is often the situation that is primarily responsible, not the driver's response to it. It is a well known bias of human judgment to commit the "fundamental attribution error," to vastly overrate human factors to vastly underrate situation factors when trying to explain why events have occurred.

In this article, I shall provide a brief overview of human information processing limitations and explain how they can interact with situational factors to contribute to road accidents. This is a "first-principles" approach to accident investigation because it draws on knowledge of basic human psychological processes. Instead of looking at the driver from the outside, I try to understand his/her mental processing and how it interacts with the environment.

However, the overview is general, so I will ignore many details and qualifications that would be required in a more scientific treatment. Moreover, the article discusses only information processing and leaves response, reaction time, etc. for another day. Lastly, although cast in terms of road accidents, a similar analysis would apply to other areas of human-machine error.

2.0 Human Information Processing Overview

People driving down a highway are bombarded with a steady flow of information. Most of the information is visual input: the road itself, other vehicles, pedestrians, signs, the passing scenery, etc. Moreover, the driver may be processing other information sources such as auditory input (listening to the radio, talking on a cell phone, carrying on a conversation with a passenger) or internal input (remembering directions or planning what to make for dinner).

If the visual information flow is low, there may be enough mental resource to carry on all tasks simultaneously. But attentional demands may exceed supply when:

  • the flow becomes a torrent (driving fast)
  • the information is low quality (poor visibility)
  • resources must be focused on a particular subset of information (a car close ahead)
  • the driver's capacity is lowered by age, drugs, alcohol or fatigue.

There may not be enough mental resource for all tasks. The driver then "attends" only a subset of the available information, which is used to make decisions and to respond. All other information goes unnoticed or slips from memory.

In sum, information processing works like this: the information from the visual and possibly auditory environment is detected by the senses ("preattentive" stage) while other information may be recalled from memory. If there is too much to process, the driver attends an information subset and the rest is ignored ("attentive" stage). Lastly, the driver makes a decision and possibly a response based on the attended information.
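To make this sequence concrete, here is a toy Python sketch of the stages just described. It is an illustration only, not a validated cognitive model: the fixed attention capacity, the salience numbers and the item fields are all invented for the example.

```python
# Toy illustration of the processing sequence described above (not a
# validated cognitive model): preattentive registration of everything,
# an attention filter that passes only what capacity allows, and a
# decision made only on the attended subset.

ATTENTION_CAPACITY = 3  # arbitrary; real capacity varies with conditions

def preattentive_stage(scene):
    """Register raw sensory attributes for every item in parallel."""
    return [{"color": item["color"], "location": item["location"],
             "salience": item["salience"], "meaning": None}
            for item in scene]

def attention_filter(registered, capacity=ATTENTION_CAPACITY):
    """Pass only the most salient items; the rest are lost."""
    ranked = sorted(registered, key=lambda x: x["salience"], reverse=True)
    return ranked[:capacity]

def attentive_stage(attended):
    """Interpret the attended items (attach meaning) and decide."""
    for item in attended:
        item["meaning"] = "pedestrian" if item["color"] in ("blue", "white") else "other"
    return ["brake" if item["meaning"] == "pedestrian" else "continue"
            for item in attended]

scene = [
    {"color": "blue", "location": "periphery", "salience": 0.2},  # Ms. Y's coat
    {"color": "green", "location": "ahead", "salience": 0.9},     # street sign
    {"color": "red", "location": "ahead", "salience": 0.8},       # brake lights
    {"color": "grey", "location": "ahead", "salience": 0.7},      # road surface
]
decisions = attentive_stage(attention_filter(preattentive_stage(scene)))
print(decisions)  # the low-salience blue blob never reaches the attentive stage
```

With an assumed capacity of three, the low-salience blue blob is registered preattentively but never attended, so it is never interpreted as a person.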

Research has shown that accidents occur for one of three main reasons. The first is perceptual error. Sometimes critical information was below the threshold for seeing: the light was too dim, the driver was blinded by glare, or the pedestrian's clothes had low contrast. In other cases, the driver made a perceptual misjudgment (a curve's radius or another car's speed or distance). The second, and far more common, cause is that the critical information was detectable but the driver failed to attend to or notice it because his or her mental resources were focused elsewhere. Often, a driver will claim that he or she did not "see" a plainly visible pedestrian or car. This is entirely possible because much of our information processing occurs outside of awareness. Mack and Rock (1998) have shown, remarkably, that we may be less likely to perceive an object if we are looking directly at it than if it falls outside the center of the visual field. This "inattentional blindness" phenomenon is doubtless the cause of many accidents.

Lastly, the driver may correctly process the information but fail to choose the correct response ("I'm skidding, so I'll turn away from the skid") or make the correct decision yet fail to carry it out ("I meant to hit the brake, but I hit the gas"). I will not discuss response errors, but see "Medical Error and Mental Acts of God."

2.1 A Hypothetical Example

To illustrate how analysts use this information processing approach to investigate accident causes, I will use a hypothetical example. A common situation occurs when a driver strikes an "object" (another car, pedestrian or bicycle) and the analyst must attribute the accident's cause. (I'll say "object" in order to avoid using the standard laboratory term, "target"!)

Mr. X, age 55, is driving down a secondary road, Hobart St., at 9:00 PM in an unfamiliar part of town. He is late because he promised to pick up his wife at 8:45. Mr. X is listening to the hockey game on the car radio while he looks for Front St., where his wife said to turn in order to reach his destination. Ms. Y, wearing a dark blue coat and white hat, crosses in the middle of Hobart St. without looking. Mr. X does not see her and strikes Ms. Y with his car. Police arrive and question Mr. X, who says that he never saw the pedestrian. Mr. X admits that he has had a few beers, but his blood alcohol content is .06, within the legal limit. The police do not charge him with DUI. What caused the accident?

3.0 Detailed Description of Information Processing Stages

3.1 Preattentive Stage and Attention

The figure[1] below schematically depicts the two information processing stages, "preattentive" (or "ambient") and "attentive" (or "focal"). Visual information is detected by the most elementary parts of the nervous system (the eyes, ears, etc.) in the preattentive stage. At this point, the visual input is coarsely processed for raw sensory attributes such as color, shape, size, and location in the visual field. Meaning is not attached to an object, so Mr. X's information processing system might register a blob of blue (coat) or white (hat) in the visual field, but would not yet interpret the blob as a person. In fact, he would not be consciously aware that it was there.


This preattentive stage has four important properties:

  • It is automatic and occurs without volition, so we are unaware that we are doing it.
  • Information remains in sensory memory for only a small fraction of a second. If it does not penetrate the attention filter, it is then permanently lost.
  • It analyzes only basic properties such as color, size, location, etc.; the meaning of the blue blob remains unknown.
  • It has a very large capacity. It can process the entire visual field simultaneously.

This last property is critical, because the vast quantity of information is too large for subsequent processing stages to handle. There needs to be a mechanism for selecting an information subset for more detailed analysis.

This mechanism is called "attention" and is sometimes depicted as a spotlight that focuses processing on a selected part of the visual field - it defines an area of 3-D space for detailed examination. Attention is usually viewed as a filter the driver uses to focus his limited mental resources on important parts of the visual field and to exclude extraneous parts.

To see how this all works in practice, imagine a driver moving through the environment. Some sensory information (a blob of blue) registers in the peripheral field, where acuity is low. Something is there, but the driver doesn't know what it is. Next, the driver involuntarily moves his eyes and the attentional spotlight toward the object for further processing. In doing so, the driver causes the object's image to fall on the fovea, the area of the retina with the highest resolution. The blob becomes a well-defined shape.

Note that the driver's eye is automatically drawn to the potential object. Given that there are many objects in the visual field, why is the driver's attention drawn to any one in particular? Research shows that some object properties make them "pop out" and automatically attract attention. This is a complex topic (e.g., Green, 1991; Green, 1992; Wang, Cavanagh and Green, 1994), but generally speaking, objects are more likely to pop out and be conspicuous if they:

  • are large
  • have high brightness contrast
  • move or flicker rapidly or suddenly appear
  • are meaningful. We can often "automatically" detect and respond to a highly familiar cue - if someone says our name, we immediately notice.
  • are expected

This automatic attraction of attention is important in driving. Research shows that drivers spend half or more of their time looking directly ahead to the point where the road meets the horizon (generally the focus of expansion). If it weren't for pop out, the driver would fail to see any object that was not straight ahead on the road.
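As a rough illustration of how such properties might combine into conspicuity, the sketch below computes a weighted score and compares it against a "pop-out" cut-off. The weights and the threshold are assumptions made up for this example; they are not values from the studies cited above.

```python
# Illustrative only: a weighted conspicuity score for an object in the
# visual field. The weights and threshold are assumptions, not values
# from the literature.

POP_OUT_THRESHOLD = 0.5  # assumed cut-off for "automatically attracts attention"

def conspicuity(size, contrast, motion, meaningful, expected):
    """Each argument is a 0-1 rating of the corresponding property."""
    weights = {"size": 0.2, "contrast": 0.3, "motion": 0.3,
               "meaningful": 0.1, "expected": 0.1}
    score = (weights["size"] * size + weights["contrast"] * contrast +
             weights["motion"] * motion + weights["meaningful"] * meaningful +
             weights["expected"] * expected)
    return score, score >= POP_OUT_THRESHOLD

# Ms. Y at night: small image, low-contrast coat, walking (some motion),
# meaningful (a person) but unexpected mid-block.
print(conspicuity(size=0.3, contrast=0.2, motion=0.4, meaningful=0.8, expected=0.1))
# (0.33, False) -- under these assumed ratings she does not pop out
```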

This simple spotlight-and-pop-out model, however, ignores a few details. The attentional beam has variable intensity, so the driver may examine a large area with low attention or a small area with great attention. On a sunny day with no distractions, the driver can open the beam up and take in the entire scene. On a dark night in rain, visibility is poor, so the driver might narrow the beam and make it more powerful. If the driver sees a hazard such as a stalled car, he might narrow the beam even more and direct all its power on the car. Attention has a fixed capacity, which can be distributed to different purposes.

Don't take the beam metaphor too literally, however. The driver can divide attention between the road and a cell phone conversation, but both the conversation and the visual input draw from a common attentional reservoir. There is no problem as long as there is enough attention to go around. If conditions (high speed, poor visibility, cell phone static, etc.) cause the attentional demand to exceed the supply, then the driver cannot attend to all tasks simultaneously and some information will be lost. In addition, people can direct attention toward specific objects rather than to locations in space. A driver looking for a blue building will notice blue objects more readily.

Lastly, there are two distinct sources of attentional control. As described above, attention may be automatically attracted. In addition, however, the driver can also voluntarily control the beam, as he does when scanning the visual field.

3.2 Attentive Stage and Working Memory

Sensory information passed through the attentional filter resides temporarily in a processing stage called "working" or short-term memory. Working memory is like a scratch pad where people collect the information (visual, auditory, knowledge stored in the permanent long-term memory) needed to interpret sensory input and to make decisions. Working memory, however, has two severe limits that often play a role in accidents:

  • Information remains in working memory for a short time, perhaps 30 seconds, if it is not used or refreshed. The driver could refresh working memory, for example, by continuously looking at the blue blob. Once the driver looks away, the blue blob must be processed or it will be lost within a very short time.
  • Older information may be flushed out at any time by newer input. Working memory has very low capacity, so new information may chase out old. For example, several studies show that recall of road signs is remarkably poor: researchers stopped drivers a few hundred yards after a road sign and found that recall was as low as 18%, although the signs had been seen only seconds before. Presumably, new information had pushed the signs out of working memory. Since working memory records all sorts of information, a few words from the radio or a cell phone could also fill it up and cause other objects to be forgotten. (A toy sketch of these two limits follows this list.)
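Here is a toy sketch of those two limits. The roughly 30-second lifetime comes from the text above; the capacity of four items and the buffer mechanics are illustrative assumptions, not a model of real memory.

```python
import time

# Toy working-memory buffer: items decay after ~30 seconds unless refreshed,
# and a small fixed capacity means new items displace the oldest.
# The capacity of 4 is an arbitrary illustrative choice.

CAPACITY = 4
LIFETIME_S = 30.0

class WorkingMemory:
    def __init__(self):
        self.items = []  # list of (content, time_stored)

    def store(self, content, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        self.items.append((content, now))
        if len(self.items) > CAPACITY:
            self.items.pop(0)  # oldest item is displaced by newer input

    def refresh(self, content, now=None):
        """Reset the timer on an item, e.g. by looking at the object again."""
        now = time.time() if now is None else now
        self.items = [(c, now if c == content else t) for c, t in self.items]

    def recall(self, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        return [c for c, _ in self.items]

    def _expire(self, now):
        self.items = [(c, t) for c, t in self.items if now - t < LIFETIME_S]

wm = WorkingMemory()
wm.store("speed-limit sign", now=0)
for t, item in enumerate(["radio score", "wife's directions", "street sign",
                          "blue blob"], start=1):
    wm.store(item, now=t)
print(wm.recall(now=5))   # the sign has already been displaced by newer input
print(wm.recall(now=40))  # everything unrefreshed has decayed
```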

Perhaps the best way to understand the limitations of working memory is by means of the classic "Cocktail Party Phenomenon," which everyone has experienced. You are at a cocktail party, having a conversation with someone. You understand the words of your partner. You are also aware of the buzz of other conversations, although they are unintelligible. In terms of information processing, the system is decoding those conversations only as far as the sensory level and not for meaning. We are so fast at interpreting speech sounds that we are generally unaware that detecting the sounds and interpreting them are separate mental processes. The buzz sounds come into working memory, but you do not have the capacity to interpret both your partner's "sounds" and those of other conversations in the room.

However, someone behind you might say your name. This automatically attracts your attention to this other conversation. You can now understand that conversation but your partner's words become a meaningless buzz. If you try to switch back to your partner, the first thing out of your mouth will likely be "What did you just say?" because his last words, detected as a meaningless buzz, if at all, are already gone.

We can now at least partially answer the question as to why people can look directly at an object and still not see it. First, we are not conscious of sensory input until it is stored in working memory. If it doesn't get through the attentional filter, it doesn't exist for us. Second, once stored in working memory, information is easily forgotten. If we haven't refreshed or stored the information in long-term memory, it may be lost.

3.3 Attentive Processing and Long-Term Memory

Once in working memory, the driver interprets the blue shape's meaning by finding information in another area of memory called "long-term" memory. This is the permanent store of information and knowledge that we all carry around in our heads.

Recall that attention can be controlled automatically or purposefully. Some retrieval from long-term memory (as when recognizing a familiar object) seems to occur automatically with little or no attentional expenditure. However, sometimes we actively search memory (as when trying to recall instructions or making plans). This requires attentional resources and adds a load to working memory. In other words, thinking or recalling information can also cause information to be lost from working memory.

4.0 What Caused the Collision?

In the hypothetical situation described above, the accident would not have occurred if everything had worked properly. Mr. X would:

  • detect Ms. Y's blue coat or white hat as a blob
  • turn eyes toward her to define the shape
  • retrieve the necessary information from memory to identify the shape as a person
  • decide to apply brakes
  • apply brakes

We will discuss how the accident conditions relate to the first three steps.

4.1 Preattentive Processing: Sensory Detection

The starting point of any visual analysis is the question: should Mr. X have detected Ms. Y? After all, if the conditions had made Ms. Y undetectable at the sensory level (it was too dark, etc.), then no further information processing would have been possible.

"Contrast" is the most important variable in determining whether Ms. Y was detectable. An object's visibility is determined, not by its absolute brightness (actually "luminance")or color, but by the relative brightness or color between the object and its background. If visibility limitation is a possible factor, then it is important to perform a visibility analysis: determine the viewer's eye position and then measure the light coming from the object and also the light coming from the background. Finally, calculate the contrast.

The next step is to determine whether the actual contrast was sufficiently high for object detection. This is not straightforward, however, since many factors affect the minimum contrast necessary to see an object in a given set of circumstances. These factors can be divided into two classes, environmental and driver:

Environmental

  • Size: Size is not the physical size in inches or centimeters but rather "visual angle," which roughly gives the size of the retinal image.
  • Distance: Generally speaking, the closer the object, the more visible it is; visual angle grows with decreasing distance.
  • Visual field location: Vision is best when objects are imaged on the fovea, the highest-resolution part of the eye. This occurs when the driver looks directly at the object. If the driver saw the object in the peripheral field (the corner of the eye), then the visibility estimate must be lowered to account for the reduced vision. There may be exceptions, however, as moving objects may become more visible in the periphery.
  • Duration: Visibility increases with longer duration, although there are a few exceptions to this rule.
  • Motion/Flicker: These can make an object more visible. The influence of motion on visibility depends, however, on several other factors such as size and velocity.
  • Masking and Camouflage: Objects are also harder to see when the background has forms or textures and easiest when the background is uniform.
  • Glare: Humans adapt to the light levels around them. When a very bright light, one that is far above current adaptation level, suddenly appears, it can reduce visibility (disability glare) and cause drivers to look away (discomfort glare). The glare effect is most obvious at night when the driver is adapted to a lower brightness. The sudden appearance of headlights can temporarily blind. Even after the headlights pass, vision is still poorer due to their effect on driver light adaptation level. Glare effects increase greatly with age and are major problems for older drivers.
  • Weather: Rain, snow and fog all decrease visibility.

Driver

  • Age: Contrast sensitivity falls with age. The effect is small until about age 45, after which it increases rapidly. Moreover, older drivers are more likely to have eye diseases, which further impair vision.
  • Adaptation State: Visibility is best when the driver is adapted to the same mean luminance as the background.
  • Optical Status: Visibility decreases when the driver is not wearing optical correction for the viewing distance.
  • Arousal Level (sleepy vs. awake): Humans are often less able to detect objects when their arousal level is low. Fatigue, alcohol, drugs and other medication can affect arousal level.
  • Uncertainty: Visibility is best when the viewer knows when and where the object will appear. Any spatial or temporal uncertainty raises threshold. Most real-world viewing situations involve at least some uncertainty.
  • Expectation: Viewers can be greatly affected by their expectations. If a driver comes to the same intersection every day and has never seen a pedestrian there, it is less likely that he or she will see a figure walking out from behind a car. Research suggests that humans inhibit attention in visual field locations where input is not expected.

A visibility analysis would note that Ms. Y was wearing a dark blue coat, which would have little contrast against the dark background existing at 9:00 PM. On the other hand, the white hat would show up very well. The hat, however, is small compared to the coat, so despite its higher contrast it might still be less visible. Of course, if the background were bright, say a brightly illuminated shopping strip, then the dark coat might be highly visible and the white hat relatively hard to see. In an actual investigation, the analyst would use a light meter to take readings of the pedestrian's clothing and the background and then estimate size and distance in order to calculate exact values. The readings would ideally be taken at the same date and time and under the same weather conditions as the actual accident. If that is not possible, then the analyst would have to use other sources of data to estimate contrast.

If Mr. X were looking straight ahead, or perhaps searching for the Front St. sign, Ms. Y would likely be seen only in the low-resolution peripheral field as she steps off the curb. This significantly increases the contrast needed to see her. Further, note the interesting paradox that as Mr. X approaches Ms. Y, her image becomes bigger (and more detectable) but falls further into the peripheral field (making her less detectable). If Ms. Y were running, the motion would increase her visibility more than if she had strolled casually. Any car headlights or bright neon signs causing glare would further increase the necessary contrast.
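The size side of this paradox can be made concrete with the standard visual-angle formula. The pedestrian height and viewing distances below are invented for illustration.

```python
import math

# Visual angle roughly gives retinal image size (see the "Size" factor above).
# As the car approaches, the pedestrian's visual angle grows, but if the
# driver keeps looking straight ahead, her image also falls further into the
# peripheral field. The 1.7 m height and the distances are illustrative.

def visual_angle_deg(object_size_m, distance_m):
    """Angle subtended by an object of the given size at the given distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

for distance in (60, 30, 15):
    print(f"{distance:>3} m: {visual_angle_deg(1.7, distance):.1f} deg")
# 60 m: 1.6 deg, 30 m: 3.2 deg, 15 m: 6.5 deg -- bigger, hence more detectable,
# even as the image drifts further from the line of sight.
```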

Lastly, the contribution of some environmental factors is very difficult to estimate numerically. More often than not, there is no simple way to factor in the effects of background masking, driver light adaptation, odd shapes, etc.

Now for the driver. Mr. X is 55 years old, so there is an age-related loss of contrast sensitivity, a factor of about 1.8. Moreover, he had had a few beers, so his blood alcohol level was .06. Although this is below the legal limit, research shows that a .06 BAC is high enough to seriously impair vision. This is an important point to remember for litigation: even though a driver is within legal limits, he may still be functionally impaired, especially if there are negative environmental factors such as low lighting or poor roadway design. By the way, was he wearing optical correction? Was the correction appropriate for the viewing distance? Does he have any eye disorders?
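To show how such multipliers would enter the analysis, the sketch below scales a baseline contrast threshold by the age factor of about 1.8 mentioned above. The baseline threshold and the alcohol multiplier are illustrative assumptions, not measured values.

```python
# Illustrative only: how factor multipliers raise the contrast a driver
# needs in order to detect an object. The 1.8 age factor is from the text;
# the baseline threshold and the BAC multiplier are assumptions.

BASELINE_THRESHOLD = 0.05  # assumed threshold contrast for a young, sober, foveal viewer
AGE_55_FACTOR = 1.8        # loss of contrast sensitivity at age 55 (from the text)
BAC_06_FACTOR = 1.5        # assumed impairment multiplier for a .06 BAC

required = BASELINE_THRESHOLD * AGE_55_FACTOR * BAC_06_FACTOR
print(f"required contrast: {required:.3f}")  # 0.135, vs. the 0.05 baseline
# If the actual contrast of Ms. Y against her background falls below this
# value, Mr. X would not be expected to detect her at the sensory level.
```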

Mr. X knows that pedestrians usually cross at intersections and has developed an expectation that pedestrians, if they appear, are likely to be there. He would not expect Ms. Y to cross in the middle of the block, further decreasing detectability. If Mr. X had frequently driven down the same stretch of road and never seen a pedestrian there before, his expectation that no pedestrian was likely to appear would be even stronger.

In this case, many factors would have made Ms. Y difficult to see: the low light level of nighttime driving, the low contrast of her dark coat against a dark background, her location in the peripheral field, the driver's age, his blood alcohol level, and his expectations.

4.2 Attentive Processing: Attention and Working Memory

Let's assume that Ms. Y's contrast level was above detection threshold. The next step is to assess the likely operation of attention and working memory. We would want to look at all sources of input to working memory and examine any factors affecting Mr. X's attentional capacity.

Mr. X was driving on a dark, unfamiliar street with low visibility and looking for the Front St. sign. He was possibly listening to the radio and/or trying to recall his wife's directions. Since he was late, he was probably driving fast.

All of these factors would combine to stress attentional capacity. The large number of information sources (visual, radio, recall) and the low-visibility conditions would overload attention, so some information would be ignored. The visual attention beam would undoubtedly become very narrow and weaker (to conserve resources), so he would have a very difficult time seeing objects in the peripheral field. Since he would probably be looking either directly ahead or up at street signs, the chances of seeing Ms. Y, crossing at an unexpected location in the middle of the block, would be very poor.

The fast driving would cause working memory to continually fill and require the rapid loss of old information. It is quite possible that Mr. X could have looked directly at Ms. Y but still not recall seeing her either because the information was filtered out due to attention being allocated elsewhere (listening to the radio, recalling directions, planning the next turn, etc.) or was displaced from working memory before it could be properly interpreted and stored in long-term memory.

Moreover, factors lowering Mr. X's attentional capacity undoubtedly contributed to the accident. At 55 years old, his age probably had a modest effect. The .06 BAC also likely contributed to lowering his attentional capacity.

4.3 Conclusion

The accident was probably caused by a large number of factors working in concert: the driver's hurry, age, attention being shared across several inputs (radio, road and recall), moderate blood alcohol level, uncertainty about the directions and unfamiliarity with the street. Factors such as headlight glare and optical correction may have also played a role.

Ms. Y's low-visibility clothing also contributed by making her less conspicuous, even if she was above detection threshold. Lastly, she crossed the street at an unexpected location, which made detection more difficult still.

5.0 Final Remarks

This article has provided a general overview of how human information processing can be used to determine accident causes. However, somewhat different analyses might be performed for other accident types. For example, this accident didn't involve perceptual misjudgment, a frequent cause of accidents. Drivers often misjudge road curvature, the speed of their own or another vehicle, distance, etc. Knowledge of human perceptual processes can also be used to analyze accidents involving such misjudgments.

Lastly, accidents sometimes occur because drivers accurately perceive and interpret information but fail to respond appropriately because they make the wrong decision or because they make the right decision but perform the wrong response.

Footnotes

[1] Recent research suggests that this model is too simple. However, it is accurate enough to convey the important concepts.

References

Green, M. (1991). Visual search, visual streams and visual architectures. Perception & Psychophysics, 50, 388-403.

Green, M. (1992). Visual search: detection, identification and localization. Perception, 21, 765-777.

Green, M. (2003). Skewed view: accident investigation. Occupational Health & Safety Canada, June, pp. 24-29.

Mack, A. and Rock, I. (1998). Inattentional Blindness. Cambridge, MA: MIT Press.


Rumar, K. (1982). The human factor in road safety. Invited paper at the 11th Australian Road Research Board Conference.

Treat, J. R., Tumbas, N. S., McDonald, S. T., Shinar, D., Hume, R. D., Mayer, R. E., Stansifer, R. L. and Castellan, N. J. (1977). Tri-level study of the causes of traffic accidents. Report No. DOT-HS-034-3-535-77 (TAC).

Wang, Q., Cavanagh, P. and Green, M. (1994). Familiarity and pop-out in visual search. Perception & Psychophysics, 56, 495-500.


