Artificial Intelligence and the "Single Pilot" issue
Dr. Jose Sanchez-Alarcos (Eur. Erg.)
Aviation Psychologist. EASA external expert in Aircraft Design and Production, Aircraft maintenance, Mgmt & Org, New and emerging technologies and others. Member of European Commission Human Factors Group in ATM.
I would like to start by introducing Stuart Russell, co-author with Peter Norvig of the text widely regarded as the Bible of artificial intelligence: "Artificial Intelligence: A Modern Approach". In the quotation in the picture, about driverless vehicles, Russell points out something that every Human Factors specialist knows very well:
When extreme automation fails and hands control back to the human, the cues vital to the decision may be missing. Those cues would have been accumulating had the driver stayed involved in the process; when the system suddenly returns control, they are absent and, therefore, the human as an emergency resource does not work. It is like a fireman whose hose has been taken away.
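To make the "fireman without a hose" image concrete, here is a deliberately simplified toy sketch in Python (invented classes and numbers, not any real autopilot logic). The point is only the asymmetry it illustrates: the information needed for a good takeover existed, but it lived inside the automation and was never shown to the person expected to take over.

```python
# Toy illustration only (invented classes and numbers, not any real autopilot
# logic): the automation silently absorbs a growing disturbance, so the context
# it builds up never reaches the supervising human. When it disconnects, the
# human inherits the emergency without the cues the automation had been using.

class Automation:
    def __init__(self):
        self.corrections = []            # context accumulated while in control

    def step(self, disturbance):
        correction = -disturbance        # compensates without telling anyone
        self.corrections.append(correction)
        return correction


def human_takeover(observed_context):
    """What the human can do depends entirely on the context actually shown."""
    if not observed_context:
        return "no trend information: diagnose from scratch under time pressure"
    return "trend known: act immediately"


auto = Automation()
for gust in [0.2, 0.5, 1.1, 2.3]:        # disturbance grows, but stays masked
    auto.step(gust)

# The automation gives up; the supervising pilot was never shown the trend.
print(human_takeover(observed_context=[]))
# The same takeover with the automation's own history would look very different.
print(human_takeover(observed_context=auto.corrections))
```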
Here are some aviation-specific situations that help clarify what can and cannot be done. This is especially relevant at a time when major players are trying to sell the idea of the "single pilot" aided by new-generation automatic systems and artificial intelligence. Let's look at them:
Among the many things that happened on this flight, an engine explosion severed the data wires connecting sensors to the information systems they fed. Consequently, the pilots had to perform many calculations by hand, using whatever information remained available, or run improvised tests to find out how much control of the aircraft they still had.
An engine explosion severed the hydraulic lines to the flight controls, and a pilot who happened to be flying as a passenger ended up steering the aircraft with the engine throttle levers, the only control still available.
A frozen sensor produced erroneous speed data and misleading indications, to the point that, at times, the alarms did not sound because the aircraft's systems concluded that the aircraft was not flying but sitting on the ground.
The freezing of two sensors caused the systems to misread the situation: the nose of the aircraft rose and the aircraft stopped responding to the flight controls. The subsequent investigation indicated that the aircraft would still have responded to the pitch trim.
In the famous "Hudson River landing", the double engine shutdown was foreseen by the manufacturer above 20,000 feet, not during the initial ascent. The checklists were long and unhelpful, and the systems could not give information on the ability to reach the airport or not.
We could add many more cases to this list, but these are enough to illustrate an idea: advanced systems, with or without artificial intelligence, can be a help when everything is going well; in certain circumstances, however, not only do they fail to help, they can make the situation worse.
What would happen with a single pilot, supposedly assisted by advanced information systems? The cases above are illustrative enough: an unforeseen situation, a sensor failure, or a disconnection between the sensors and the systems they feed can make the situation unmanageable.
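A common objection is that redundancy and plausibility checks already cover the sensor problem. The minimal sketch below (with invented values, not any certified voting scheme) shows the limit of that argument: voting among redundant sources survives one bad sensor, but when two fail the same way - for example, frozen at the same stale value - the voter confidently selects the wrong figure, and everything downstream acts on it.

```python
# Minimal sketch (invented values, not any certified voting algorithm) of why
# redundancy alone does not solve the sensor problem: median voting tolerates
# one bad source, but two sensors failing the same way outvote the healthy one.

from statistics import median

def voted_airspeed(readings):
    """Median voting: robust to ONE bad sensor, not to two agreeing on a wrong value."""
    return median(readings)

# One frozen sensor: the two healthy sources outvote it.
print(voted_airspeed([250.0, 251.0, 90.0]))   # -> 250.0 (correct)

# Two sensors frozen at the same stale value: they outvote the healthy source,
# and every system downstream acts on confidently wrong data.
print(voted_airspeed([250.0, 90.0, 90.0]))    # -> 90.0 (wrong, but "agreed")
```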
Obviously, the above is only a first approximation of the problem. It can be treated in much greater depth without falling into the classic extremes of either essentialist arguments or discussions of algorithm design.
It seems that the Peter principle - the idea that everyone rises to their level of incompetence - applies not only to organizations but also to the resources they use. Technological progress is welcome, but it must be made clear where it does not reach, and why.
Today, a detailed analysis of the cases mentioned above - and the many more that could be added - would make it clear why the single-pilot idea is not a valid option.
System Safety Engineering and Management of Complex Systems; Risk Management Advisor...Complex System Risks
No magic here... Consider: we have human reliability requirements for military and commercial pilots involving contingency training, simulation training, and currency. With all those requirements, pilots still lose situational awareness. What do you think a 17-year-old, or grandma or grandpa, would do during a contingency? Systems will fail, inadvertently operate, and increase system risk with complexity; humans will fail. Design systems to fail safe; design systems to enable human monitoring; design systems to enable early detection, isolation, correction, and recovery. Design systems to enable humans, not disable humans: if you don't know what's in the gray, white, or black box, you had better have controls in place around the magical boxes. Isolate safety-critical stuff from non-safety-critical stuff.
Aviation Consultant and Researcher
Very good article. Supposedly, a change like the one you mention requires a thorough risk analysis, or a "safety case". I want to point out that we face a paradoxical problem: there are cases in which a multi-pilot crew did not prevent an error, which calls the reliability of the human component into question, and other cases, such as (some of) those you cite, in which the crew performed well and was able to manage the situation.
Driven by a Passion for Aviation Safety & Innovation | Independent Consultant | Aviation Safety Mentor | Empowering Agile Transformation | Speaker & Writer
Great article! Thank you for sharing this thought-provoking article about the use of advanced information systems and artificial intelligence in transportation. I found your concerns about the potential limitations and safety implications of these technologies to be particularly compelling. As an Aviation Safety specialist, I have seen firsthand the benefits and challenges of using these technologies. While they can be incredibly helpful in many situations, there are certainly limitations to what they can achieve. For example, in situations where there are unexpected system failures or unpredictable events, human intervention may still be necessary to ensure safety. One potential solution could be to incorporate more robust training and support systems for pilots or drivers using these technologies. This could include simulators or other tools that help them develop the skills necessary to manage unexpected situations. Overall, I think this is an incredibly important discussion and one that requires ongoing collaboration and communication between industry experts, policymakers, and the public. I would love to continue this conversation with you and hear your thoughts on potential solutions or other related topics.