The limits of data-driven technologies in the world of complexity
Giacomo Belloni
Researcher | Scholar | Author | Lecturer | Thoughts to Shape a More Sustainable and Kinder Future. MA, MSc (ASM), Executive MBA.
The modern scientific approach has exponentially expanded the fields of study and research. What was inseparable and indivisible before the scientific revolution of the seventeenth century is now broken down into smaller, manageable parts. We focus on straightforward, quantifiable portions isolated from the organic whole: we disassemble reality into elements small enough to be understood and described through verifiable scientific rules and conventions. Yet breaking the whole down into an ever-expanding number of parts increases intricacy. Splitting research areas into ever-expanding subsets, where each verified hypothesis forms the basis for new knowledge, makes us lose the overarching vision and adds further complexity.
As long as a system is simple, we can anticipate all its variables, and everything is predictable. Weaver, in 1948, used the billiard-table metaphor to explain "disorganised complexity": when dozens of balls move incoherently, colliding with one another, "so many variables turn out to be impracticable" (Weaver, 1948, p. 537). Today, with the speed and computing capacity of modern computer systems, Weaver's "impracticable" variables have become practicable and, therefore, foreseeable.
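To make the point concrete, consider a minimal sketch (my illustration, not Weaver's own model): one hundred thousand "billiard balls" moving incoherently on a unit table. Tracking every variable, impracticable in 1948, now takes seconds on an ordinary laptop; ball-to-ball collisions are omitted for brevity.

```python
import numpy as np

# Toy model of Weaver's "disorganised complexity": many balls moving
# incoherently. Each trajectory is individually uninteresting, but every
# variable can now be tracked exactly, and aggregate behaviour is stable.
rng = np.random.default_rng(0)

N_BALLS, STEPS, DT = 100_000, 500, 0.01
pos = rng.uniform(0.0, 1.0, size=(N_BALLS, 2))  # positions on a unit table
vel = rng.normal(0.0, 1.0, size=(N_BALLS, 2))   # random, incoherent velocities

for _ in range(STEPS):
    pos += vel * DT
    out_of_bounds = (pos < 0.0) | (pos > 1.0)
    vel[out_of_bounds] *= -1.0                  # elastic bounce off the edges
    pos = np.clip(pos, 0.0, 1.0)

# 200,000 state variables, updated 500 times: impracticable in 1948,
# trivial today.
print("mean speed:", np.linalg.norm(vel, axis=1).mean())
```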
This holds only until something new and unexpected appears, something for which no previous evidence exists.
Wiener argued that, until the nineteenth century, the systems under study were linear and dynamic, predictable with deterministic certainty. The twentieth century is instead the era of complex, non-linear dynamic systems, whose development cannot easily be predicted with deterministic certainty. In these complex systems, the overall behaviour cannot be predicted from the simple components or elements alone. Their dynamics are chaotic: factors of instability and uncertain, random variables intervene, and their impact cannot be defined in advance (Di Nuovo, 2021).
The inability of AI to face unprecedented events
Modern complexity regularly produces improbable and unforeseen events that have never been experienced before (Taleb, 2007). The data on which intelligent technologies rely, however immense, does not account for the unpredictable: it contains only facts and events that have already occurred, captured and stored in archives. According to Herberg and Torgersen (2021), future events are expected to unfold faster and with greater discontinuity, leading to increasingly complicated operational environments and more unforeseen effects. Consequently, we will face such unexpected events more often, and more rapidly, than ever.
In aviation, resilience is the ability to adapt and react successfully to unpredicted, novel and unforeseen conditions for which no specific procedures exist (Pierobon, 2021). Today, pilots are deliberately trained to "build resilience to deal with unexpected events and generate confidence and competently address challenges encountered in flight operations" (IATA, 2013, p. 22). By nature and by professional culture, pilots cannot accept a non-resilient technology that lacks the ability to deal with unexpected, non-routine eventualities (Herberg & Torgersen, 2021). Nor can they accept decisions and actions based exclusively on data, unsupervised and uncontrolled by human experience, flexibility, and ethical values. These conditions are essential to cope with today's complex operational scenarios and to absorb the impact of highly improbable events, the "Black Swan" (Taleb, 2007). Pilots, instead, remain decisive thanks to their unique ability to make value judgements and determine the rightness or wrongness of actions or decisions (Towers-Clark, 2023). Such qualities, including empathy and sentiment (Herzfeld, 2015), cannot be delegated to a "sociopathic" intelligence that lacks any aversion to harm (Christov-Moore et al., 2022).
In a 2019 article, Mongan and Kohli analysed the two devastating Boeing 737 MAX accidents to reflect on how such technology should be implemented. Although their reflections were aimed at the medical field, they apply to any domain.
The Boeing 737 MAX features larger engines than its predecessors, positioned further forward and higher on the wings. This position causes an aerodynamic instability, managed by the Maneuvering Characteristics Augmentation System (MCAS), a fully automated system that does not interact with the pilots. However, if the sensor feeding the MCAS malfunctioned, erroneously indicating a high angle of attack, the computer would push the aircraft's nose toward the ground. This anomaly, not foreseen by the automation's designers, happened twice, causing the death of 346 people.
Mongan and Kohli argue that, at the time, MCAS was a closed-loop system: the automatic system's output directly triggered an aircraft reaction, leaving the pilots with little time to grasp the situation and intervene effectively.
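The architectural difference can be sketched in a few lines of code. The following is a hypothetical illustration, not Boeing's actual MCAS logic: the threshold, trim values, and function names are invented for the example. It contrasts a closed loop, where a single sensor directly commands the aircraft, with a human-in-the-loop design, where the same reading produces a proposal the pilot can reject.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    angle_of_attack_deg: float  # from a single sensor, possibly faulty

STALL_THRESHOLD_DEG = 15.0      # illustrative trigger value only
NOSE_DOWN_TRIM = -2.5           # illustrative trim command only

def closed_loop_trim(reading: SensorReading) -> float:
    """Closed loop: the sensor output directly triggers an aircraft
    reaction. One faulty sensor is enough to push the nose down."""
    if reading.angle_of_attack_deg > STALL_THRESHOLD_DEG:
        return NOSE_DOWN_TRIM
    return 0.0

def human_in_the_loop_trim(reading: SensorReading, pilot_confirms: bool) -> float:
    """Human in the loop: the system proposes, the pilot disposes.
    The same faulty reading now produces an alert, not an action."""
    if reading.angle_of_attack_deg > STALL_THRESHOLD_DEG and pilot_confirms:
        return NOSE_DOWN_TRIM
    return 0.0

# A faulty sensor reports a high angle of attack in level flight:
faulty = SensorReading(angle_of_attack_deg=40.0)
print(closed_loop_trim(faulty))                              # -2.5: nose pushed down
print(human_in_the_loop_trim(faulty, pilot_confirms=False))  # 0.0: pilot rejects
```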
The Boeing 737 MAX MCAS was not an artificial intelligence device, even though Mongan and Kohli define it as such in their article. Still, it provides food for thought on how future technologies should be integrated into the cockpit.
Although artificial intelligence is recognised as improving performance in general, I exclude the possibility that it can do so when unexpected, non-linear, and unprecedented situations arise. Artificial intelligence excels at recognising patterns and making predictions based on historical data; it struggles with events that have no precedent. While some artificial intelligence models are designed to detect anomalies in data, this is not the same as predicting a "Black Swan" (Taleb, 2007) event. Expert operators, such as pilots, can provide context and interpret signals that artificial intelligence might dangerously overlook. Human nature copes with these conditions with an irreplaceable resilience that underpins the safety of operations.
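The distinction between detecting anomalies and predicting the unprecedented can be made concrete with a toy sketch (my illustration; the simple threshold detector stands in for far more sophisticated models, which share the same limitation):

```python
import numpy as np

rng = np.random.default_rng(42)

# "Historical data": the only world the model has ever seen.
history = rng.normal(loc=0.0, scale=1.0, size=10_000)
mu, sigma = history.mean(), history.std()

def is_anomalous(x: float, k: float = 4.0) -> bool:
    """Flag values that sit far from everything in the archive."""
    return abs(x - mu) > k * sigma

print(is_anomalous(5.0))   # True: rare, but rare *of a known kind*
print(is_anomalous(0.3))   # False: looks perfectly ordinary

# The limitation: the detector can only score magnitudes along axes it was
# trained on. A truly unprecedented event, such as a new failure mode or a
# variable the system does not even measure, never reaches this function at
# all. Detecting outliers in recorded data is not predicting a Black Swan.
```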
Artificial intelligence is the natural evolution of technological progress in every field, even in the cockpit of airliners. However, it is the human operator who must exploit this advantage to improve the quality of their performance, not the other way around.
We will see where machine learning leads us; for now, however, a data-driven artificial intelligence, unable to make decisions based on values and unmanaged by an operator, will not take us far.
The limits of data-driven technologies in the world of complexity © 2024 by Giacomo Belloni is licensed under CC BY-NC 4.0.
References
· Christov-Moore, L., Reggente, N., Vaccaro, A., Schoeller, F., Pluimer, B., Douglas, P.K., Iacoboni, M., & Kaplan, J.T. (2022). Are Robots Sociopaths? A Neuroscientific Approach to the Alignment Problem. ResearchGate.
· Di Nuovo, S. (2021). La sfida della complessità e le neuroscienze [The challenge of complexity and the neurosciences]. Psicologia Contemporanea.
· Herberg, M., & Torgersen, G.E. (2021). Resilience Competence Face Framework for the Unforeseen: Relations, Emotions and Cognition. A Qualitative Study. Frontiers in Psychology.
· Herzfeld, N. (2015). Empathetic Computers: The Problem of Confusing Persons and Things. A Journal of Theology.
· IATA (2013). Evidence-Based Training Implementation Guide. International Air Transport Association, Montreal/Geneva.
· Mongan, J., & Kohli, M. (2019). Artificial Intelligence and Human Life: Five Lessons for Radiology from the 737 MAX Disasters. National Center for Biotechnology Information. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8017379/
· Pierobon, M. (2021). Defining Resilience: Explaining the Term Is Easy; Putting the Concept into Practice Is More Challenging. AeroSafety World, Flight Safety Foundation. https://flightsafety.org/asw-article/defining-resilience/
· Taleb, N.N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House, New York.
· Towers-Clark, C. (2023). Without Human Values, What Stops AI From Acting as A Sociopath? Forbes. https://www.forbes.com/sites/charlestowersclark/2023/11/21/without-human-values-what-stops-ai-from-acting-as-a-sociopath/?sh=37fc9e41dea9
· Weaver, W. (1948). Science and Complexity. American Scientist, 36(4), 536-544.
· Wiener, N. (1948). Cybernetics: Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.