The Symbiosis Between Machine Intelligence and Human Cognition
Human Intelligence augmented by machine intelligence for better decision making.

In my previous (and first) article I discussed the impact of artificial intelligence and automation on our working lives, exploring how so-called "white collar" work is evolving as low-intensity tasks are automated by software applications. The remaining workload predominantly requires higher cognitive effort, increasing the risk of burnout.

With 9 in 10 adults in the UK experiencing high or extreme stress in the past year and 1 in 5 needing to take time off work due to poor mental health caused by pressure or stress, we must focus on challenging the causes of chronic stress across society and preventing burnout. - Mental Health UK, Burnout Report

In this article, inspired by a recent post on partnerships in the space exploration sector, I want to explore the concept of human-machine collaboration and how, by using the distinct advantages of both types of cognition, we can redress the balance.

Just to be clear, I am not talking about technology like Neuralink here, where there is a direct machine-human interface. This article focuses on how people can use the new wave of artificial intelligence based tools to improve their working life and reduce the risk of burnout.

"Human–AI symbiosis refers to people and AI working together to jointly solve problems and perform specific tasks. The recent developments in deep learning models and frameworks have significantly improved the efficiency and performance of human and AI collaborations." - Association for Computing Machinery, Transactions on Multimedia Computing, Communications, and Applications Vol. 20, No. 2A Study of Human–AI Symbiosis for Creative Work: Recent Developments and Future Directions in Deep Learning

In an age where disruption has become the new normal, the synergy of human intuition and artificial intelligence's (AI) analytical capability emerges as a pivotal strategy for addressing the increasing complexities of our modern world.

This synthesis embodies a new approach, where the depth of human cognition and the breadth of machine intelligence merge to navigate the unpredictable and sometimes tumultuous times we live in, times punctuated by Black Swan events and shaped by the nuances of human decision-making.

The recent phenomenon of the 'Copilot' AI agent is a starting point for this collaboration, but what are the limitations and risks presented by working alongside AI?

Understanding Black Swan Events

The term "Black Swan" was popularised by Nassim Nicholas Taleb in his seminal work to describe events that are highly improbable, carry significant impact, and, only retrospectively, appear predictable. The metaphor originates from the ancient Western belief that all swans were white, a notion that was proven false upon the discovery of black swans in Australia. This revelation fundamentally shifted the understanding of what was considered possible, much like the recent Black Swan events in global society.

“When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate.” - Nassim Nicholas Taleb, The Black Swan

One of the most poignant examples of a Black Swan event in recent history is the COVID-19 pandemic.

Despite the occurrence of epidemics and pandemics throughout history, dating back to the Athenian Plague of 430 BC, the bubonic plague ("Black Death") of 1346 to 1353, and more recently the Spanish flu pandemic of 1918, the world was largely unprepared for the magnitude and impact of COVID-19.

Can the lack of apparent preparation be attributed to our reliance on historical data without adequately accounting for contemporary changes that exacerbated the spread and impact of the virus? There is no doubt that factors such as unprecedented levels of low-cost air travel, increased population mobility, the widespread use of air conditioning as a potential distribution network, and rising population density in urban areas contributed to the rapid global spread of the disease.

Most of the analysed studies agreed that population density and human mobility had a significant and direct relationship with COVID-19 infections. - Conditioning factors in the spreading of Covid-19: Does geography matter? sciencedirect.com

Do these contemporary circumstances highlight the limitations of predictive models that failed to consider the full range of possible future scenarios?

The Black Swan and AI's Limitations

Nassim Taleb's exploration of Black Swan events points to the inherent limitations of AI. AI models, from Linear Regression and Large Language Models (LLMs) to Deep Learning and Bayesian Models, primarily learn from historical data, making them less effective against unprecedented events. They are limited to statistical extrapolation from what they have already seen, which can leave them struggling with the unforeseen and the unprecedented.

For instance, models like Decision Trees and Deep Learning algorithms may fail or produce spurious results when encountering data vastly different from their training sets. Similarly, Bayesian Models, though adaptable, rely heavily on prior assumptions that might not encompass the possibility of such rare events. Anomaly Detection models can signal outliers but are incapable of predicting their occurrence or impact.
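
As a rough illustration of this extrapolation problem, the sketch below uses entirely hypothetical numbers and a simple linear fit as a stand-in for a trained model; a model calibrated on one regime has no basis for predicting a sudden, unprecedented shift.

```python
# A minimal sketch (hypothetical data) of how a model trained purely on
# historical observations extrapolates poorly when conditions change.
import numpy as np

# "Historical" regime: case counts growing slowly and roughly linearly.
history_days = np.arange(0, 60)
history_cases = 100 + 2 * history_days + np.random.default_rng(42).normal(0, 5, 60)

# Fit a simple linear trend - the kind of statistical extrapolation a model
# falls back on when nothing comparable exists in its training data.
slope, intercept = np.polyfit(history_days, history_cases, 1)

# A "Black Swan" regime change: exponential spread the model never saw.
future_days = np.arange(60, 90)
baseline = 100 + 2 * 59                        # roughly the last historical level
actual_cases = baseline * np.exp(0.15 * (future_days - 59))
predicted_cases = slope * future_days + intercept

print(f"Day 89: model predicts ~{predicted_cases[-1]:,.0f} cases, "
      f"the new regime produces ~{actual_cases[-1]:,.0f}")
# The model is not wrong about the past; it simply has no basis for an
# unprecedented future.
```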

These models, then, are likely to struggle in the face of Black Swan events, illustrating a critical gap that human insight is uniquely positioned to fill.

Integrating Chris Voss's Human Intuition and Behavioural Insights

Chris Voss, a former FBI negotiator and author of "Never Split the Difference," underscores the significance of human intuition in understanding and influencing others. His insights are complemented by the 7-38-55 rule of communication, derived from Albert Mehrabian's research, which posits that 7% of communication is conveyed through words, 38% through tone of voice, and 55% through body language. The rule highlights the depth of non-verbal cues in human interaction, a level of complexity that AI, in its current form, struggles to fully comprehend and interpret. Commonly available AI models are often limited to syntactic and semantic sentiment analysis, or intent analysis using similar approaches.
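
To make that limitation concrete, here is a deliberately naive, lexicon-based sentiment scorer, a sketch of the kind of text-only analysis described above; the word lists and example sentence are hypothetical. It scores a sarcastic remark as positive because the tone of voice and body language that would give it away never reach the model.

```python
# A minimal, illustrative word-level sentiment scorer (not a production model)
# showing why text-only analysis misses tone and non-verbal cues.
POSITIVE = {"great", "love", "helpful", "wonderful"}
NEGATIVE = {"hate", "awful", "useless", "terrible"}

def naive_sentiment(text: str) -> int:
    """Score +1 for each positive word and -1 for each negative word."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

# Spoken with a flat tone and folded arms this is clearly sarcastic,
# yet the words alone score as positive.
print(naive_sentiment("Great, another status meeting. I love Mondays."))  # prints 2
```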

"In basic terms, people’s emotions have two levels: the “presenting” behavior is the part above the surface you can see and hear; beneath, the “underlying” feeling is what motivates the behavior." - Chris Voss, Never Split the Difference

Chris Voss's strategies leverage this deep understanding of human psychology, emphasising active listening and the strategic use of tactical empathy to achieve negotiation outcomes. These tactics showcase the unique capabilities of human intuition and cognition that AI has yet to match.

This is especially true when considered alongside the principles of behavioural economics, which enrich our understanding of human cognition's unique strengths. Those strengths are predominantly focused on identifying and interpreting emotions and body language, the subtler forms of communication that are often the most insightful.

The 7-38-55 rule of communication and insights from Daniel Kahneman's exploration of cognitive biases introduce a layer of complexity into human-AI interactions.

The Role of Biases in Decision-Making

Both AI and human decisions are prone to biases, yet the manifestation of these biases differs significantly between the two. AI's algorithmic decisions, trained on historical data, can unintentionally perpetuate existing prejudices. When preparing data and compiling datasets, engineers have to be acutely conscious of how such biases can be detected and prevented.
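
As a simple illustration of the kind of check that implies, the sketch below uses a small, made-up dataset and surfaces both a representation imbalance and a skewed historical outcome before any model is trained.

```python
# A minimal sketch (hypothetical records) of a pre-training bias check:
# compare how often each group appears and how it fared historically.
from collections import Counter

# Hypothetical historical records: (applicant_group, approved)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_b", False), ("group_b", False),
]

group_counts = Counter(group for group, _ in records)
approval_rate = {
    group: sum(1 for g, ok in records if g == group and ok) / count
    for group, count in group_counts.items()
}

print(group_counts)   # group_a appears twice as often as group_b
print(approval_rate)  # historical approvals are far from even
# A model trained naively on this data will learn and repeat the imbalance,
# which is why re-sampling, re-weighting or fairness constraints matter.
```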

Human biases, by contrast, emerge from emotional, situational, and cognitive heuristics built up over time and through experience. These biases can be imperceptible to us because of the way we interpret information and make decisions.

Daniel Kahneman refers to this as System 1 (instinctive) and System 2 (reasoned) thinking. Our instinctive thoughts are automatic, whereas our reasoned thoughts are conscious and require cognitive effort.

An example of how powerful our instinctive thoughts are can easily be demonstrated. If you look at a photograph you don't have to consciously choose to identify the subject; you 'just know' you are looking at a picture of a cat, dog, building, car, or whatever. However, if you look at an equation like 4x + 5 = 30 − x, you have to apply cognitive effort to determine the answer. You have to reason it through: adding x to both sides gives 5x + 5 = 30, so 5x = 25 and x = 5.

“The illusion that we understand the past fosters overconfidence in our ability to predict the future.” - Daniel Kahneman, Thinking, Fast and Slow

Daniel Kahneman expands on the role of biases in decision-making in his book "Noise", where he discusses how various factors and cognitive biases, such as anchoring and the halo effect, undermine our ability to make consistent judgements. This combination of instinctive versus reasoned thinking, together with the influence of biases, highlights the intricacies of human decision-making.

Recognising and addressing these biases is crucial in crafting AI systems that augment human decision-making without inheriting its flawed predispositions. This is especially important as we, humans, seem far more forgiving of human error than of machine error.

The Theory of Symbiosis: Combining Machine Intelligence with Human Intelligence

The intersection of behavioural economics and AI paves the way for a more sophisticated approach to decision-making, one that leverages the strengths of AI and the breadth and depth of human intuition.

This symbiotic relationship aims to utilise AI's data-processing and analytics capabilities while integrating human cognitive strengths, such as empathy, creativity, and strategic thinking.

By doing so, it promotes a decision-making paradigm that is not only more adaptive and equitable but also capable of confronting the uncertainties of our world with a nuanced understanding.

It allows AI models to take on more of the effortful cognitive tasks while you make decisions instinctively, with relative cognitive ease. In doing so, you are at less risk of the burnout induced by operating for extended periods at a high level of cognitive intensity.

One important aspect to consider when adopting this approach is something Daniel Kahneman refers to as 'decision hygiene'. It focuses on improving the quality of judgements, and that is something a data-driven approach paired with intuition and empathy can deliver.

“In theory, a judgment of risk should be based on a long-term average. In reality, recent incidents are given more weight because they come more easily to mind. Substituting a judgment of how easily examples come to mind for an assessment of frequency is known as the availability heuristic.” - Daniel Kahneman, Noise

Statistical analysis can provide the detailed long-term average, which the human can then use to adjust their judgement during the decision-making process, thus overcoming the availability bias.
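
A minimal sketch of that idea, using hypothetical incident counts, is shown below: the long-run average is computed alongside the figure that recent memory would suggest, so the gap between the two is visible at the point of decision.

```python
# A minimal sketch (hypothetical data) of countering the availability heuristic
# by putting the long-term average next to the recency-driven impression.
incidents_per_year = [3, 2, 4, 1, 3, 2, 3, 2, 9, 8]  # two vivid recent years

long_term_average = sum(incidents_per_year) / len(incidents_per_year)
recent_average = sum(incidents_per_year[-2:]) / 2     # what "comes to mind"

print(f"Long-term average: {long_term_average:.1f} incidents per year")
print(f"Last two years:    {recent_average:.1f} incidents per year")
# Seeing both figures side by side shows how much recent, memorable events
# may be inflating the human's intuitive estimate of risk.
```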

Conclusion

The advancement of artificial intelligence (AI) has been remarkable and swift, propelled by high-performance computing, rapid data networks, scalable storage solutions, and an internet that provides a near-infinite pool of training data. As we edge closer to the reality of Artificial General Intelligence (AGI), capable of widespread application, it is imperative to thoughtfully navigate its utilisation.

This journey towards AGI highlights the need for a strategic approach, blending AI's computational ability with the softer insights of human cognition, to harness the full potential of this technology in harmony with our ethics and values.

"AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and?creativity." - OpenAI.com

As with today's AI-based applications and use cases, the key to maximising the efficacy of AGI will lie in a thorough understanding of how your organisation operates, in identifying how and where best to implement human-machine collaboration, and in ensuring that the AGI agent is context aware by making current, real-time data available in a controlled and secure manner.
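
As a purely illustrative sketch (the data source, field names, and helper functions below are hypothetical rather than any specific product's API), context awareness might look like fetching a current snapshot, filtering it through an allow-list, and timestamping it before it ever reaches the model.

```python
# A hypothetical sketch of keeping an AI assistant context aware: fetch
# current data, expose only approved fields, and label it with a timestamp.
from datetime import datetime, timezone

def fetch_inventory_snapshot() -> dict:
    # Placeholder for a real-time query against an internal system.
    return {"sku_1234": 17, "sku_5678": 0, "internal_cost_price": 4.20}

def redact(snapshot: dict, allowed_keys: set) -> dict:
    # Controlled exposure: only fields on the allow-list reach the model.
    return {key: value for key, value in snapshot.items() if key in allowed_keys}

def build_prompt(question: str) -> str:
    context = redact(fetch_inventory_snapshot(), {"sku_1234", "sku_5678"})
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"As of {stamp}, current stock levels are {context}.\n\n{question}"

print(build_prompt("Which lines should we reorder this week?"))
# Passing build_prompt(...) to whatever model the organisation uses would be
# the final step; the controls sit in front of that call, not inside it.
```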
