Tea or Coffee?

(Image: thanks again, DALL·E)

Every day, the average person makes approximately 35,000 decisions. Every decision results in a win or a loss, a more or less favourable outcome, tea or coffee, happiness or sadness, maybe even life or death – I guess the choice of language depends on the importance of the situation. But 35,000 – I wonder how many I'd make if I stopped procrastinating?!


Whilst chatting with a colleague last week about 'Responsible AI', a broad conversation, I got thinking about human decision making (or, more specifically, intuition) compared with machine decision making. Our conversation sparked a debate about the often-accepted use of 'gut feel' in decision making, where an explanation for a particular decision might be difficult to articulate, versus our insistence that machines (or rather, AI) must be able to explain every step. Why do we allow one and not the other? We left the conversation with the comment that 'intuition is fine, until things go wrong' – so maybe that's why the AI equivalent of intuition, i.e. a 'black box', isn't acceptable?


There is, justifiably, a LOT of focus on AI safety right now, and a LOT of speculation about the pace of AI development, where we are heading with it and whether we are properly prepared for what's around the corner. The recent developments at OpenAI, particularly around the rumoured project Q* or 'superintelligence', have intensified these debates. Apparently, OpenAI researchers warned their board about a potential breakthrough in artificial general intelligence (AGI), which sounds both exciting and concerning, but other reports suggest Q* has, so far, only shown signs that it's good at primary school maths. I guess we will find out at some point – GPT-5, maybe?


So, what is 'Responsible AI'? Responsible AI refers to the development of AI solutions that are ethical, fair, and unbiased. This concept is vital for ensuring AI systems align with human values and societal norms. The emphasis on responsible AI stems from a growing awareness of the potential risks and ethical dilemmas posed by AI, including issues related to privacy, transparency, and the perpetuation of biases. As AI systems become more integrated into our everyday lives, the demand for responsible AI practices that uphold ethical standards and promote trustworthiness is paramount.

Explainable AI (XAI) is a subset of this, focused on making AI decision-making processes transparent and understandable to humans. It involves clarifying the mechanisms behind AI predictions and decisions, making it possible for users and stakeholders to comprehend and trust AI outputs. The significance of XAI lies in its ability to bridge the gap between AI's complex algorithms and human understanding, fostering a sense of reliability and confidence in AI systems, especially in critical applications like healthcare, finance, and legal decision-making.
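To make the idea of explainability concrete, here is a minimal sketch (my illustration, not from the article) of a 'glass-box' model: a linear scorer whose prediction decomposes exactly into per-feature contributions, so every output comes with a built-in explanation. The feature names and weights are invented for the example.

```python
# Toy "glass-box" scorer: the prediction is a weighted sum, so each
# feature's contribution to the final score can be reported exactly.
# Feature names and weights are hypothetical, for illustration only.

def explain_linear_prediction(weights, bias, features):
    """Return the score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = explain_linear_prediction(weights, bias=0.1, features=applicant)
print(round(score, 2))  # contributions (2.0 - 1.6 + 1.5) plus bias 0.1
print(why)              # shows exactly which features pushed the score up or down
```

A deep neural network, by contrast, offers no such term-by-term decomposition out of the box, which is the heart of the black-box problem discussed below.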


Human intuition plays a pivotal role in decision-making, often complementing rational analysis. Intuition, which stems from an individual's experiences and knowledge, enables quick, subconscious processing of information to arrive at decisions. Intuition, while not always perfect, demonstrates the human ability to make quick judgments in situations where data may be incomplete or ambiguous. Intuition actually relies on massive amounts of contextual and non-contextual experience (which is really just data) that we draw upon, some tangible and some intangible (like the way someone shot you a dirty look whilst you were speaking one time, which is why you now don't like them!). So intuition is in fact based on many, many data points, because we have a general intelligence we can bring to bear and machines don't (yet). Humans also have a conscience, a morality, an understanding of the consequences of our actions; we understand context and, most of us, do not singularly pursue an outcome at the expense of everything else – i.e. we don't run people over to get somewhere fast!


Fear of Loss - Humans are naturally inclined to fear losses more than they value gains, a phenomenon known as loss aversion. This psychological trait significantly impacts decision-making, often leading to risk-averse behaviours. In the context of AI, this fear of loss can manifest as a heightened demand for accountability and transparency, especially in high-stakes scenarios. The reluctance to fully trust AI decisions without clear explanations is partly rooted in this inherent fear of negative outcomes.
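Loss aversion is usually formalised with the value function from Kahneman and Tversky's prospect theory. The sketch below (my illustration, not part of the article) uses their commonly cited 1992 parameter estimates, under which a loss is weighted about 2.25 times as heavily as an equal gain.

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 estimates):
# gains are valued as x**alpha, losses as -lam * (-x)**beta, and
# lam > 1 makes a loss "hurt" more than an equal gain feels good.

def subjective_value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

gain = subjective_value(100)   # felt value of winning 100
loss = subjective_value(-100)  # felt value of losing 100
print(gain, loss)              # the loss looms roughly 2.25x larger
```

With alpha equal to beta, the ratio of the felt loss to the felt gain is exactly lam, which is why a 50/50 bet on equal amounts feels like a bad deal to most people.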


'Black box' AI models, which do not readily reveal their internal workings, present a significant dilemma. While these models can offer efficient and effective solutions, their lack of transparency raises concerns about trust and accountability. The acceptability of 'black box' models might vary depending on the context: in critical areas like healthcare or autonomous vehicles, transparency is clearly expected, whereas in less critical domains, would we be willing to trade explainability for the chance of a more desirable outcome? Indeed, if a black-box model led to excellent outcomes for humans, say in medicine, would we sacrifice those outcomes for the sake of understanding why? I guess we want the best of both worlds.


I'm not pretending to have the answers to this, and I suspect the drive for explainability will rise to the top, but it was an interesting conversation and one that will run on.

Comments
Peter Allen (EY Analyst | AI & Data | Technology Consulting | BSc & MSc Psychology), 9 months ago:

Very thought-provoking article. I love your exploration of how behavioural economics concepts (e.g., loss aversion, intuition) affect perception of AI decision-making vs. human decision-making. Thank you for getting us thinking.

Timea Ivacson (AI & Data Manager @ EY | MSc Statistics | BSc Economics | BSc Psychology | PGCE Mathematics), 9 months ago:

Powerful article, probably the best I've read recently. Really thought-provoking. Loving the reflection on responsible AI, and the comparison with human conscience, "loss aversion" and intuition. Point after brilliant point: "Intuition, which stems from an individual's experiences and knowledge, enables quick, subconscious processing of information to arrive at decisions. Intuition, while not always perfect, demonstrates the human ability to make quick judgments in situations where data may be incomplete or ambiguous. Intuition actually relies on massive amounts of contextual and non-contextual experiences (which is really just data) that we draw upon, some tangible and some intangible (like maybe the way someone shot you a dirty look whilst you were speaking one time, which is why you now don't like them!), so intuition in fact is actually based on many, many data points because we have general intelligence that we can bring to bear and machines don't (yet)."

Anna Stolk (Head of Process Improvement), 9 months ago:

Great article!

Sofia Ihsan (EY Global Responsible AI UKI Consulting Leader), 9 months ago:

Great article Lee Brown MCMI ChMC
