Tea or Coffee?
Every day, the average person makes approximately 35k decisions. Every decision results in a win or a loss, a more or less favourable outcome, tea or coffee, happiness or sadness, maybe even life or death – I guess the choice of language depends on the importance of the situation. But 35k – I wonder how many I’d make if I stopped procrastinating?!
Whilst chatting with a colleague last week about ‘Responsible AI’ – a broad conversation – I got thinking about human decision making (or, more specifically, intuition) compared with machine decision making. Our conversation sparked a debate about the often-accepted use of ‘gut feel’ in decision making, where an explanation for a particular decision might be difficult to articulate, versus our insistence that machines (or rather, AI) must be able to explain every step. Why do we allow one and not the other? We left the conversation with the comment that ‘intuition is fine, until things go wrong’ – so maybe that’s why the AI equivalent of intuition, i.e. the ‘black box’, isn’t acceptable?
There is, justifiably, a LOT of focus on AI safety now and a LOT of speculation about the pace of AI development, where we are heading with it and whether we are properly prepared for what’s around the corner. The recent developments at OpenAI, particularly around the rumoured project Q* or ‘superintelligence’, have intensified these debates. Apparently, OpenAI researchers warned their board about a potential breakthrough in artificial general intelligence (AGI), which sounds both exciting and concerning, but other reports suggest Q* is something that, so far, has just shown signs it’s good at primary school maths. I guess we will find out at some point – GPT-5, maybe?
So, what is ‘Responsible AI’? Responsible AI refers to the development of AI solutions that are ethical, fair, and unbiased. This concept is vital for ensuring AI systems align with human values and societal norms. The emphasis on responsible AI stems from a growing awareness of the potential risks and ethical dilemmas posed by AI, including issues relating to privacy, transparency, and the perpetuation of biases. As AI systems become more integrated into our everyday lives, the demand for responsible AI practices that uphold ethical standards and promote trustworthiness is paramount.

Explainable AI (XAI) is a subset of this, focused on making AI decision-making processes transparent and understandable to humans. It involves clarifying the mechanisms behind AI predictions and decisions, making it possible for users and stakeholders to comprehend and trust AI outputs. The significance of XAI lies in its ability to bridge the gap between AI’s complex algorithms and human understanding, fostering a sense of reliability and confidence in AI systems, especially in critical applications like healthcare, finance, and legal decision-making.
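To make XAI a little less abstract, here’s a minimal sketch (my own illustration, assuming scikit-learn is available – not a method from any source above) of one common technique, permutation importance: score each input feature by how much a trained model’s accuracy drops when that feature’s values are shuffled, turning an opaque model’s behaviour into a human-readable ranking.

```python
# Minimal XAI sketch (illustrative, assuming scikit-learn is installed):
# permutation importance ranks features by how much shuffling each one
# hurts the model's accuracy on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ensemble model: accurate, but not self-explanatory.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times and measure the mean drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# A human-readable answer to "what does the model actually rely on?"
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```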
Human intuition plays a pivotal role in decision-making, often complementing rational analysis. Intuition, which stems from an individual’s experiences and knowledge, enables quick, subconscious processing of information to arrive at decisions. While not always perfect, it demonstrates the human ability to make quick judgments in situations where data may be incomplete or ambiguous. Intuition actually relies on massive amounts of contextual and non-contextual experience (which is really just data) that we draw upon, some tangible and some intangible (like maybe the way someone shot you a dirty look whilst you were speaking one time, which is why you now don’t like them!). So intuition is, in fact, based on many, many data points, because we have a general intelligence that we can bring to bear and machines don’t (yet). Also, humans have a conscience, a morality, an understanding of the consequences of our actions; we understand context and, most of us, do not singularly pursue an outcome at the expense of everything else – i.e. we don’t run people over to get somewhere fast!
Fear of Loss – Humans are naturally inclined to fear losses more than they value gains, a phenomenon known as loss aversion. This psychological trait significantly impacts decision-making, often leading to risk-averse behaviours. In the context of AI, this fear of loss can manifest as a heightened demand for accountability and transparency, especially in high-stakes scenarios. The reluctance to fully trust AI decisions without clear explanations is partly rooted in this inherent fear of negative outcomes.
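Loss aversion even has a tidy formula. Here’s a small sketch of Kahneman and Tversky’s prospect-theory value function (the parameter values are their commonly cited 1992 estimates; the code itself is just my illustration):

```python
# Prospect-theory value function (illustrative sketch):
#   v(x) = x**alpha             for gains  (x >= 0)
#   v(x) = -lam * (-x)**alpha   for losses (x < 0)
# lam > 1 encodes loss aversion: losses loom larger than equal gains.

def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of outcome x, using the parameter estimates
    commonly cited from Tversky & Kahneman (1992)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# A £100 loss "feels" more than twice as bad as a £100 gain feels good:
print(value(100))    # ~  57.5
print(value(-100))   # ~ -129.4
```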
‘Black box’ AI models, which do not readily reveal their internal workings, present a significant dilemma. While these models can offer efficient and effective solutions, their lack of transparency raises concerns about trust and accountability. The acceptability of ‘black box’ models might vary depending on the context: in critical areas like healthcare or autonomous vehicles, transparency is clearly expected, whereas in less critical domains, would we be willing to trade explainability for the chance of a more desirable outcome? Actually, if a black-box model led to excellent outcomes for humans – let’s say in medicine – would we sacrifice those outcomes just to understand why? I guess we want the best of both worlds?
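One pragmatic attempt at ‘the best of both worlds’ is a global surrogate: train a simple, readable model to mimic the black box’s predictions, so you keep the black box’s outcomes but gain an approximate explanation. A minimal sketch, again assuming scikit-learn (my illustration, not anything from the conversation above):

```python
# Global surrogate sketch (illustrative): a shallow decision tree is
# trained to mimic a black-box model's predictions, trading a little
# fidelity for rules a human can actually read.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The black box: accurate but opaque.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate learns the black box's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable tree agrees with the black box.
print(f"fidelity: {surrogate.score(X, black_box.predict(X)):.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

If the fidelity is high, the printed rules are a reasonable account of what the black box is doing; if it’s low, the black box really is relying on something the simple model can’t capture – which is arguably exactly when we should be most cautious about trusting it.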
I’m not pretending to have the answers to this, and I suspect the drive for explainability will rise to the top, but it was an interesting conversation and one that will run on.