#49: AI and Arguments for Humans

Episode 49 is a real gem, with Christiane Vejlø, futurist, media analyst, and one of Denmark’s leading experts on the relationship between humans and technology, joining me to discuss change in an age of AI, and why it is more important than ever to keep a close eye on the boundaries between humanity and technology.

Beyond the splendid episode, I also asked Maria Santacaterina, author of Adaptive Resilience, to help me write up a further commentary on the episode, which you will find below.

Read more about Christiane Vejlø at the bottom of this newsletter.

As always, I am very happy that we once again managed to make an episode that explores more than it explains.

Listen to the episode here:

Apple: The Only Constant on Apple Podcasts

Spotify: The Only Constant | Podcast on Spotify

Spreaker: The Only Constant (spreaker.com)


What made this episode special to Maria Santacaterina:

The following commentary is kindly written by guest author and previous podcast guest Maria Santacaterina:

Navigating complexity in an era of accelerated technological disruption is increasingly challenging. As AI systems become more deeply embedded in modern organizations, it begs the question: What does it mean to be human in a world increasingly driven by computers?

Lasse raised this critical question in his conversation with Christiane, author of Arguments for Humans, in the latest episode of The Only Constant podcast. They peel back the layers of AI’s role in business, exploring both the operational implications and existential risks.

The Misunderstanding of Intelligence: AI’s Imitation Game

AI models can mimic human speech and mirror thought and behaviour patterns. This is arguably both the greatest strength and the most striking limitation of current state-of-the-art LLMs.

“When we speak or write, we have a history, we have experiences, we have emotions, we have relationships. The AI doesn’t have that.” Large Language Models (LLMs) operate on statistical patterns and probabilities, not understanding. They simulate coherence, but this seeming coherence is not the equivalent of human comprehension, Christiane explained.

This distinction matters a great deal in business. AI might generate reports, draft emails or even suggest strategic decisions, but it lacks the contextual awareness that underpins true insight.

Christiane shared an anecdote: when prompting an AI to create an image of “salmon swimming in a creek,” the AI generated an image of a salmon fillet, ready for cooking. “The machine doesn’t know what it means to be a salmon.”

Business leaders take note: outputs generated by AI systems appear to be accurate on the surface, but they may be missing the deeper layers of meaning. If they’re left unconstrained, AI models can cloud human judgment. Hallucinations or confabulations are a well-known in-built feature.

The Productivity Paradox: Efficiency vs. Effectiveness

There’s a pervasive belief AI inherently boosts productivity, but Christiane adds nuance to this argument. “When it makes mistakes, they’re extremely stupid,” she noted. AI’s efficiency can become a liability since flawed outputs are increasingly difficult to detect without context and domain specific expertise.

Invariably this creates a productivity paradox. While AI accelerates processes, it demands additional layers of human oversight. A human in, on, or over the loop is a prerequisite for enterprise security.

“You need to be good at what you’re doing and good at using the AI at the same time,” Christiane emphasised. This stands in tension with the promise that AI reduces workload and improves efficiency.

AI is fundamentally changing the nature of work. It requires employees to develop skills in their specific areas of expertise and learn to challenge AI-driven systems that are supervising tasks and acting on our behalf to varying degrees. AI literacy is no longer optional.

The widespread diffusion and accessibility of AI tools through natural language interfaces is a double-edged sword. While it lowers the barrier to entry, it also introduces new risks. “It’s just chaotic coding,” Lasse noted. Natural language prompts feel intuitive, but they’re still a form of programming. Bad programming leads to bad outputs.

The Turing Trap: Machines Are Not Colleagues

Businesses have rapidly integrated AI chatbots into customer services, operations and even leadership roles. AI can convincingly imitate humans in its interactions, and there is a growing risk of anthropomorphising these tools.

“We tend to anthropomorphize anything that looks like it has eyes,” Christiane observed, but “it’s not more human than a toaster.” AI is a tool, not a colleague. AI bots may use natural language in delivering outputs, and seemingly exhibit humour or empathy. However, they are not genuine expressions of human emotions.

The illusion has tangible implications in the workplace. If employees treat AI as a sentient entity, there’s a risk they become overly dependent on these tools and assume there’s a level of inherent understanding and judgment that simply doesn’t exist.

AI bots are beginning to influence social norms and anti-social behaviours. Rudeness or employee detachment are seemingly more common in the workplace. This undermines business confidence, team performance and productivity. It adversely affects the organisation’s culture.

Bias at Scale: Automating Inequity

One of the most significant business risks associated with LLMs and related AI applications is their potential to amplify and perpetuate inherent biases.

“The data is historic, right? And it doesn’t necessarily correlate with the future that we want to have,” Christiane explained. AI systems are trained on vast datasets that reflect past realities, including societal biases such as gender, race and class.

For example, AI-driven hiring tools might select male candidates for technical roles or fail to select underrepresented groups in leadership positions. Algorithmic biases are often very subtle, making them difficult to detect without rigorous human oversight.

Christiane noted “you can use AI to avoid bias in a recruiting process.” With careful design, AI can be used to select candidates in a more equitable manner. The challenge lies in ensuring that the AI tool is both technically robust and ethically sound. Business leaders need to demonstrate accountability by design.

The Human Edge: Why Businesses Still Need People

Christiane’s core argument, at the heart of this conversation, is that there are key aspects of humanity that machines cannot replicate.

“It’s about having a body in the world and sensing it. It’s about having experiences. It’s about having emotions and feelings,” Christiane emphasised.

In business, human judgment, creativity, ingenuity, and social and emotional intelligence have irreplaceable value. AI can process data fast, but it doesn’t understand the intricacies of client relationships or the subtleties of imperceptible concerns. It can draft marketing copy, but it cannot intuit the cultural resonance of a campaign. It can suggest strategic moves based on past events, but it cannot envision entirely new scenarios outside the scope of its training data.

We have a keen sense of our own mortality, and this imbues our work with a sense of urgency and purpose in our search for meaning. “The machine can be upgraded forever, in theory. [But] we have an expiration date as humans, which is why we have so many thoughts about being humans and create art, music and have relationships,” Christiane explained.

This self-awareness drives us forward, brings forth new ideas and fosters innovation. Through a lifetime of experiences, we build resilience and cultivate the kind of adaptive thinking that businesses also need to navigate the complexity of a rapidly changing world.

Towards a Business Philosophy of AI

Integrating AI into your business is not just a technical or operational challenge; it’s a philosophical and psychological challenge too. AI affects our whole being. We need to grapple with the new dynamics of change and resist being pushed to the edge of insignificance.

“Technology isn’t inherently good or bad, but it’s not neutral either,” Christiane noted.

The choices we make about how to develop and deploy AI, the datasets we use in training models, and how we interpret AI-generated outputs will shape businesses and our human identity.

Business leaders need to adopt a more sensitive approach to AI deployments: as they seek to harness AI’s capabilities, they should also remain vigilant and seek to understand inherent technical limitations.

This means fostering a true culture of innovation, where AI is viewed as a tool to augment human potential rather than replace it. Leaders’ integrity and strategic foresight will be paramount in ensuring that all AI systems and related applications align with human values, societal and environmental responsibilities.

Ultimately, the goal is not to resist technological change but to navigate its inherent complexity with due diligence, sensitivity and thoughtfulness. “Nothing is a constant but change,” Christiane observed, and this applies to business strategy just as it does to our personal growth.

The challenge for today’s business leaders is to steer change within their organisations in a direction that preserves what makes each of us uniquely human, as we continue to engage with the AI tools of the future.

Knowing how to make good decisions might just be our greatest competitive advantage.

-- -- 13th February 2025, Maria Santacaterina -- --



Do you want to know more about Christiane Vejlø?

Christiane Vejlø is a futurist, media analyst, and one of Denmark’s leading experts on the relationship between humans and technology. With a background in media, telecommunications, and AI ethics, she has advised governments, corporations, and institutions on the societal impact of digital transformation. As the author of Arguments for Humans (2023), she explores the unique qualities that define human intelligence in an AI-driven world. Vejlø is a sought-after speaker in business and policy circles, focusing on AI’s influence on work, ethics, and society.

Beyond her research and writing, Vejlø serves on multiple advisory boards, including Denmark’s Data Ethics Council and the board of DR (Danish Broadcasting Corporation). She is an active investor in early-stage startups and has played a key role in shaping digital policy through leadership roles in various governmental and industry initiatives. Through her company, Elektronista, she provides insights into AI, digital trends, and the evolving relationship between technology and culture.
