Do We Trust AI? It’s Complicated.
Humans vs. AI: who do we trust? (Image: Midjourney)


Artificial intelligence (AI) is changing the way businesses operate, influencing everything from product recommendations to customer service interactions. Yet, despite its growing presence, our trust in machines isn’t universal. Instead, it’s situational, and whether we trust AI over humans often depends on what we’re trying to accomplish. This is the essence of the “Word of Machine” effect, a concept introduced by researchers Chiara Longoni and Luca Cian.

Their work sheds light on when and why people trust AI recommendations and, just as importantly, when they resist them. The research offers valuable insights for marketers, showing how to use AI effectively in a world where human expertise still matters in many areas.

What Is the “Word of Machine” Effect?

The “Word of Machine” effect refers to the way consumer trust shifts between AI and human recommendations depending on the type of decision being made. It’s rooted in a basic perception: people see AI as logical, data-driven, and efficient, while humans are viewed as emotionally intuitive and better at understanding sensory or experiential needs.

For example, when a decision is practical or functional—like choosing a reliable car or finding a cost-effective insurance plan—AI is often trusted more. This is because people believe machines are better at crunching numbers and making rational decisions. On the other hand, when the choice involves emotions, aesthetics, or sensory pleasure—such as picking a romantic restaurant or selecting a piece of art—humans come out ahead. We instinctively trust their ability to understand nuance and emotional context.

Why Does It Matter?

For marketers, understanding these dynamics isn’t just theoretical; it’s essential for creating effective strategies. Misjudging when to use AI can alienate customers or undermine trust. For example, deploying a chatbot to recommend wines for a formal dinner may feel impersonal, whereas using AI to suggest the best-priced flight feels efficient and practical.

By aligning how they use AI with consumer expectations, businesses can enhance trust, improve customer satisfaction, and increase conversions. The stakes are high as AI continues to reshape industries, from retail to healthcare to entertainment. Companies that grasp these subtleties will have a competitive edge in designing customer journeys that resonate.

The Research Behind the Word of Machine Effect

Longoni and Cian’s research is based on nine studies that examine how consumers respond to AI versus human recommendations across different contexts. The results consistently reveal a divide between utilitarian and hedonic goals.

When the focus is utilitarian—solving a problem, achieving efficiency, or meeting functional needs—AI is often preferred. For example, in a study where participants chose hair treatments, 67% trusted the recommendation of an algorithm over a person when the goal was practicality. Conversely, when the goal was hedonic—fulfilling sensory or emotional desires—human recommendations were more trusted. In a similar study, 76% of participants chose human-recommended properties when emotional appeal was the priority.

Interestingly, the researchers found that hybrid models, where AI supports rather than replaces humans, can bridge the trust gap in hedonic scenarios. For instance, participants were more willing to accept AI’s role when it enhanced human decision-making, rather than fully automating the process. Additionally, interventions that challenge consumer biases—such as asking them to consider how their assumptions about AI might be wrong—helped increase trust in AI across contexts.

What Does This Mean for Marketers?

The “Word of Machine” effect provides marketers with actionable insights for integrating AI into their strategies effectively. The key is knowing when to emphasize technology and when to lean on human expertise.

Match the Medium to the Goal

Marketers should align the use of AI or human expertise with the decision context. For utilitarian products, AI’s logical and data-driven nature builds trust. Conversely, for hedonic products requiring emotional or sensory appeal, human recommendations perform better.
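To make this concrete, here is a minimal, purely illustrative sketch of how a recommendation flow might route decisions by context. The names (DecisionContext, pick_recommendation_source) are hypothetical and not drawn from the research or any real system.

```python
# Illustrative sketch only: route recommendation requests by decision context.
from enum import Enum


class DecisionContext(Enum):
    UTILITARIAN = "utilitarian"   # e.g., insurance plans, reliable cars
    HEDONIC = "hedonic"           # e.g., romantic restaurants, art


def pick_recommendation_source(context: DecisionContext) -> str:
    """Return which recommendation source to surface first.

    'ai' and 'human_expert' are placeholder labels, not real services.
    """
    if context is DecisionContext.UTILITARIAN:
        return "ai"            # data-driven framing tends to build trust here
    return "human_expert"      # emotional/sensory framing tends to win here


print(pick_recommendation_source(DecisionContext.HEDONIC))  # -> human_expert
```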

Embrace Hybrid Models

Hybrid approaches that combine AI with human expertise can overcome consumer hesitancy, particularly in hedonic contexts. The research found that framing AI as “augmented intelligence” improved trust by highlighting how AI supports rather than replaces human input. For example, AI could recommend initial product options, with humans refining the suggestions to ensure emotional resonance.
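As a rough illustration of that “augmented intelligence” framing, the sketch below shows AI narrowing a catalog while a human makes the final, emotionally attuned call. The helpers shortlist_with_ai and human_review are stand-ins, not real APIs.

```python
# Illustrative sketch only: a hypothetical "AI shortlists, human refines" flow.
from typing import Callable


def shortlist_with_ai(candidates: list[str], top_n: int = 5) -> list[str]:
    # Placeholder for an AI ranking step (e.g., score by predicted relevance).
    return candidates[:top_n]


def human_review(shortlist: list[str], keep: Callable[[str], bool]) -> list[str]:
    # Placeholder for a human curator filtering for emotional and sensory fit.
    return [item for item in shortlist if keep(item)]


catalog = ["bistro A", "bistro B", "diner C", "rooftop D", "cafe E", "pub F"]
final = human_review(
    shortlist_with_ai(catalog),
    keep=lambda name: "bistro" in name or "rooftop" in name,
)
print(final)  # AI narrows the field; a human makes the final call
```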

Educate and Reassure

Transparency and education are critical to overcoming AI mistrust. In the research, prompting consumers to “consider the opposite” of their assumptions about AI reduced biases and increased acceptance. Marketers can achieve this by explaining how AI works, sharing real-world examples, and emphasizing its collaborative role with humans.

Be Ready to Pivot

Consumer attitudes toward AI are fluid, requiring flexibility and ongoing testing. Marketers should use A/B testing to experiment with AI-human interactions and monitor trends to adjust strategies. As AI technology evolves, younger audiences may be more receptive to it broadly, while older demographics may prefer human involvement in emotional decisions. Adapting to these shifts ensures long-term success.
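For teams that want to quantify such tests, a standard two-proportion z-test is one simple way to compare conversion rates between an AI-led and a human-led variant. The sketch below uses made-up counts and generic statistics; it is not taken from the cited paper.

```python
# Illustrative sketch only: compare conversion rates of two variants
# (e.g., AI chatbot vs. human stylist) with a two-proportion z-test.
from math import sqrt, erfc


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal tail
    return z, p_value


# Hypothetical experiment: variant A = AI-led flow, variant B = human-led flow.
z, p = two_proportion_z_test(conv_a=130, n_a=1000, conv_b=95, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the gap is unlikely to be noise
```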

The Takeaway

The question “Do we trust AI?” doesn’t have a simple yes or no answer—it depends on the context. The Word of Machine effect reveals that trust in AI is situational, driven by whether decisions are practical or emotional. While consumers rely on AI for logic and efficiency, they turn to humans for empathy and nuance.

For marketers, this balance between machine precision and human intuition is key to shaping trust and driving results. Businesses that understand these dynamics can align AI use with consumer expectations, creating experiences that are both efficient and deeply personal.

Ultimately, the tools we trust most are those that not only solve our problems but also respect our values and intentions. The future of marketing belongs to companies that master this balance—and win their customers’ trust along the way.

#marketing #ai #artificialintelligence

Source:

Longoni, Chiara, and Luca Cian. “Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The ‘Word-of-Machine’ Effect.” Journal of Marketing, vol. 86, no. 1, 2022, pp. 91–108.
