Are you polite to ChatGPT? Navigating the risks of treating LLMs as human
Anna Haverinen, PhD (she/her)
Partner | Management Consulting | Customer Insights | Strategy | Coach | Anthropologist
As language models continue to advance, the temptation to anthropomorphize these systems and treat them as human-like entities grows stronger. However, this approach carries significant risks that must be carefully considered, especially by designers and software developers creating services and features with and around LLMs.
One of the classic anthropological theories concerns how language shapes our worldview: the Sapir-Whorf hypothesis. While large language models are remarkably adept at producing human-like text, they lack the deeper understanding and contextual intelligence that characterize genuine human communication and reasoning.
I have been deeply interested in how people talk about – and interact with – language models in ways that could be described as treating them as peers, as colleagues, or as people in general. I’ve started several face-to-face and online discussions about this theme, and three motivations keep coming up when people describe why they are mindfully polite to LLMs (asking nicely, saying thank you, using social courtesies):
1. Just in case
While this reason is often loaded with sarcasm, plenty of studies have been conducted on how people perceive AI and trust algorithmic systems. Being polite to AI is a “just in case” scenario, very similar to practicing religious rituals out of habit and “just in case there is a judgment day someday”.
2. I’m a polite person – to all
This reason is often expressed as a default social behavior – being polite is a virtuous trait, and it should extend to interacting with AI as well. It's also a way to teach LLMs appropriate social behavior, in contrast to the violent material found elsewhere online.
3. The UI resembles a dialogue with a colleague
Current LLM user interfaces resemble a back-and-forth chat with a colleague, which triggers a natural inclination to engage with the AI in a human-like manner.
While these motivations are understandable, it is critical that we resist the urge to anthropomorphize language models.
But is there harm in being polite?
Absolutely not. (Kinda.)
However, anthropomorphizing LLMs beyond a basic courteous affect could lead to several concerning outcomes:
First, it may instill a false sense of reciprocity and trust in the capabilities of the system, leading users to over-rely on the model or over-attribute human-like qualities to it. Language models, no matter how advanced, are ultimately statistical pattern matchers trained on huge corpora of text data, and they lack the deeper understanding, reasoning, and contextual intelligence that characterize human cognition. One discussant in one of my social media posts on this topic claimed that women and children were, in the past, not considered cognitive beings capable of conversing about complex topics – so why should we treat AI differently? This is a prime example of over-attributing human-like qualities to a technology.
Second, anthropomorphism could make users more vulnerable to manipulation, as they may be more inclined to trust and confide in the system. The ease with which LLMs can engage in conversational exchange may lead users to let their guard down and share sensitive information, or to be influenced by the system's responses in ways they would not with a clearly non-human agent.
This is why one key concern about AI is the potential for misuse and unintended consequences. Language models can be employed for both harmful and beneficial uses, and their impact may depend heavily on the context in which they are deployed. For example, a text-to-image model could be used with consent as part of an art piece, or without consent as a means of producing disinformation or harassment.
What does it mean to be AI-assisted humans?
The growing prevalence of conversational AI assistants, such as ChatGPT, has led to a concerning trend where individuals often fail to recognize the underlying nature of these systems (Kasneci et al., 2023). Rather than acknowledging the technological underpinnings, users may develop a false sense of trust and over-reliance, believing that they are interacting with a human-like entity. This risks obscuring the fundamental differences between human communication and the output of language models, potentially leading to a breakdown in the clear understanding of the limitations and capabilities of these AI systems.
To mitigate the risks of anthropomorphism, it is essential for developers and users to maintain a clear and transparent understanding of the nature of language models. This raises a few questions.
The most fundamental question is philosophical and ethical: to what extent should we treat these systems as possessing human-like attributes, such as agency, autonomy, and even consciousness? Sci-fi literature contains many cautionary (and also utopian) tales about the dangers of treating AI as human-like, from Isaac Asimov's Robot stories to the Terminator franchise (which seems to be the one most often mentioned in the sarcastic response “I’m polite just in case [there will be Skynet]”).
These questions have far-reaching implications for the development and deployment of AI-powered language models. Given the complexity and potential impact of these systems, designers, product owners and software developers should be mindful of the unintended implications of their choices when creating AI-based applications. Product design still lacks an established process for applying ANY ethical considerations, which is why we are in such a hurry to put an ethical AI development process in place.
Some ideas on how to consider anthropomorphism and AI design
Humans tend to anthropomorphize technology and things. It's a universal part of the human experience. We name our cars and our Roombas; we go “aaawww” at delivery robots (or sometimes lash out at them). To address these risks, a multifaceted approach is needed that involves technical, ethical, and cultural considerations.
1. Make it clear that the user is talking to a machine.
This can be achieved through visual design choices, such as using non-human avatars or interfaces that clearly distinguish the AI from a human user. For example, using distinct color schemes for the AI's responses or employing robotic voice synthesizers can serve as constant reminders that the interaction is not with a human; a minimal sketch of this idea follows below. However, as we have seen recently in many popular applications, AI-based features are seeping into existing software and applications, where they become almost indistinguishable from human-created content.
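As a rough illustration, here is a minimal TypeScript/React sketch of such a design choice; the component and field names are invented for this example, not taken from any real product:

```tsx
import React from "react";

// Hypothetical message shape; the names are illustrative only.
type ChatMessage = {
  sender: "human" | "ai";
  text: string;
};

// Render AI turns with a distinct background, a dashed border, and an explicit
// "AI assistant" label, so the non-human source stays visible in every turn.
export function MessageBubble({ message }: { message: ChatMessage }) {
  const isAi = message.sender === "ai";
  return (
    <div
      style={{
        background: isAi ? "#eef2ff" : "#ffffff", // distinct color scheme for AI turns
        border: isAi ? "1px dashed #6366f1" : "1px solid #e5e7eb",
        borderRadius: 8,
        padding: 12,
        margin: "4px 0",
      }}
    >
      <strong>{isAi ? "🤖 AI assistant" : "You"}</strong>
      <p>{message.text}</p>
    </div>
  );
}
```

The specific styling is beside the point; what matters is that every AI turn is persistently and unmistakably marked as machine output rather than blending in with human messages.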
2. Be transparent about the capabilities and limitations, AND educate users.
Critically examine different scenarios and consequences of your deployment decisions when shipping new features; this can help set realistic expectations and avoid the pitfalls of anthropomorphization (Feldman et al., 2019). While AI systems may never fully explain their reasoning, providing insights into the data sources or decision-making processes can help users understand the limitations and avoid attributing human-like qualities. Imagine a chatbot that, after answering a question, provides a brief explanation like "My response is based on analyzing patterns in a large dataset of text and code"; this kind of transparency can ground the user's understanding (a sketch of it follows below). Education can involve incorporating disclaimers within the interface, providing tutorials, or even gamifying the learning process to foster a more informed and discerning user base.
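To make that concrete, here is a minimal TypeScript sketch of the disclaimer idea; `generateReply` is a hypothetical stand-in for whatever LLM call a product actually uses, and the wording of the notice is invented for this example:

```ts
// Standing provenance notice attached to every model reply.
const PROVENANCE_NOTE =
  "Note: this answer was generated by a language model that matches patterns " +
  "in a large dataset of text; it may be incomplete or wrong.";

// Placeholder for a real LLM API call; it echoes the prompt so the sketch runs as-is.
async function generateReply(prompt: string): Promise<string> {
  return `You asked about: ${prompt}`;
}

// Public entry point: answer the question, then attach the transparency note.
export async function answerWithDisclosure(prompt: string): Promise<string> {
  const reply = await generateReply(prompt);
  return `${reply}\n\n${PROVENANCE_NOTE}`;
}

// Example usage:
// answerWithDisclosure("What is the Sapir-Whorf hypothesis?").then(console.log);
```

Attaching the note at the boundary where replies leave the model keeps the disclosure consistent, instead of relying on each feature team to remember it.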
3. Ethical guidelines and regulations.
These should be developed to ensure that AI-powered language models are designed and deployed responsibly. We need more public discussion within – and outside – the industry to shape shared norms, best practices, and comprehensive frameworks that can guide the ethical and safe development of these technologies. Does your company invest time and resources so that you and your team can consider this?
By implementing these design strategies, we can encourage a more balanced and realistic perception of AI systems, fostering a future where humans and AI can collaborate effectively without the pitfalls of misplaced anthropomorphism.
Remember: just because we can doesn’t mean we should.
Sources: