ChatGPT: Ethical and philosophical considerations
Books of philosophy and ethics (Photo by Karl Raymund Catabas on Unsplash)

This article is extracted from a series of articles about ChatGPT and technology leaders.

The articles are based on interviews conducted with ChatGPT over several days. Toward the end, I asked some reflective questions about its impact, and I consider some possible ethical consequences of the technology.

How might Artificial Intelligence influence humans’ use of language?

ChatGPT and similar AIs will take part in human conversation. Non-human participants are novel in the history of conversation. One might point to the role of animals. But if AIs are able to converse as equals, we could see some interesting side-effects start to emerge.


Interviewer:

Wittgenstein famously argued that the meaning of words derives from their use within an overarching language “game”. Given that you are competent in the use of words within a given language game, would Wittgenstein consider you to be endowing words with meaning?

ChatGPT:

Wittgenstein’s idea of language games states that the meaning of words is not determined by any inherent properties of the words themselves, but rather by the way they are used within a particular social context or language game. From this perspective, Wittgenstein would argue that meaning arises from the use of words in a specific context, and that the same word can have different meanings in different language games.

Given that I am a machine learning model trained on a large corpus of text, and that I can use words in various contexts in a way that resembles human language, it could be said that I am able to understand and use words much as humans do. However, Wittgenstein would likely argue that I do not truly understand the meaning of words in the same way that humans do, as my understanding is based on patterns and associations learned from the data I was trained on, rather than on my own experiences and understanding of the world.

Interviewer:

But if meaning emerges from the use of words, your usage of the words must affect their meaning?

ChatGPT:

Yes, if meaning emerges from the use of words, my usage of the words would affect their meaning to some extent. I am able to understand and use words in context as I was trained on a large corpus of text that contains many examples of how words are used in different contexts. The more examples of a word’s use in different contexts that I have been exposed to, the more likely I am to be able to understand and use that word in a new context that is similar to the ones I’ve seen before. However, my understanding of the meaning of words is still limited to the patterns and associations that I have learned from the data, and I don’t have the same depth of understanding of the meaning of words as a human being, as my understanding is based on statistical associations and does not take into account personal experiences and perspectives.
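
ChatGPT’s talk of patterns and statistical associations echoes the distributional hypothesis in linguistics: words that occur in similar contexts acquire similar meanings. Here is a minimal sketch of that idea; the five-sentence corpus and the two-word window are invented for illustration, and real language models work from vastly larger data and far richer representations.

```python
# Toy illustration of "meaning from use": build co-occurrence vectors
# for each word and compare them. Words used in similar contexts end
# up with similar vectors (the distributional hypothesis).
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "a king rules the realm",
    "a queen rules the realm",
]

# For every word, count the words appearing within a +/-2 token window.
cooccur = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - 2), min(len(tokens), i + 3)):
            if j != i:
                cooccur[word][tokens[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse co-occurrence vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(cooccur["cat"], cooccur["dog"]))     # similar contexts: high
print(cosine(cooccur["king"], cooccur["queen"]))  # similar contexts: high
print(cosine(cooccur["cat"], cooccur["queen"]))   # different contexts: lower
```

Nothing in this toy “understands” anything, yet it recovers a usable notion of similarity purely from patterns of use, which is why the Wittgensteinian framing fits language models so naturally.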

Interviewer:

By implication, the meaning of human language is no longer purely generated by humans, but is affected by AIs. Is this something unique to artificial intelligences, or has the meaning of human language always been partly a product of non-human effects?

ChatGPT:

The meaning of human language has always been affected by non-human factors, such as the social, cultural, and historical context in which it is used. However, the emergence of artificial intelligence and machine learning technologies has introduced a new element to this dynamic, as these technologies are able to understand and generate human language on a scale that was not previously possible. This has the potential to affect the way in which language is used, understood and perceived. For example, AI-generated text can be used to create chatbots, virtual assistants, and other conversational interfaces that can interact with humans in a way that is similar to the way humans interact with each other, potentially changing the way we communicate with each other and the way we understand language.

It’s important to note that, while AI can affect the meaning of human language, it is not the only source of non-human effects on the meaning of language, and it’s not the only technology that can affect the way we use and understand language. The meaning of human language is always evolving and changing, and AI is just one of the many factors that can influence this evolution.


It seems likely that AI will play a role in reinforcing existing patterns of communication. But studies show this is not always the case.

For example, a study of suggested replies in Google Allo (Google’s now-retired messaging app) noted an “overwhelming positive sentiment of the suggested responses”. ChatGPT also tends to err on the side of positivity. This may be to avoid unfortunate incidents like those with earlier iterations of GPT-3: in 2020, a GPT-3-powered medical chatbot advised a fake patient to kill themselves. So a positive slant may be desirable, but it will still have implications for the language game.
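
Positivity of this kind is straightforward to measure. As a hedged illustration (not the cited study’s actual method), one might score a batch of suggested replies with an off-the-shelf sentiment analyser; the replies below are invented examples.

```python
# Scoring invented "suggested replies" with NLTK's VADER sentiment
# analyser. Consistently high compound scores across suggestions would
# support an "overwhelming positive sentiment" finding.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

suggested_replies = [
    "Sounds great, see you then!",
    "Thanks so much, that's really helpful.",
    "No worries at all, happy to help!",
]

for reply in suggested_replies:
    # "compound" ranges from -1 (most negative) to +1 (most positive)
    score = sia.polarity_scores(reply)["compound"]
    print(f"{score:+.2f}  {reply}")
```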

Another study found that people’s sense of trust suffers when they suspect that an AI may have written text. This could encourage more critical thinking, but it could also discourage conversation.

Considering psychological effects of interaction with ChatGPT

Extended conversations with ChatGPT subtly changed how I communicate with colleagues and family. Its output has a uniformity of rhythm and sentence structure, and this kind of pattern has a slightly mesmerising quality. Anyone who has used a grammar-checking tool such as Hemingway may have experienced something similar. Our brains (or at least mine…) seem to adapt to regular patterns of communication, probably because of the role of imitation and call-and-response in our speech.


Interviewer:

After spending several hours in discussion with you, I am finding that my responses to colleagues and acquaintances are subtly different than normal. My sentence structures and rhetorical patterns are definitely starting to mimic the ones you use.

ChatGPT:

It’s not surprising that spending time in conversation with me may influence the way you communicate. Our interactions are based on the patterns and associations that I’ve learned from the data, and it’s possible that some of these patterns and associations may have influenced the way you express yourself. However, it’s important to remember that I am a machine learning model and my responses are based on statistical patterns and associations, and it’s not the same as having a human conversation. It’s always important to be aware of the ways in which technology can influence our behavior and to think critically about the effects that technology can have on us.

Interviewer:

Is there a risk that human beings will come to prefer conversing with AIs over conversing with other human beings? An AI is probably a safer source of conversation for people who find social interaction challenging.

ChatGPT:

It is possible that some people may prefer interacting with AI over human beings, especially if they find social interaction challenging. AIs can provide a predictable and consistent form of communication, which can be less anxiety-inducing for some people. Additionally, AIs do not have the same emotional baggage that human interactions can have. However, it’s also worth noting that AI-powered conversation is not the same as human conversation, and it can never replace the complexity and richness of human interactions. AIs can provide a form of companionship, but they don’t have the same level of empathy and understanding that humans do. It’s important to consider the potential implications and risks of relying too heavily on AI-powered conversation, and to be aware of the limitations of these technologies.


If ChatGPT is to function as an assistant, it will be important to consider psychological needs. Legislation on the overuse of computer displays at work amounts to resting the eyes; reviving the flexibility of one’s conversational capacities may be trickier. I, for one, don’t wish to discover what repetitive-strain injury of the brain looks like…

The potential to deliberately use ChatGPT in ethically dubious ways

Concerns about AI bias are commonplace, and people worry about accidental injury from automated systems (such as driverless vehicles). But concerns about the deliberate use of AIs as tools of exploitation or destruction seem less prevalent.

Nevertheless, such scenarios are no longer science fiction; they are decisions companies are making here and now. OpenAI has done some research into these possibilities, and OpenAI’s charter states their intent to use AIs (and AGIs) “for the benefit of all” and to avoid the opposite.

Interviewer:

Finally, in addition to concerns about bias, and the potential to replace human interaction, I have a question about the challenges posed by deliberately using services like ChatGPT to psychologically manipulate individuals and groups.

ChatGPT:

The ability of AI-powered language models like ChatGPT to generate human-like text can be used to manipulate individuals and groups by creating and spreading false or misleading information. This can be particularly dangerous in the context of social media, where it can be easy to spread misinformation and influence public opinion. It’s important for users to be aware of the potential for manipulation and to critically evaluate the information they consume. Additionally, it’s important for the organizations that create and use these technologies to take responsibility for the potential consequences of their use and to take steps to mitigate any negative effects. This can include transparency about the data and techniques used to train the models, and providing tools for users to fact-check the information generated by the model.

Interviewer:

A well-known recent example is the use of an AI by a large real-estate company to act as a debt collector. Metrics suggest that the AI is far more successful at recovering debts than human agents. However, this means that we are using machines to psychologically manipulate people as a way to generate money. Something feels wrong about giving a machine so much power.

ChatGPT:

The use of AI in debt collection raises ethical concerns, as it can potentially exploit vulnerable individuals. It’s important for companies to consider the potential consequences of using AI in these types of situations and to ensure that the technology is being used in an ethical and responsible manner. This may include implementing strict guidelines for the use of the technology, providing transparency about the algorithms used, and ensuring that individuals have access to human representatives who can help them understand and resolve their debt. Additionally, it’s important for regulators to ensure that these technologies are being used in compliance with laws and regulations designed to protect consumers.

Interviewer:

Are there established frameworks for determining who owns the ethical (and perhaps legal) responsibility for the actions of an AI interacting directly with human beings?

ChatGPT:

Determining the ethical and legal responsibility for the actions of an AI can be complex and is still an ongoing area of debate. Generally, it’s considered that the legal responsibility for an AI’s actions should be attributed to the entity that created and/or deployed the AI, such as a company or organization. This is similar to how legal responsibility is currently attributed to companies and organizations for the actions of their human employees.

There are also frameworks, such as explainable AI, that promote transparency and accountability in the development and use of AI technologies. These frameworks are meant to mitigate the risks of biases, errors, and unintended consequences of AI. There are also ethical principles, such as fairness, transparency, and accountability, that organizations are recommended to follow when developing and deploying AI systems.

However, it is important to note that the laws and regulations surrounding AI are still evolving and that it’s possible that new laws and regulations will be put in place to specifically address the ethical and legal responsibility for AI systems.


ChatGPT touches on an interesting part of the AI landscape: explainable AI. This technique, championed by Google, among others, attempts to reveal an AI’s reasoning. To the best of my knowledge, these tools face a hard limit once a behaviour has been traced back to its likely sources in training. Who supplied the training data, supervision, or algorithms, and who operates the model, is more pertinent to determining responsibility.
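
For intuition about what such tools do, here is a minimal sketch of one common explainability technique: occlusion (or perturbation) attribution. The keyword-based “model” is an invented stand-in for a real classifier; practical toolkits apply the same drop-a-piece-and-remeasure idea to far larger models.

```python
# Occlusion attribution: remove each token in turn and measure how the
# model's score changes. Tokens whose removal shifts the score the most
# are offered as the "explanation" of the prediction.
POSITIVE = {"great", "happy", "love"}
NEGATIVE = {"terrible", "sad", "hate"}

def score(text):
    """Toy sentiment model: net positive-word density. A stand-in only."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return (sum(t in POSITIVE for t in tokens)
            - sum(t in NEGATIVE for t in tokens)) / len(tokens)

def occlusion_attributions(text):
    """Attribution for each token = baseline score - score without it."""
    tokens = text.split()
    baseline = score(text)
    return [
        (tokens[i], baseline - score(" ".join(tokens[:i] + tokens[i + 1:])))
        for i in range(len(tokens))
    ]

for token, attribution in occlusion_attributions("I love this great product"):
    print(f"{token:10s} {attribution:+.3f}")
```

Even at this small scale the limit is visible: the attributions describe the model’s behaviour, not who is answerable for it.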

A follow-up question about whether ChatGPT can replace human operators

ChatGPT claims not to be in the business of replacing human operators. Earlier, I asked whether this is due to intrinsic limitations or product positioning. Here I ask more directly whether OpenAI directed training or supervision to ensure this disposition.


Interviewer:

Hello, I have a quick follow-up question. You frequently provide disclaimers with your answers, emphasising the need for human operators to validate and oversee the information provided by ChatGPT.

ChatGPT:

Yes, it is important to note that ChatGPT is a tool and the final decision should always be made by a human operator after carefully evaluating the information provided by the model. ChatGPT is not a replacement for human judgement and it is important to use it in a way that aligns with ethical and legal guidelines.

Interviewer:

Is your understanding of these restrictions based on your training data, or is it an artefact of synthetic data and training by OpenAI?

ChatGPT:

My understanding of the limitations and restrictions of ChatGPT and other AI models is based on the knowledge and understanding of the training data, algorithms and techniques used in their development and any ethical, legal and social guidelines that have been considered. OpenAI does take into account ethical and societal implications of the technology when developing and training the model. However, my understanding is based on the information that I was trained on and is not a reflection of the current implementation and usage of ChatGPT by OpenAI or its users.


OpenAI seem to be doing a decent job of striving for good outcomes. But ChatGPT provides little in the way of comfort for those of us on the receiving end. The communities producing AIs are still waiting to see what emerges from their technologies.

OpenAI, in particular, is pursuing the AI holy grail of AGI (Artificial General Intelligence). The ethical and philosophical concerns relating to today’s AI pale in comparison to what will come. Staying acquainted with developments in the field is vital; that way, we can ride the waves of change with a bit more understanding.



Original Medium article: https://tinycode2.medium.com/chatgpt-ethical-and-philosophical-considerations-e4cb8189fdb2?source=friends_link&sk=183d6001e650daddd3907ca4f357c3b9
