Beware: ChatGPT can suggest an academic article that does not exist.
Andreas Raharso
Author of Escape from System1, Fellow NUS School of Computing, Founder, Keynote Speaker, Consultant.
One of the biggest dangers of ChatGPT spreading incorrect information is that it can lead to poor decision-making by CEOs and other leaders. Large language models such as ChatGPT are trained on vast amounts of data, which can include biased or inaccurate information. Because these biases are very difficult to eliminate, ChatGPT can perpetuate them and produce incorrect information.
As an example, I asked ChatGPT to locate studies on ecosystem-based marketing for me. ChatGPT's answer is shown below (in italics).
ChatGPT
Certainly! Here is a research article related to ecosystem-based marketing that you may find useful:
Title: "Ecosystem-based Marketing: A Conceptual Framework and Future Research Directions"
Authors: Stephan A. Butscher and Christian Lüthje
Journal: Journal of Business Research
Year: 2018
In this article, the authors propose a conceptual framework for ecosystem-based marketing and provide insights into how firms can build and leverage ecosystems to create value for customers and other stakeholders. The authors draw on previous research in marketing, strategy, and innovation to develop their framework and identify future research directions for scholars interested in studying ecosystem-based marketing.
You can access the full article through academic databases or by searching for it on the journal's website.
Quite convincing, don't you think?
I searched Google Scholar but couldn't find it. I went to the actual journal and couldn't locate it there either, and I thought to myself, this is strange. I refused to give up. I searched by each author's name, and neither author had ever written an article related to ecosystem-based marketing. Finally, I realized that the article simply did not exist.
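If you want to automate this kind of sanity check, a minimal sketch in Python against the public Crossref REST API might look like the one below. The /works endpoint and the query.bibliographic parameter come from Crossref's public documentation; the exact response fields are assumptions you should verify against a live response.

```python
import requests

def find_citation(title: str, rows: int = 5) -> list:
    """Ask Crossref for published works whose metadata matches `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    title = ("Ecosystem-based Marketing: A Conceptual Framework "
             "and Future Research Directions")
    for work in find_citation(title):
        # Crossref stores titles and journal names as lists; guard against
        # records where either field is missing or empty.
        print((work.get("title") or ["<untitled>"])[0],
              "|", (work.get("container-title") or ["<no journal>"])[0])
```

Note that Crossref returns its closest matches rather than exact hits, so the real check is whether any returned title and journal actually line up with the citation ChatGPT gave you.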
I went back to ChatGPT to protest the recommendation, forgetting that ChatGPT is not a person and may not know whether its answer was correct or incorrect. I found ChatGPT's answer to my complaint amusing (in italics).
ChatGPT
I'm sorry for the confusion in my previous response. You are correct that there is no article titled "Ecosystem-based Marketing: A Conceptual Framework and Future Research Directions" by Stephan A. Butscher and Christian Lüthje in the Journal of Business Research from 2018.
I apologize for any inconvenience this may have caused. If you are looking for information on ecosystem-based marketing, there are other articles and resources available on this topic that may be helpful to you. Please let me know if you have any further questions or if there is anything else I can assist you with.
Isn't that a cute answer?
The large language models that power ChatGPT are incredibly complex and opaque, which makes their behavior hard to predict. As a result, they may produce untruthful results without anyone realizing it.
ChatGPT is a large language model built on the GPT-3.5 architecture. It uses natural language processing (NLP) techniques to understand user queries and respond to them in a conversational manner.
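For readers who want to reproduce this kind of query programmatically rather than through the web interface, a minimal sketch using OpenAI's official Python SDK might look like the following. The client interface and model name are assumptions about your installed package version; my own experiment was run in the ChatGPT web interface.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Find me research articles on ecosystem-based marketing."},
    ],
)

# Print the model's conversational reply -- which, as this article shows,
# still needs to be verified against real sources.
print(response.choices[0].message.content)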
But here’s the catch: ChatGPT’s responses only touch reality at a tangent. An answer may not directly address the reality of the situation or the question posed; it is merely related to the topic, a tangent to the main discussion. In other words, while ChatGPT may sound convincing, its responses are ultimately its own fictional creations.
Even worse, ChatGPT can tell you different things about the same question simply because you asked it in a different language, something that does not happen with humans.
?
This catches us off guard because we humans, understandably, anthropomorphize these systems, treating them as if they were expressing some internalized piece of knowledge in whatever language happens to be selected. If the weather is gloomy and cold today, the facts stay the same no matter which language describes them; the expression is distinct from the idea.
This is not the case with large language models, because they do not know anything in the sense that people do. These statistical models find patterns in sequences of words in their training data and predict which words are likely to come next.
Can you see the problem? The answer isn't truly an answer; it is a statistical prediction of how such a question tends to be answered.
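To make that concrete, here is a toy next-word predictor in Python. It is nothing like GPT's neural architecture, but it illustrates the same principle: the output is sampled from word statistics, not retrieved from knowledge.

```python
import random
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the model's training data.
corpus = ("the article appeared in the journal . "
          "the article was cited in the journal . "
          "the journal published the article .").split()

# For each word, count which words follow it (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate text: it looks plausible, but it is grounded in nothing
# beyond word statistics -- no fact about any journal is consulted.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

The generated sentence can read perfectly fluently, yet at no point does the process check whether the article, the journal, or the claim actually exists.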
One of the biggest dangers of relying on fictional information is the misallocation of resources. CEOs and leaders constantly seek to make informed decisions backed by data and evidence, and academic journals are a key source of that evidence. A leader who bases decisions on academic references that do not exist may allocate resources in a way that is not aligned with the reality of the situation, resulting in missed opportunities and wasted resources that could have been better invested elsewhere.
To summarize, it is critical that you, as a CEO or leader, are aware of the dangers of AI-generated fictional answers and take proactive steps to mitigate these risks.