How human-like are ChatGPT responses? Perhaps too human?
Kevin Brown
Digital Strategist & Innovator - Team Builder - Leadership Coach - Story Teller - Human =)
There is general consensus that ChatGPT responses are surprisingly good. It was trained on massive data sets taken from the internet, with access to millions of articles. These selected data sets were generally reputable sources, and we can comfortably assume the majority were created by humans. So it is logical to assume that ChatGPT had a reasonable training set and could be capable of producing accurate, human-like answers.
Today, many people believe the answers are good enough that the whole world now needs to take this amazing tool into consideration and think about how it will affect work and life, from writing school essays and news articles to book writing and legal responses, to name a few.
But then, the ChatGPT creators and other technical experts warn that while the responses may sound very human-like, they can include factually incorrect or untruthful information, be verbose, occasionally inject nonsense ("hallucinate"), and otherwise be misleading at times. They warn that it is of utmost importance to review and check the facts before using ChatGPT responses. What a noble and responsible piece of advice they give for their tool.
So we have been warned by the experts that it is not as good as a human, since the answers may be "factually incorrect, verbose or occasionally inject nonsense". Wait a second!
For years now we have been dealing with rampant misinformation, new narratives, and false truths. That is what "humans" are putting on the internet, while often burying the actual facts.
From that perspective, we could argue that ChatGPT might be more human-like in its writing than we want to believe. You know, a little verbose, factually inaccurate, and occasionally injecting nonsense. I am quite comfortable assuming we all come across that nearly every day.
But now we criticise ChatGPT for not being accurate enough to be considered a human response. It sounds like what we really want ChatGPT to be is a factually correct technical expert with perfect morals. Is that what we mean by human? Unfortunately, over the last few years there has been a growing lack of trust and faith in our human technical experts. Now suddenly we are expecting it from a machine. Rather ironic if you think about it. To me, ChatGPT is an incredible mirror of ourselves and of what the collective "we" is putting on the internet.
However, I am now inspired. To prevent ChatGPT errors, we are being coached to use ChatGPT in areas in which we have enough expertise to check the output. With this coaching, and by looking for errors when using ChatGPT, perhaps it will help us build new habits of consistently questioning facts and correcting nonsense, wherever the information comes from.
From my perspective, while ChatGPT may be writing like a human, errors and all, it may be kickstarting the development of new global personal habits to check facts, validate information and work with accurate data.
I am hopeful that with that new habit, and the use of new AI tools and machine learning validation processes, we will collectively create more accurate content. This better content may improve the quality of information in reputable places on the internet. Time will tell.
For now, even with all its flaws, I just want to say "Thank you ChatGPT for being so human - and yet still helping us all improve in ways we did not imagine."
I look forward to checking your work tomorrow!
PS. I honestly don't mind a little nonsense from time to time. That actually is what makes us human.