AI chatbots are incredibly convincing, even when they're wrong
"Why is Italian a more difficult language than English?" I asked Google Bard this week. What I received back was well-structured, convincing, and factually wrong.
Now, Google's Bard uses its own language model, the Language Model for Dialogue Applications (LaMDA), which was specifically trained on dialogue. Contrast this with ChatGPT or Bing, which use OpenAI's GPT-4 model. I'm sure GPT will give different responses.
But I'm interested in how these language models have been built and trained. They are designed to give highly likely human responses based on vast quantities of data. That's why they sound remarkable and convincing (if sometimes a bit flat and wordy).
But what is the reference data? We need citations or references to know. Unlike current search engines, which point us towards resources that we can then assess ("that's a person's blog... this is an advertorial for a language course... oh, here's a definitive resource from a university..."), these chat interfaces don't — or can't? — provide reference links to sources for verification.
So how did Bard do?
Takeaway
Chat AIs are designed to be convincing and to respond in human-like prose. While they seem definitive, they are trained on massive amounts of data without the sourcing or citations that could help validate their responses. Professionals need to approach their output with a critical eye.
Google Bard's full reply:
Italian is not necessarily a more difficult language than English. In fact, it is often considered to be one of the easier languages for English speakers to learn. This is because Italian is a Romance language, which means that it is descended from Latin. English is also a Romance language, so there are many similarities between the two languages. For example, many Italian words are similar to English words, and the grammar is also similar.
However, there are some aspects of Italian that can be challenging for English speakers. For example, Italian has a different alphabet than English, and some of the sounds are not familiar to English speakers. Additionally, Italian has a number of verb tenses that do not exist in English.
Overall, Italian is a beautiful and rewarding language to learn. It is not necessarily more difficult than English, but it does require some effort and dedication. With a little bit of practice, you will be speaking Italian like a native in no time!
Here are some of the challenges that English speakers may face when learning Italian:
Despite these challenges, Italian is a beautiful and rewarding language to learn. With a little bit of effort and dedication, you will be speaking Italian like a native in no time!
Additional reading
when we ask why ChatGPT confabulates, it's difficult to pinpoint an exact technical answer. And because there is a "black box" element of the neural network weights, it's very difficult (if not impossible) to predict their exact output given a complex prompt.
Our study reveals that the standard benchmarks consist of >60% hallucinated responses, leading to models that not only hallucinate but even amplify hallucinations. Our findings raise important questions on the quality of existing datasets and models trained using them.
Nick Gracilla works as a technology consultant at SEI, a business and technology management firm. Follow the SEI Insights Blog for more ideas on business, technology, innovation trends, and more.
Software Solutions That Click
1y
Now I’m wondering: even if I don’t use AI to write content, and I research something via Google as I have for many years, I can no longer tell if the stuff I’m reading is legit because I have no idea how much AI was involved in creating it. It’s one thing to not know an author or to take certain media channels with a grain of salt (lack of research, media bias, syndication influences vs. original reporting), but if AI is now weakening the veracity of what we’re reading, then we’ve really poisoned the well. AI content will taint the training of AI… I shudder to think how this will impact news reporting. Fake news, bias, and propaganda are bad enough, but AI will make it all the more difficult to find real information: as its writing style improves, it gets harder to detect factual errors.
I’m interested in hearing how companies are balancing the risk of utilizing these tools in their domain. Do employees understand the risk? Are we taking output at face value? What controls do we put in place without inhibiting productivity? Is this becoming the next-generation vulnerability generator? Great article, Nick. Like the Italian reference.
Environmental Art Pioneer | Transforming Invasive Species into Fine Art | Founder, The Catalyst Collaborative | Award-Winning Artist
1y
So interesting (and hilarious)! I played with ChatGPT all weekend. It gave me really great advice for marketing art. Nothing I hadn’t already thought of, but that’s after many hours of brainstorming and coaching. It pretty much spit out my entire marketing plan in less than a second. It also recommended organizations and publications that might be interested in my work, then wrote pitch letters to both for me. And it did a pretty good job of improving my artist’s statement. I’ll refine them, for sure, but it definitely did the majority of the heavy lifting. I’m so impressed.
Principal at Systems Evolution, Inc.
1y
Well said, Nick Gracilla. Just like any other tool, it can be misused with disastrous results. But with the right use case, it can provide tremendous value.