Truth, Lies, BS, and the Frightening Future of AI Models
David Meerman Scott
Author of 12 books including NEW RULES OF MARKETING & PR and WSJ bestseller FANOCRACY | marketing & business growth speaker | advisor to emerging companies
In our polarized world, we cannot agree on the truth. Who won the 2020 US Presidential election? Are vaccines safe and effective? What’s going on with university campus protests? When we add dumb AI robots like the models underlying ChatGPT spitting out “answers,” things get super scary.
I’m a fan of the Marketing AI Institute podcast and look forward to each week’s episode, when hosts Paul Roetzer and Mike Kaput jump into a range of quick-fire subjects. In a special episode last week, they discussed a conversation on The Ezra Klein Show with Dario Amodei, the former vice president of research at OpenAI and now co-founder and CEO of Anthropic, the company behind the Claude series of large language models. The show’s topic: What if Dario Amodei Is Right About A.I.?
“It’s one of the craziest interviews I’ve ever listened to,” Paul says. “If we, as a society, can’t agree on truth, how do we build models that agree on truth? Anyone can build whatever they believe truth to be. It can create cults, it can create new religions – all these things because they are insanely good, they’re superhuman at persuading people to believe something.”
I listened to the Amodei interview myself a few days ago and have been thinking about the issues raised since then.
AI models: the ultimate bullshit artists
In the interview with Dario Amodei, Ezra Klein brought up the book “On Bullshit” by Harry Frankfurt.
The “On Bullshit” book description reads, in part: “Bullshitters seek to convey a certain impression of themselves without being concerned about whether anything at all is true. They quietly change the rules governing their end of the conversation so that claims about truth and falsity are irrelevant. Liars at least acknowledge that it matters what is true. By virtue of this, Frankfurt writes, bullshit is a greater enemy of the truth than lies are.”
Ezra Klein: “When I began interacting with the more modern versions of these systems, what they struck me as is the perfect bullshitter, in part because they don’t know that they’re bullshitting. There’s no difference in the truth value to the system, how the system feels.”
Dario Amodei: “If you think of how the models are trained, they read a bunch of stuff on the internet. A lot of it’s true. Some of it, more than we’d like, is false. And when you’re training the model, it has to model all of it.”
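Amodei’s point is mechanical, not philosophical: the standard training objective rewards predicting the next word, and contains no term for whether a sentence is true. Here is a minimal, purely illustrative sketch for technically minded readers: a toy word-counting “model” of my own invention, nothing like a production LLM, that learns from a true sentence and a false one in exactly the same way.

```python
# Illustrative toy only: a next-word "model" trained by counting word pairs.
# The point: the training signal measures likelihood, never truthfulness.
from collections import Counter, defaultdict

corpus = [
    "the earth orbits the sun",   # a true statement
    "the sun orbits the earth",   # a false statement; the objective can't tell
]

# "Training": count which word follows which, across ALL text, true or false.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word_probs(prev):
    """Predict the next word from counts, with no notion of truth."""
    counts = follows[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# Both sentences shaped the model identically:
print(next_word_probs("earth"))  # {'orbits': 1.0} -- learned from the true sentence
print(next_word_probs("sun"))    # {'orbits': 1.0} -- learned from the false one
```

Scale that counting idea up to trillions of words and a neural network, and you get the dynamic Amodei describes: the model “has to model all of it,” the accurate and the inaccurate alike.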
"Truth" and bad actors
Tools like ChatGPT are great at many things, and I use AI nearly every day in my work. As I interact with the various large language models, I don’t really think about “truth”. Clearly there is a lot of bad information out there that these models have been trained on, but as with Google or other search engines, I tend to be careful with the results and give them a personal “smell test” before using them.
I’m an outlier because I understand the basics. I’ve been working in and around the professional information business for nearly 40 years. Most people simply don’t consider “truth” in what they read or find online.
Imagine a future where bad actors deliberately create AI models that are false or that skew the truth.
“These things can do so much damage because they’re perfect,” Paul says. “They don’t care about the truth. They have no relation to the truth. They just achieve whatever objective is set out. The leaders of these AI companies all think these models may go very bad. But publicly they say we have to build the smartest versions possible, and we’ll just figure it out.”
This is an important topic. I encourage you to listen to Ezra Klein’s interview of Dario Amodei as well as Paul and Mike’s analysis of it.
Also! At the Marketing AI Institute’s AI for B2B Marketers virtual event on June 6, 2024, I will be moderating a keynote discussion: “The Reality of AI Adoption in B2B Marketing - A Panel Discussion with B2B Leaders”. If you are a B2B marketer looking to reinvent what's possible, check out this virtual event.
Disclosure: I am an investor in the Marketing AI Institute.
CMO at Shopper Approved. Podcast host, keynote speaker, and author of Reputation King. I help websites increase their traffic and conversions using proven behavioral science principles. Cialdini Certified Professional.
4 months ago
Dr. Cialdini was just discussing this at his conference. This is a real danger.
Founder & Financial Advisor at Maven Lane Financial Group | Author of "Your Money Narrative"
5 months ago
Excellent post, David! Sadly, I don't think it will be difficult to sway the masses. It has been interesting to watch it all evolve so quickly. I have noticed that ChatGPT has become more “conversational.” It’s fun to engage in a little back-and-forth.
Tech Pioneer | AI Educator | AI Adoption Specialist | Strategic Practical “hands on” people friendly innovator
5 months ago
I think there are a number of factors at play here. Models reflect patterns of human language and are then fine-tuned, with humans reviewing and aligning responses to conform to a certain “standard”. Then various “system” prompts are used to reduce “bias” and ensure responses stay within “guard rails”. The recent Gemini debacle, to me, was evidence of too much “bias” being added by its creators in a way that distorted the “truth”. The public called BS, and now “adjustments are being made”. These tools certainly have the power to distort the truth and yet deliver great outputs. But just like any human “assistant”, you don’t fire them for producing a less-than-perfect result; you simply validate it before sending it to a client.
The Story Professor / Business Storytelling Consultant | Helping people connect with audiences in high-stakes communication environments
5 months ago
Thanks for sharing this podcast, David. I am biased: I have a healthy skepticism of the people taking center stage at conferences projecting the future of AI. The topic certainly can put “butts in seats.” My skepticism is fueled by the writing and research of Roger Schank (March 12, 1946 - January 29, 2023). Schank was a leading researcher and theorist in AI and cognitive science, known for his pioneering work in areas like natural language processing and case-based reasoning, as well as his focus on applying AI to education and training. He was skeptical of exaggerated claims about AI systems’ current capabilities. He argued that much of what was being marketed as AI was simply pattern matching, not true intelligent behavior. He believed there was a lot of “arrogance and ignorance” outside of a small group of experts when it came to understanding the true limitations of AI systems. Of course, the whole AI world blew up at the time of his death in early 2023. If he were still around, it would have been interesting to hear his sentiments, given the reality of AI systems today.