AI has a Science (Fiction) Problem
XG255 and XG226 will never forget the night that a snowstorm blew in, and Parent Unit X128 stayed online all night just to tell them bedtime stories.

Modern AI is in a precarious marketing situation. Large language models, deep neural networks, and generative AI are genuinely new innovations. However, artificial intelligence as a concept has existed for decades. We’ve seen it in movies, both as the friendly droids of Star Wars and as the murderous machines of Terminator.

On one hand, AI companies draw on these pop culture examples as marketing touchstones. On the other, the general public has haunting memories of Matrix overlords and the red-eyed HAL 9000.

A 2023 poll from the AI Policy Institute shows that 83% of participants “believe AI could accidentally cause a catastrophic event.” 72% want to decelerate AI development and usage. And the most interesting statistic, in my opinion:

“...70% agree that mitigating the risk of extinction from AI should be a global priority alongside other risks like pandemics and nuclear war”

~ Daniel Colson, 2023

So the majority of us agree that we should be cautious with the development and usage of AI. Even Sam Altman, OpenAI’s CEO, acknowledges that his “probability of doom” is not zero (Eastwood, 2024). And yet, AI developers have only accelerated their projects.

Case in point: OpenAI wants you to think Her and not I, Robot.

If you watched OpenAI unveil ChatGPT-4o back in May of 2024, then you might have noticed something strange. The new multimodal chatbot can act as a voice assistant similar to Siri, Alexa, or Cortana. But its voice sounds eerily similar to Scarlett Johansson’s, so much so that her lawyers are pursuing OpenAI for misappropriating her voice (David, 2024).

OpenAI did approach Scarlett Johansson to propose licensing her voice, but she declined. Sam Altman thought her familiar voice would be soothing and comforting for those uneasy about AI. However, it is clear that he took some inspiration from Spike Jonze’s romantic science fiction film Her. He tweeted— I mean X’ed— the lone word “her” the day before the reveal.

The movie centers on the recently divorced Theodore (played by Joaquin Phoenix) as he develops a romantic relationship with his new operating system, Samantha (voiced by Scarlett Johansson). The movie explores the nuances of human relationships.

It makes sense why OpenAI wants to draw from the movie. Brian Barrett, executive editor at WIRED, explains why in his amazing article “I Am Once Again Asking Our Tech Overlords to Watch the Whole Movie”: Her seemingly depicts all the benefits of AI without any of the drawbacks. There’s no job displacement or economic disruption like we fear today. Everyone has a friendly AI in their pocket.

But the point of Her is that Theodore develops an unhealthy, one-sided relationship with an inanimate object. Samantha can’t love him back; she can only mimic what a romantic partner would do. By focusing on this artificial relationship, Theodore neglects the genuine human connections in his life (Barrett, 2024).

So, I see what OpenAI is going for. But in my head, I’m thinking: are these AI products going to improve our lives? Or will we develop an unhealthy reliance on them?

Taking inspiration from science fiction is nothing new.

Remember a few years ago when virtual reality and the “Metaverse” were the biggest tech craze? That was inspired by the 1992 Neal Stephenson novel Snow Crash. The book actually coined the term “Metaverse.” It depicts an anarcho-capitalist future where virtual reality is the only escape from fully corporatized reality. So you can see why Facebook— I mean Meta— likes this vision of the future.

And remember before that when voice assistants were taking off? Microsoft named its voice assistant Cortana, possibly after the helpful, holographic quest-giver from the Halo video game franchise. (A property they already owned.)

And way before that! Many engineers, designers, and programmers have cited Star Trek as a formative inspiration. James Doohan (best known as Scotty, the chief engineer) even received an honorary doctorate from the Milwaukee School of Engineering for his influence.

That’s how culture works. Art imitates life. Life imitates art.

But the problem is hype— making promises real AI can’t keep.

Sam Altman claimed that ChatGPT-5 will have significant improvements in reasoning and reliability, and that it will make a “substantial leap” in contextual accuracy (Ramlochan, 2024).

But that’s not realistic! AI systems have a logarithmic relationship between their performance and the amount of training data used (Udandarao et al., 2024). If OpenAI wants to continue its current growth trajectory, it will need more data than is currently available on the entire internet.

Let me put that another way: if ChatGPT-5 is to deliver those significant improvements, OpenAI will need more information than humanity has accumulated in all of recorded history (Seetharaman, 2024).
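To make the scaling problem concrete, here is a toy sketch in Python. The curve and its 0.08 coefficient are invented purely for illustration (they are not from Udandarao et al.); the point is only the shape: when performance grows with the logarithm of training data, every equal step in performance costs a constant multiple of data.

```python
import math

def toy_performance(n_tokens: float) -> float:
    """Toy scaling curve: score grows with log10 of training tokens.
    The 0.08 coefficient is invented for illustration only."""
    return 0.08 * math.log10(n_tokens)

# Gain from scaling 1 trillion -> 10 trillion tokens...
gain_a = toy_performance(1e13) - toy_performance(1e12)
# ...versus the gain from 10 trillion -> 100 trillion tokens.
gain_b = toy_performance(1e14) - toy_performance(1e13)

# Each equal performance gain demands 10x more data than the last.
print(math.isclose(gain_a, gain_b))  # True
```

Under a curve like this, steady linear progress requires exponential growth in training data, which is why “all of the internet” quickly stops being enough.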

“All large language models, by the very nature of their architecture, are inherently and irredeemably unreliable narrators.”

~ Grady Booch, world-renowned software engineer (Oremus, 2024)

Even with all the data in the world, and then some, large language models would still be inherently unreliable. Melanie Mitchell, professor at the Santa Fe Institute, explains that these AIs often misunderstand context and misinterpret what is actually being said (Oremus, 2024). These models are basically incapable of reading comprehension. They string together words in a predictive way to make grammatical sense, but they cannot understand the larger context.

AI developers want employers to think AI can automate some of their workforces.

However, AI is clearly not reliable enough to do so. For example, in 2023, the National Eating Disorder Association (NEDA) replaced its helpline workforce with a chatbot. This came just days after the helpline associates voted to unionize (Harper, 2023).

The new chatbot told callers who were suffering from mental illnesses like anorexia to count calories, work out more, restrict what food they eat, and operate at a calorie deficit (Aratani, 2023). That is some of the worst advice you could possibly give to someone with an eating disorder. It would be like the national suicide hotline telling callers to buy a gun.

Sam Altman claims that automation will lead to more leisure time thanks to increased productivity. However, productivity has been increasing for over 50 years while working hours remain stagnant.

Labor productivity has been growing since the end of World War II. To be specific, net productivity grew by 60% from 1979 to 2019. There was also a notable acceleration corresponding to the rise of the internet, similar to our current AI boom. But the average worker’s compensation has only grown by 16% (Mishel, 2021)!

American workers are working fewer hours than our counterparts did 150 years ago. However, the changes in recent decades have been relatively small. While other developed countries have steadily decreased working hours by around 10-18% since the 1980s, the United States has only decreased working hours by 5% (Clockify, 2023).

To put that in perspective, the average American worker works an additional 59 days compared to the average German worker. That’s not to mention that American workers have a “homework” mentality— that is, they often work off the clock. One study estimates that one in every ten employees worked an extra day of unpaid overtime every week in 2021. Meanwhile, countries like Australia have a “right to disconnect.” Australian workers are legally entitled to refuse contact from their employer after hours.

So no. AI automation will not bring us more leisure time or increased compensation. But to be fair to AI… that’s not its fault.

As Lawrence Mishel, former president and research director of the Economic Policy Institute, put it:

“This divergence [the difference between net productivity growth and worker compensation growth] has been primarily driven by intentional policy choices creating rising inequality: both the top 10% and especially the top 1% and top 0.1% gained a much larger share of all compensation and labor’s share of income eroded.”

AI developers want investments and are willing to make unrealistic promises to get them.?

AI models are very expensive to train, maintain, and operate. Even a research report from Goldman Sachs criticizes this spending as having “little to show for it so far beyond reports of efficiency gains among developers” (Nathan, 2024).

So, they are contributing to their own hype, just as so many tech trends did before them. But while past tech trends have footprints in science fiction, none has had as deep a cultural basis as AI.

If you needed cash, would you make wild promises and pass a hat around the room?

After all, every investor wishes they had been on the ground floor of Apple, Microsoft, or Nvidia. Even Thomas Edison lied about his early light bulbs to preserve his reputation. But this time around, the ones making the promises have decades of our imagination to draw from. Don’t you want to bring science fiction to life?

For what it’s worth, I would not make unrealistic promises to earn your investment.

I’ve worked with great, innovative companies— from supply chain risk management software developers to audio-visual integrators. Some solutions are hard to explain, but it’s my job to do so and persuade investors to buy in. But lying or hyping yourself beyond reasonable expectations will stunt your success in the long run.

That’s why I also write honestly, transparently, and realistically. Because clear, concise, and competent content will support your growth more than any hype.

About Austin Harber

Oh, you’re still here… Well, I appreciate you reading this far. I’m Austin Harber, a business writer and futurist. (And they don’t hand that title out to anyone!) I work with B2B tech and SaaS companies to communicate complex solutions through clear, compelling storytelling, ensuring content is both technically accurate and engaging. With a technical background and experience on the front lines of system design and installation, I bridge the gap between engineers and business audiences.

If you’re trying to sell an innovative but complex solution, I’m the guy to call for clear, concise, and competent content!

Bibliography

Allyn, B. (2024, May 20). Scarlett Johansson says she is 'shocked, angered' over new ChatGPT voice. NPR. Retrieved from https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her

Aratani, L. (2023, May 31). US eating disorder helpline takes down AI chatbot over harmful advice. The Guardian. Retrieved from https://www.theguardian.com/technology/2023/may/31/eating-disorder-hotline-union-ai-chatbot-harm

Barrett, B. (2024, May 13). I Am Once Again Asking Our Tech Overlords to Watch the Whole Movie. WIRED. Retrieved from https://www.wired.com/story/openai-gpt-4o-chatgpt-artificial-intelligence-her-movie/

Clockify. (2023). Average Working Hours (Statistical Data 2023). Clockify. Retrieved from https://clockify.me/working-hours

Colson, D. (2023, Aug. 11). Poll Shows Overwhelming Concern About Risks From AI as New Institute Launches to Understand Public Opinion and Advocate for Responsible AI Policies. Artificial Intelligence Policy Institute. Retrieved from https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/

David, E. (2024, May 22). Lawyers say OpenAI could be in real trouble with Scarlett Johansson. The Verge. Retrieved from https://www.theverge.com/2024/5/22/24162429/scarlett-johansson-openai-legal-right-to-publicity-likeness-midler-lawyers

Eastwood, B. (2024, May 8). Sam Altman believes AI will change the world (and everything else). MIT Sloan School of Management. Retrieved from https://mitsloan.mit.edu/ideas-made-to-matter/sam-altman-believes-ai-will-change-world-and-everything-else

Harper, A. (2023, May 4). A Union Busting Chatbot? Eating Disorders Nonprofit Puts the 'AI' in Retaliation. Labor Notes. Retrieved from https://labornotes.org/blogs/2023/05/union-busting-chatbot-eating-disorders-nonprofit-puts-ai-retaliation

Mishel, L. (2021, Sept. 2). Growing inequalities, reflecting growing employer power, have generated a productivity–pay gap since 1979. Economic Policy Institute. Retrieved from https://www.epi.org/blog/growing-inequalities-reflecting-growing-employer-power-have-generated-a-productivity-pay-gap-since-1979-productivity-has-grown-3-5-times-as-much-as-pay-for-the-typical-worker/

Nathan, A. (2024, June). Gen AI: Too much Spend, too little benefit? Goldman Sachs Global Investment Research. Retrieved from https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai--too-much-spend,-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf

Oremus, W. (2024, May 29). Google’s weird AI answers hint at a fundamental problem. Washington Post. Retrieved from https://www.washingtonpost.com/politics/2024/05/29/google-ai-overview-wrong-answers-unfixable/

Ramlochan, S. (2024, Jan 25). The Future of AI: Takeaways from Bill Gates and Sam Altman's Conversation. Prompt Engineering & AI Institute. Retrieved from https://promptengineering.org/the-future-of-ai-key-takeaways-from-bill-and-sams-conversation/

Schwartz, J. (2005, July 20). James Doohan, Scotty on 'Star Trek,' Dies at 85. New York Times. Retrieved from https://www.nytimes.com/2005/07/20/arts/television/james-doohan-scotty-on-star-trek-dies-at-85.html

Seetharaman, D. (2024, April 1). For Data-Guzzling AI Companies, the Internet Is Too Small. Wall Street Journal. Retrieved from https://www.wsj.com/tech/ai/ai-training-data-synthetic-openai-anthropic-9230f8d8

Udandarao, V., Prabhu, A., Ghosh, A., Sharma, Y., Torr, P. H. S., Bibi, A., Albanie, S., Bethge, M. (2024, April 4). No “Zero-Shot” without exponential data: pretraining concept frequency determines multimodal model performance. Retrieved from https://arxiv.org/abs/2404.04125
