Book Review: AI Snake Oil

I've been hearing buzz about the book "AI Snake Oil", and as someone whose professional life now essentially revolves around Generative AI theory and practice, I felt it was prudent to give it a read. Perhaps because my encounters with commentary on the book had been only cursory, I was expecting an argument that Generative AI is not nearly as big a deal as the press and tech companies would have us believe, and that we're just in a garden-variety hype cycle.

Having read the book, I found thoughtful arguments and little to disagree with. My main critique turned out to be the title – while “snake oil” in the AI space is discussed quite a bit, it isn’t the core theme of the book. "AI Snake Oil" has nuance and balance, though I acknowledge that "AI: Nuance & Balance" is not the flashiest book title.

Early on, the authors make a critical distinction that I try to make all the time: generally speaking, there are two main flavors of AI, and they are quite different from each other. The authors call the first "predictive" AI, the kind that has been powering algorithms for quite a few years. In my presentations I call this "deterministic" AI, but their term is fine with me – I could even be convinced it's the better one, since mine sounds more abstract. In any case, the authors go into detail about how these models have evolved and been used. And they come down pretty hard on the whole category.

The book flat-out states that predictive AI doesn't work and has caused many harms when applied in real-world contexts such as law enforcement. I haven't done much research into this area, but their argument is convincing. I've had more exposure to predictive models being used in the advertising world – but since the stakes are so much lower there than in criminal justice, "probably right" is a generally acceptable level of accuracy. As such, I don't think the entirety of predictive AI is snake oil – product and content recommendations perform better than a coin flip in my experience. But to the extent these technologies are sold and applied as infallible, objective frameworks upon which the course of people's lives is decided, I am aligned with the authors' deep concerns.

Moving on to Generative AI, the authors immediately recognize that the technology is powerful and the advancements are real. As noted above, given the book's title I was expecting an argument against this sentiment, but was pleasantly surprised to find the opposite. They lay out a good history of how the technology came to be, including some in-the-weeds historical detail that I personally appreciated. I was especially interested in the way they explained transformer models, using examples such as GPT-2 (which I'd never had exposure to) to illustrate steady improvement.

A framework they lay out is that the Generative AI space has genuinely valuable offerings, other offerings that are oversold in their capability claims, and still others that are snake oil. I can't quibble with this. As a practitioner I reflexively sniff out and pay little attention to the latter two categories, but viewing the landscape through this lens can be helpful for many others, such as corporate leadership and procurement decision-makers. Perhaps with all my attention trained on pushing frontier models and legitimately powerful tooling to do new and exciting things, I have a blind spot for what the buzzword noise does to the ecosystem at large and to the way most people view GenAI.

The book's objective of demystifying this noise is admirable, and their framework makes sense. The tricky thing, though, is that there are amazing claims in Generative AI that actually are true, and they exist alongside empty hype that sounds just as magical. A "too good to be true" heuristic doesn't quite hold, since we now have capabilities that were fantastical thinking less than a year ago. This dynamic makes it all too easy for empty promises to slip onto our collective radar, setting the stage for misplaced trust.

But buying into overblown claims is just one pitfall. A broader area where the authors apply nuance particularly effectively is systemic AI risk. They cast substantial doubt on AGI alarmism – the "machines vs. humanity" scenarios that many fear. This aligns with where I am on this topic – I don't know for sure what's going to happen, but an apocalypse (of this type) seems unlikely to me. The book does, however, go into great depth about more near-term dangers such as misinformation, deepfakes and other harms fueled by bad actors rather than by the technology itself. I too am much more concerned about human misuse of GenAI than about emergent autonomous phenomena.

Social media algorithms – moderation processes in particular – are a bit of a detour, but a worthwhile discussion. There are overlapping ethical concerns, such as the wellbeing of the human labor force that both labels AI training data and moderates social content. Raising the topic in this context shines a light on the gap between AI models and real-time human experience. You can shrink this gap in amazing ways but never close it – humans need to be in the loop.

?"AI Snake Oil" by Arvind Narayanan and Sayash Kapoor is a worthwhile book for those who want to understand AI with more clarity and equip themselves to navigate the hype, jargon and sales pitches. It is also a good read for those concerned about AI risks, ethics, and thorny problems for which we may never have good answers. The title brought me in the door ready to be skeptical, but I found little to dispute – and learned some interesting AI history along the way.

Brad Berens, Ph.D.

Strategist, Researcher, Editor, Thought Leadership and Event Expert, Keynote Speaker, Newsletter Creator, and Writer with a Global POV.

2 months ago

GREAT PIECE!

J F Grossen

Customer, Employee + Operator Experience x Travel + Hospitality

2 months ago

My GPT's review of your review of the book... "Andy Maskin’s review of *AI Snake Oil* offers a clear and concise exploration of the book’s arguments, but it also subtly challenges readers to consider the broader implications of AI skepticism. While Maskin does an excellent job summarizing the book's call for critical thinking about AI’s real-world applications, his review might benefit from deeper engagement with the underlying societal and ethical questions raised. Ultimately, the review succeeds in sparking curiosity about the book, though it leaves room for further reflection on the balance between skepticism and optimism in the AI discourse." How far are we going to go with this?

Agree on that title! Would love your thoughts on the new course just released… Conversational AI: Ensuring Compliance and Mitigating Risk. Free and available here through Linux Foundation Education: https://hubs.la/Q02_BV-30
