What AI Can Do, What It Can't, and How to Tell the Difference: The Interview, Part 1
Rich Heckelmann
Effectively Bridging Technology Development, Marketing and Sales as Product Portfolio Leader, Pragmatic Marketing Expert, AI Product Management, Product Owner, Scrum Master, Operations, QA and Marketing AI Strategist.
The term "snake oil salesman" originated in the 1800's when cowboy Clark Stanley began selling his "snake oil" product, promising ludicrous medical curing abilities when taken. Sound familiar?
Stanley took a live snake put it into boiling water, mixed it up, and told the crowd that was a cure all.
But what he sold to the crowd, Stanley's Miracle Snake Oil( or what CEO’s buy AI for thinking the AI snake oil will double employee productivity ) actually contained mineral oil, beef fat, red pepper and Turpentine, and didn't contain any snake oil at all, the same way CEO’s are not getting what they bought AI for either.
Today, many promoters of AI systems are like modern-day snake oil salesmen. Two Princeton scientists, Arvind Narayanan and Sayash Kapoor, argue that much of AI is snake oil that does not and cannot work. They distinguish it from what they see as the real promise of AI in the long run, and they shine a spotlight on underlying problems with today's AI systems and on many of the concerns we have about AI's role in society.
Their conclusion: they are not okay with leaving the future of AI up to the people currently in charge. They recognize that this requires constant vigilance, not just about specific snake oil products, but also about the broad ways in which AI could shape society for the worse and create systemic risks. The book is primarily about the risks we should avoid, but the perspective that led to it is one of optimism.
What follows are excerpts from the Tech Policy Press interview with the two scientists, conducted by Justin Hendrix.
Part 2 will be an Adam Conover interview with the same two scientists.
Broken Companies Use Broken Systems
Arvind: Broken AI is appealing to broken institutions a lot of the time. (Applicant tracking systems, for instance, are a perfect example.) Automation is so appealing in hiring because companies are getting hundreds, perhaps thousands, of applications per position. That points to something broken in the process, but it then seems appealing to try to filter through all of those candidates with AI. And even if AI in those contexts is not doing much, and we think a lot of these AI hiring tools are just elaborate random number generators, from the perspective of an HR department that is swimming in a sea of applications, it has done the job for them. It gives them some excuse to say, "We've gotten it down to these ten candidates." So there are often underlying reasons why we think broken AI gets adopted.
Social Media Moderation
Justin: You start off pointing to Mark Zuckerberg back in 2018: he's in front of Congress, trying to explain away the various harms of social media, and he's bullish on AI as the answer to a lot of these problems. Why can't AI fix social media?
Arvind: Companies have looked to automated tools simply because the scale is so vast, and without some amount of automation the system simply won't work. But when we started looking at the reasons why AI hasn't obviated the problem so far, and at what might change in the future, we quickly realized that the limitations were not about how well the technology works; they were really about what we mean by content moderation and what we want out of it.
But for me in particular, this crystallized when I was reading Tarleton Gillespie's book on content moderation, where he gives an example from, I want to say, around 2016 or 2017. There was a controversy on Facebook when a journalistic organization posted the so-called "Napalm Girl" image. Facebook took the image down and there was an outcry. People initially assumed this was an enforcement error, perhaps an error of automated enforcement: a clumsy bot that can only see this as nudity, because it's a classifier that has been trained to do so and is unaware of the historical significance of the image. But what Gillespie points out is that this was absolutely not the case. Not only had this type of image been discussed internally by Facebook's policymakers in terms of what they can and can't have on their platform, this specific image had been discussed and was part of the moderator training materials. Facebook had decided that, despite its historical significance and for whatever countervailing reasons, this image could not stay on the platform. And I thought this particular misunderstanding that a lot of people had, including myself back when I first heard of this controversy, really captures what we misunderstand about social media content moderation: it's a hard problem not because the individual instances are hard, but because as a society we can't agree about what we want out of content moderation.
I think in some sense the reasons why we'll have to rely on some amount of human intervention are the same as the broader argument of the book: content moderation is not a monolith. Within content moderation, we might have specific tasks, things like detecting nudity in an image, or detecting whether an image contains a certain offensive hate speech symbol, that might be very easily solvable using AI. In fact, we are quite optimistic that AI will continue to play more and more of a role in doing this type of detection work. But the place where human intervention becomes necessary is in drawing the line of what constitutes acceptable speech for a platform.
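The distinction Arvind is drawing, that detection can be automated while line-drawing cannot, can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any platform's real pipeline: the labels, thresholds, and "newsworthy exception" rule below are invented. The classifier only answers "what is in this image?"; the part that decides cases like "Napalm Girl" is the human-written policy table.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "nudity" or "hate_symbol"
    confidence: float   # hypothetical classifier score in [0, 1]

def detect(image_bytes: bytes) -> list[Detection]:
    """Stand-in for a trained classifier; a real system would run a model here."""
    return [Detection("nudity", 0.97)]  # hypothetical output for illustration

# The policy layer: thresholds and exceptions are written by humans, not learned.
POLICY = {
    "nudity":      {"threshold": 0.90, "newsworthy_exception": False},
    "hate_symbol": {"threshold": 0.80, "newsworthy_exception": True},
}

def moderate(image_bytes: bytes, newsworthy: bool) -> str:
    """Detection is automated; deciding where to draw the line is not."""
    for hit in detect(image_bytes):
        rule = POLICY.get(hit.label)
        if rule and hit.confidence >= rule["threshold"]:
            if newsworthy and rule["newsworthy_exception"]:
                return "escalate to human review"
            return "remove"
    return "keep"

# Under this toy policy, a newsworthy photo flagged as nudity is still removed,
# because the humans who wrote the policy table decided it should be.
print(moderate(b"<image bytes>", newsworthy=True))  # -> remove
```

Better classifiers would improve the `detect` step, but they cannot settle what belongs in the policy table; that is the part the book argues stays a human, and political, decision.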
AI Policies
Justin: What's going on in Washington right now when it comes to AI? Is AI helping divert people away from what should be the focus of political leaders?
Arvind: When someone comes along and tells you, "I've got an abundance machine, and it's going to sort out the environment, and it's going to sort out poverty, and it's going to sort out access to healthcare or mental health or whatever," it sounds pretty good. There's someone running for mayor who thinks a chatbot is going to make these decisions: it's going to be unbiased, and so it can avoid all of these messy political disputes that we have. But to think that we can simply eliminate politics and have this neutral arbiter is to completely misunderstand the problem that we're confronting. Reducing this social problem to a technical problem might seem appealing in the moment, but it's ultimately not going to work. And I think a lot of the vision of tech that is being sold to policymakers is just a more nuanced version of this basic misunderstanding.
Misinformation and Putting AI Back in the Bottle
Arvind: And there it's not going to be easy, or even possible, to reduce it to a technical problem. Similarly with the dangers of AI: so many of these deep problems that we have in our society, whether it's misinformation or something else. Really, a better way of looking at the misinformation problem is as a lack of trust. The press is supposed to be the institution that helps us sort truth from falsehood. When we say "the problem of misinformation," it's not really the fact that there are some bot farms spewing misinformation at us. That's not the problem. The problem we should be talking about is what to do about the decline of trust in the press. People have lost sight of that and are treating this as an AI problem: oh, what do we do about AI generating misinformation? Which, to us, is completely missing the point. Instead of dealing with the difficult institutional problem that we should tackle, it treats this as a technology problem and then thinks about how to put AI back in the box. That's not only not going to work, but it's distracting us from the hard work that we need to do.
Sayash: In some sense, it is understandable why people are thinking about putting AI back in the bottle for a lot of these societal harms, right? These companies are in some cases billion-dollar, perhaps trillion-dollar companies that are spending billions of dollars training these models. And so they should bear what is seen as their responsibility when it comes to these harms. I think that's why solutions like watermarking the outputs of AI-generated text to deal with misinformation have proven to be so effective in policy circles. We've seen a number of policy commitments, most notably the voluntary commitments to the White House, that involve watermarking, ostensibly as a way to reduce misinformation. But if you look at it technically, there's no way in which watermarking works to curb misinformation, even in a world where misinformation were purely a technical problem, because attackers can easily circumvent most watermarking schemes.
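To make the circumvention point concrete, here is a minimal toy sketch, loosely in the spirit of the "green list" statistical watermarks proposed in the research literature, and not any vendor's actual scheme. The vocabulary, hashing rule, and chance level below are all invented for illustration: the generator biases each word toward a pseudo-random "green" subset keyed to the previous word, and a detector counts green words. A paraphrase, simulated crudely here by reordering, destroys that word-to-word signal.

```python
import hashlib
import random

# Toy vocabulary; a real generator would work over a model's full token set.
VOCAB = ["the", "a", "model", "system", "data", "learns", "predicts",
         "output", "input", "result", "method", "approach", "uses", "shows"]

def green_list(prev_word: str, fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly mark half the vocabulary 'green', keyed to the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(n_words: int = 40, seed: int = 0) -> str:
    """Toy 'generator' that always picks its next word from the green list."""
    rng = random.Random(seed)
    words = ["the"]
    for _ in range(n_words - 1):
        words.append(rng.choice(sorted(green_list(words[-1]))))
    return " ".join(words)

def watermark_score(text: str) -> float:
    """Fraction of words that are 'green' given their predecessor (chance is ~0.5)."""
    words = text.lower().split()
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev))
    return hits / max(len(words) - 1, 1)

original = generate_watermarked()
# Crude stand-in for a paraphrasing attack: reordering (like rewording) breaks
# the predecessor-keyed statistical signal the detector relies on.
rewritten = " ".join(random.Random(1).sample(original.split(), len(original.split())))

print(f"watermarked text score: {watermark_score(original):.2f}")   # 1.00 by construction
print(f"paraphrased text score: {watermark_score(rewritten):.2f}")  # near the ~0.50 chance level
```

Real proposals are statistically far more sophisticated than this toy, but the issue Sayash describes is the same: a signal that lives only in the statistics of the text can be washed out by rewriting it.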
The Silicon Valley Snake Oil Salesman
Justin: Do you think there's some hope that we can break the grip of Silicon Valley's sales pitch, essentially, when it comes to tech and its role in society? How do we end up with something more salient than what's on offer?
Sayash: I think the basic answer is that, in this time period right now, when AI is at somewhat of a societal tipping point in terms of its adoption and diffusion across society, we feel that all of us have a lot of agency. We have agency in how AI is built and what we use it for. And more importantly, it's not just the two of us at Princeton, or the few of us in DC and policy circles who are thinking about AI day to day; we feel as if a large number of people around the world, most people perhaps, have some agency in how they use AI and what shape AI takes in the future.
And I think that is what primarily inspires, at least, my hope for this future. We've seen time and time again how people have resisted harmful applications of AI in their communities.
Arvind: We're not calling for resisting AI. That's not what the book is about. But through all of that, we recognize that doing this requires constant vigilance, not just about specific snake oil products, but also about the broad ways in which AI could shape society for the worse, create systemic risks, and that sort of thing. So the book is primarily about the risks that we should avoid, but the perspective that led to the book is definitely one of optimism.
Part 2 Tomorrow
Sources:
Justin Hendrix, Tech Policy Press interview: "AI Snake Oil: Separating Hype from Reality" | TechPolicy.Press