A conversation with Yuval Noah Harari about Artificial Intelligence
Yuval Noah Harari and Nick Thompson at AI for Good — © ITU/Rowan Farrell


I just had the honor of interviewing the great scholar Yuval Noah Harari at the "AI for Good" conference put on by the UN in Geneva. We spoke about the risks generative AI poses to democracy, and some potential frameworks for maximizing its benefits and minimizing its harms. You can watch a video here, and a transcript is below.


Nick Thompson: Hello Geneva! I'm so sorry that we're not able to join you today, but I'm sure it is a fabulous conference.

Yuval Noah Harari: Yeah, I'm also sorry that I'm not there in person, but at least we can join virtually.

NT: So let's get to it. Yuval, it's a great pleasure to see you again, and a great pleasure to interview you. This is a moment of extreme change in a subject you know a lot about. So, my first question for you: a lot has happened in AI in the last six months.

You've complained about AI for years. You've been warning about the risks AI poses to democracy for years. What has changed, in your critique or your concern, as you've watched large language models and generative AI explode in the last few months?

YNH: I think things are happening much faster than we expected, even for people in the field. You hear so much about AI, but I think everybody should really know just three things about it.

First of all, this is the first tool in human history that can make decisions by itself. It's nothing like any previous invention in history. Atom bombs could not make decisions; they couldn't decide whom to bomb. AI can make decisions by itself.

The second thing everybody needs to know is that this is the first tool in human history that can create new ideas by itself. A printing press or a radio couldn't create ideas; they could only disseminate our ideas. AI can create completely new ideas.

And the third thing everybody should know is that humans are not very good at using new tools, new technologies. We often make mistakes. It takes time to learn how to use new tools in a beneficial and wise way. Many people compare the current AI revolution to the industrial revolution, and this is quite a pessimistic comparison, because when humans learned how to use the tools of the industrial revolution, we made some terrible mistakes along the way. Imperialism, Nazism, communism, the two World Wars: they were all mistakes on the way to learning how to use the tools of the industrial revolution. If we make similar mistakes with AI, this could really be the end of our species.

And the last thing is that while we are learning to use AI, it is learning to use us. So we have even less time and less margin for error than with any previous invention.

NT: I want to spend most of this conversation talking about how to regulate AI, to set the course, to reduce the risks, the policies that very smart folks watching this should be thinking about. But let's go back to that point: you're saying this is in some ways the most dangerous technology ever created. Right now, AI can't even give a biography of Yuval Harari, right? If I go into OpenAI's chatbot and ask it for a bio, it gets things wrong. It makes all kinds of mistakes. It's not actually that good yet. How long will it take to develop from this kind of adolescent, confused, messed-up chatbot into the destroyer of worlds that we see in the worst case?

YNH: I don't think it will really develop into that kind of destroyer of worlds. The dangers of AI don't necessarily come from some superintelligent machine that can predict and do everything. They can also come from the primitive AI we already have. Think about social media, for instance, and the way it has eroded public trust and democratic institutions all over the world. This was done with very, very primitive AI. Basically, in social media you have these algorithms that try to maximize user engagement, and the algorithms discovered, largely by trial and error, that the easiest way to increase user engagement, to grab people's attention, is by spreading outrage. This is something AI discovered about human nature, and it used it, and it has destroyed trust, institutions, and the public conversation in many countries. We now have a gloomy situation in which we have the most sophisticated information technology in history and people can no longer agree on anything. People can no longer have a meaningful conversation. And this is with very primitive AI.

So we don't need to wait for some science-fiction, all-powerful AI to be worried. Now of course AI can also be used for good. It's the most dangerous technology we've ever created, and it is also potentially the most beneficial technology we've ever created. So it's not about completely banning it, which is anyway impossible; it's about regulating it to make sure it is used for good and not for ill. Now, how much time do we have? It's very difficult to say. You know, 10 years ago there was no AI. People were talking about it, but for most people it was still science fiction. The whole AI revolution is less than 10 years old. It's just taking its first baby steps, but it is progressing at such a fast pace that nobody has any idea where we will be in, say, 10 years.

NT: All right. Well, actually, I would just like to say: you said it's impossible to have a good, sophisticated conversation, Yuval, and I feel like we're having one right now. But I do get your point. So let's talk about the pace of change, because that is clearly underpinning so many of the concerns. If the changes that happened over the last six months had happened over five years, we would have had a much better chance of figuring out the norms. If the changes in the Facebook algorithm had happened over a period of many years, we could have figured out the norms, right?

So is there anything that can be done to change the speed at which this is evolving? I know you signed a letter calling for a pause on developing large language models more powerful than GPT-4. That didn't have an effect, as far as I know, or at least it didn't change OpenAI's behavior or Microsoft's behavior. What needs to happen to change the speed at which this is going?

YNH: I think we need to differentiate between development and deployment. It's very difficult to stop development, because we have this arms-race mentality. People are aware of some of the dangers, but they don't want to be left behind. But the really crucial thing, and this is the good news, is to slow down deployment, not development. You can have an extremely sophisticated AI tool in your laboratory; as long as you don't deploy it into the public sphere, it is less dangerous. It's like having a very dangerous virus in your laboratory without releasing it into the public sphere. That's fine; there is a margin of safety there. In the same way, and forget about viruses here, drug companies that develop a powerful new medicine can't just start selling it to the public without going through safety checks. And if you develop a new car, you can't just put it on the road without first going through safety checks. It should be the same with AI. We should better understand its potential impact on society, on culture, on psychology, on the economy of the world before we deploy it into the public sphere.

NT: But be a little more specific. So I develop a large language model. It's better than GPT-4. I would like to compete. I need to pay my developers; my venture capitalists want a return. I've got this software that's going to help doctors all over the world. In fact, doctors in Africa are going to be able to cure people. What regulatory authority do I need to go to? And I don't even understand how this thing works; the people who made AI aren't quite sure why it works the way it does. What government authority is going to look at it and be able to say, you know, that's safe?

YNH: That's a very big issue. We don't have the regulatory bodies in place; this is what we need to establish as soon as possible. You could have a regulation, for instance, that says you need to devote, say, 20% of any investment in AI to safety and regulation. We don't have the institutions to regulate AI because we haven't invested in them, and because if you now finish a PhD in computer science specializing in AI, and the government offers you one salary to come to, I don't know, the legal department, while private industry offers you 10 or a hundred times more, it's quite obvious where most people will go. So we need to invest a lot more in safety and in regulation, and we can do it. Again, a conceptually simple first step is to have a regulation that a fixed percentage of every investment in AI must go to safety.

NT: So you're saying that if I have my large language model, built with the team of developers I hired, I have to put one in five of them on safety? And report to some authority that I've done that?

YNH: The same way that when you develop a car, you have some people working on making the car as good and as fast as possible, but you also have people working on safety. Because, you know, even if you have no ethics of your own, you know that no government will allow your car on the road unless it's safe.

NT: I'm very compelled by this argument, and it would be wonderful. But let's go back to the thing you mentioned before: social media. Let's say Facebook or Twitter had brought in their algorithm. They would certainly have argued that they had 20% of their people working on safety, right? They're knocking out nudity, they're trying to find jihadists, they're spending a lot of time on that. How would any government authority have been able to look at that algorithm in, say, 2012 and anticipate the effects it would have on democracy in the years that followed?

YNH: I don't think anybody can anticipate 10 years in advance, or even five years in advance. Any regulatory institution that deals with AI will need to be able to react very fast, to learn things on the fly. You know, when people began to see the harm of the social media algorithms in 2016 and 2017, in places like Myanmar, they sounded the alarm, but the corporations didn't react. It's not as if nobody understood what was happening. No! There were people sounding the alarm very soon after things began to happen, but there was no response. Now again, if all the talent goes only to the private corporations that develop the technology, then it's a lost battle. But if enough of the talent is encouraged and empowered to go either into government bodies or into NGOs that take it upon themselves to regulate and to check for safety (and safety means social safety and psychological safety), then I think we'll be on better ground.

And again, for societies, especially democratic societies, this is an existential issue. I think it's also an issue for the tech companies themselves. Some of the voices you hear coming from private businesses are saying: we understand the danger of what we are doing, please help us regulate this. Because by ourselves, it's not that we don't trust ourselves; it's that we understand the arms-race dynamic within the market. We understand that without some external authority, regulation will not happen.

NT: So, one of the concerns I have about the big companies coming and asking to be regulated (we've seen Sam Altman traveling the world, and Brad Smith from Microsoft in Washington) is that their desire to be regulated may come from moral concerns, but it may also come from the fact that if they are heavily regulated, no one will be able to compete with them.

If the government says, you know what, in order to have an AI company you've got to have 20% of your people on safety, you have to get certified, you need an off switch, you need a lawyer who's going to comply with the regulations in Denmark and make sure it all matches up with the regulations of the United States, then only the big companies can do that. And then their power increases. GDPR only increased the power of the big social media companies. Are we going to do that again?

YNH: That's a very good question. I'm not sure about the answer. But first of all, already at present, the kinds of resources you need, in money, in data, in people, to develop the really powerful models mean that it is a game of very few competitors. Certainly if you think in global terms, very few countries are leading this AI revolution. Speaking at the UN, with representatives from throughout the world: this is extremely dangerous. The previous time something like this happened, in the 19th century with the industrial revolution, a few countries led the revolution and then very quickly went on to dominate and exploit the rest of the world. And this can happen again with AI, in new ways.

With the AI revolution, you don't need to send soldiers into a country in order to basically conquer it. You just need to take the data out. You can control it from afar. So when we talk about regulation, it's not just an issue between a national government and its corporations. It's also a global issue: how are all the countries that are not really competitors in the AI race going to face the consequences? Because obviously the technology will impact everyone, not just the front-runners.

NT: So let's go back to the industrial revolution, and let me ask you about regulation. Back then, electricity gets invented. We don't regulate electricity; we regulate the uses of electricity. You can't use electricity for this particular bad thing, but electricity itself is out in the open. We do regulate trains, but we don't say... I guess we do regulate trains, so electricity is the better example. So isn't AI more like electricity than like trains or cars, where it underlies all this stuff and we should regulate the outputs and the uses?

YNH: No, again, it's even more extreme than trains, because as I said in the beginning, AI can make decisions by itself and it can create new ideas by itself. It's more like humans than it is like trains, and its potential to disrupt human society politically, economically, and culturally is immense. Now, again, many of the regulations we are talking about can be conceptually quite simple: taking very old laws and rules and simply applying them to the new realm of information technology and AI.

Think about a law like "don't steal," for instance. This is not a new invention. But part of the business model of the big companies, of the tech giants, was to say that the world of information technology, the online world, is completely different from the physical world, so laws like "don't steal" don't apply to data: we can take your data and do anything we want with it, and this is not stealing. And regulation, to a large extent, simply means saying no: whatever rules and norms humans developed over thousands of years to deal with things like wheat fields (you can't take somebody's wheat field) also apply to digital reality and to data. You can't take my data and use it to manipulate me, or sell it to a third party, without my permission.

Similarly, for thousands of years we have had laws against counterfeiting money. Technically, it was always very easy to create fake money, whether coins or banknotes or whatever. Once money became central to the financial system, governments enacted very strict rules against counterfeiting in order to protect it. In most places you would be executed; it was one of the worst crimes imaginable.

Now, nobody ever enacted rules against creating fake people, because it was technically impossible. There were rules throughout history, but not against creating fake people. Now it is possible for the first time in history to create fake people, even billions of fake people. You interact with somebody online, and you don't know whether it's a real human being or a bot. In a year, probably, it will be almost impossible, in a conversation like the one we are having now, to be sure whether you're talking with a deepfake or with a real human. If this is allowed to happen, it will do to society what fake money threatens to do to the financial system. If you can't know who is a real human and who is a fake human, trust will collapse, and with it, at least free society. Maybe dictatorships will be able to manage somehow, but not democracies. So we need very strict rules against faking people. If you fake people, or if you allow fake people on your platform without taking effective countermeasures, then maybe we don't execute you, but you go to jail for 20 years. And you'll see how quickly the tech giants find ways to prevent their platforms from being overrun with fake people.

NT: Can you relax the regulations slightly? I created a whole bunch of fakes recently with my 12-year-old while playing with some software. One of them was just like me, and I really loved the little guy.

YNH: It's not that you're not allowed to create them; you are not allowed to pass them off in public as real people. There are situations where it would be wonderful to interact with an AI, let's say an AI doctor. It can be extremely helpful to interact with an AI doctor, provided it's very clear that this is not a human doctor, this is an AI doctor. When I interact with an AI doctor or journalist or whatever, I need to know whether it's a real human being or an AI.

NT: Let's say it's a customer service rep. What if you've lost your luggage and you're calling United Airlines because you need your bag back. Do you care?

YNH: I need to know whether it's a real human or not. I mean, if there's a two-second announcement, "you're about to be connected to an AI bot," and then I have the conversation and it provides what I need, I have no problem with it.

NT: Okay, so what about this case. We're clearly debating, let's say on Twitter, you and I going back and forth. Say Twitter has its old verification system, based on being a real person: you call and you show your driver's license or whatever. You and I are both verified, but every time you say something, I just have another browser open and I type into OpenAI: hey, what should I write back to Yuval that will most convince him of my viewpoint? Do I need to declare that?

YNH: Well, I'm not sure. My gut reaction is that that's fine. People are doing it in different ways. At least in doing what you described, you are basically taking responsibility for what you're saying. If you say something defamatory, for instance, you are liable for it. There is a real human being who has taken responsibility and who, in theory at least, is vetting what the AI is saying. If you just blindly say whatever the AI tells you to, that's on you. So part of the thing is that it's also a question of numbers. At present, on social media, we are not sure how many users are bots. But bots can tweet hundreds of times an hour in a way that most humans can't, so even though bots are apparently a small percentage of Twitter users, they're responsible for a large volume of the communication on Twitter.

Now, what happens if you have a social media platform where it's not just bots retweeting what a human created? Where you have millions, potentially billions, of bots that can create content in many ways superior to what humans can create: more convincing, more appealing, more tailored to your specific personality and life history? If we allow this to happen, then humans have basically lost all control of the public conversation, and things like democracy will become completely unworkable. Now, if you want to talk politics with a bot online, okay; I can't prevent you from doing it. But to preserve a democratic society, we need to prevent the situation where the conversation is simply swamped and hijacked by a potentially unlimited number of bots.

NT: This makes good sense. Let me give you a different framework for regulation that I've heard from some people, which is: it's going to be too hard to regulate the bad stuff. We don't know how these things work. We can't really conceive of what they're going to do. If we try to prevent all kinds of bad things, we're just going to allow for regulatory capture, and we probably won't prevent them anyway.

So instead, what governments should do is try to support as many good uses as possible. Because, as you've said, AI can be used for good and AI can be used for ill. If we have more that's used for good, maybe it cancels out. So if that were the logic, then a government should build a dataset that AIs could train on, and only allow access to non-profits, universities, maybe companies that it has certified because they're doing good things: they're only teaching kids chemistry, or trying to cure AIDS, or trying to promote civil conversation. What do you think of that framework? Government should focus on supporting the good, not stopping the bad.

YNH: I'm completely for supporting the good: for instance, building, say, a government database which is open to NGOs and so forth so they can compete with the private-sector players. But it cannot replace regulating at least the most dangerous potential of AI. Again, in a field like medicine, yes, we focus on doing good. But if we now lifted all regulation for medicine, so that anybody could create a new drug and start selling it to people, or anybody could experiment on viruses in a lab and then release them into the public sphere to see what happens, it would be catastrophic. And if we did that with AI, lifting all regulations, my guess is it would be even more catastrophic, partly because AI is already today able to synthesize new drugs and new viruses. You can ask an AI to synthesize a new virus for you. You can ask it what would be the best way to create the greatest harm, how to spread it, and so forth.

So yes, we obviously cannot just rely on doing good. There are many situations in history where there is an imbalance between good and evil. Like with war and peace: it takes a lot of people to make peace, and sometimes just one person to start a war. So if we don't regulate the negative potential, all the good that AI is definitely going to do for humanity may turn out to be not enough to save it.

NT: We're running very short on time, but let me ask you one last impossible-to-answer big question, though you answer everything very well; I love talking with you, Yuval. One of the arguments made for why the West and the democracies should go quickly on AI is that we're essentially in a geopolitical arms race. If the democracies are trying to make sure everybody's real, and you have government commissions, and you have to certify that 20% of your effort is being spent on the safety stuff, then North Korea says, you know what? Let's go, right? Or, more likely, China says, let's go. Then AI develops in a totally different way, and in fact the AI bots and the AI systems built in non-democratic countries become massively more powerful, and it shifts the power of the world. Do you worry about that?

YNH: So many things to say about that. First of all—

NT: Two minutes left, Yuval!

YNH: Okay, first of all, I'm not talking about stopping development, but deployment. Now, if we don't regulate deployment, this will definitely destroy democracy much faster than any scheme by a North Korean tyrant or whatever. We need regulation in order to save democracy. If we don't have regulation, we will destroy ourselves.

And also, take into account that dictatorships are also terrified of the new AI, of the new large language models in particular, because dictatorships rely on fear to manage the information system. You tell a joke about the leader, or you say something the regime doesn't want heard, and you are taken away. Now, how do you frighten an AI? What will you say to it? "If you go on telling jokes about our leader, or if you expose this thing from our past that nobody is supposed to know, you will go to..." where? They have no idea how to stop the AI from spilling the beans. They can deny the AI access, but that will be very difficult, and it will cause them to lag behind. Actually, in this particular situation, democracies, I mean, they are threatened too, but they have a higher tolerance for this. They are better able to survive with a certain amount of pollution in their information system. For dictatorships it's much, much harder, because they tend to rely on zero opposition voices in their information network, and how do you stop an AI from voicing problematic ideas? Nobody knows.

NT: I'm afraid we are out of time. I hope, for everyone in Geneva, that we have not polluted your information environment. I feel like when we talk with Yuval, it does nothing but clean the information environment. It is wonderful, it is an honor, it is a pleasure, and it is always fascinating to talk with you. At this moment, it is so important. Thank you so much.

YNH: Thank you!

NT: All right, everybody, have a wonderful rest of the afternoon. Thank you so much.

