In conversation with the Godfather of AI - Collision 2023
Geoffrey Hinton and Nick Thompson on Centre Stage during day two of Collision 2023 at Enercare Centre in Toronto, Canada. Photo by Ramsey Cardy/Collision via Sportsfile

This week, I had the honor of interviewing Geoff Hinton, known as the 'godfather of AI' because of his pioneering work on neural networks. In the past year, he has become quite concerned about the potential risks of AI. We spoke about where the technology is going, how he thinks we can best mitigate the risk, and the professional advice he gave my children backstage.

You may find a transcript of our conversation below.

--

Nick Thompson: What an incredible pleasure to be here with Geoffrey Hinton, one of the great minds on one of the great issues of our time. A man who helped create artificial intelligence, was at the center of nearly every revolution in it, and now has become perhaps the most articulate critic of where we're going. Such an honor to be on stage with you.

Geoffrey Hinton: Thank you.?

NT: He's earned the moniker of godfather of AI. One of the things that AI has traditionally had problems with is humor. I asked AI if it could come up with a joke about the godfather of AI. And it actually wasn't that bad. It said, "He gave AI an offer it couldn't refuse: neural networks." It's not bad.

GH: Okay, that's not bad.

NT: It's good for AI. So let's begin with that. What I want to do in this conversation is very briefly, step a little back into your foundational work, then go to where we are today, and then talk about the future.

So when you're designing neural networks and building computer systems that work like the human brain and learn like the human brain, and everybody else is saying, "Geoff, this is not going to work," you push ahead. And do you push ahead because you know that this is the best way to train computer systems? Or do you do it for more spiritual reasons, that you want to make a machine that is like us?

GH: I do it because the brain has to work somehow, and it sure as hell doesn't work by manipulating symbolic expressions explicitly. And so something like neural nets had to work. Also, von Neumann and Turing believed that, so that's a good start.

NT: So you're doing it because you think it's the best way forward?

GH: Yes, in the long run, the best way forward.

NT: Because that decision has profound effects down the line. Okay, so you do that. You start building neural nets, you push forward, and they become better than humans at certain limited tasks, right? At image recognition, at translation, some chemical work. I interviewed you in 2019 at Google IO, and you said that it would be a long time before they could match us in reasoning. And that's the big change that's happened over the last four years, right?

GH: They still can't match us, but they're getting close.

NT: And how close are they getting and why?

GH: It's the big language models that are getting close, and I don't really understand why they can do it, but they can do little bits of reasoning. So my favorite example is I asked GPT-4 a puzzle that was given to me by a symbolic AI guy who thought it wouldn't be able to do it. I made the puzzle more difficult, and it could still do it.

And the puzzle was: the rooms in my house are painted blue or yellow or white. Yellow paint fades to white within a year. In two years' time, I want them all to be white. What should I do and why? And it says you should paint the blue rooms white. And then it says you should do that because blue won't fade to white. And it says you don't need to paint the yellow rooms because they will fade to white. So it knew what I should do and it knew why. And I was surprised that it could do that much reasoning already.

NT: And it's kind of an amazing example because when people critique these systems or they say they're not going to do much, they say they're mad libs, they're just word completion. But that is not word completion. To you, is that thinking?

GH: Yeah, that's thinking. And when people say it's just autocomplete, a lot is hiding in that word "just." If you think about what it takes to predict the next word, you have to understand what's been said to be really good at predicting the next word. So people say it's just autocomplete or it's just statistics. Now, there's a sense in which it is just statistics, but in that sense everything's just statistics. It's not the sense most people think of, where statistics means keeping counts of how many times this combination of words occurred and how many times that combination occurred. It's not like that at all. It's inventing features and interactions between features to explain what comes next.
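
[Aside for technical readers: one way to see the distinction Hinton is drawing is to compare a model that literally counts word combinations with one that learns feature vectors for words. The Python sketch below is purely illustrative; the toy corpus, dimensions, and untrained weights are my own invention, not anything Hinton or any particular lab uses.]

```python
from collections import Counter, defaultdict
import numpy as np

# Toy corpus; purely illustrative.
corpus = "the rooms are blue the rooms are yellow the yellow paint fades to white".split()

# The naive sense of "just statistics": keep counts of which word follows which.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_by_counting(prev_word):
    """Return the continuation seen most often after prev_word, or None."""
    counts = bigram_counts.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_by_counting("rooms"))  # 'are'

# The sense Hinton means: each word gets a learned feature vector, and the
# interactions between features (here a single matrix product; in a real LLM,
# many layers of a deep network) decide what comes next.
rng = np.random.default_rng(0)
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
dim = 8
embeddings = 0.1 * rng.normal(size=(len(vocab), dim))      # word features
output_weights = 0.1 * rng.normal(size=(dim, len(vocab)))  # feature interactions

def predict_by_features(prev_word):
    """Score every vocabulary word from the previous word's feature vector."""
    scores = embeddings[index[prev_word]] @ output_weights
    return vocab[int(np.argmax(scores))]

# Untrained, this predicts noise; trained by gradient descent on next-word
# prediction, the features come to encode whatever regularities help.
print(predict_by_features("rooms"))
```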

NT: Okay, so if it's just statistics and everything is just statistics, is there anything that we can do—obviously it's not humor, maybe it's not reasoning—is there anything that we can do that a sufficiently well-trained large language model with a sufficient number of parameters and a sufficient amount of compute could not do in the future?

GH: If the model is also trained on vision and picking things up and so on, then no.

NT: But is there anything that we can think of and any way we can think in any cognitive process that the machines will not be able to replicate?

GH: We're just a machine. We're a wonderful, incredibly complicated machine, but we're just a big neural net. And there's no reason why an artificial neural net shouldn't be able to do everything we can do.

NT: Are we a big neural net that is more efficient than these new neural nets we're building, or are we less efficient?

GH: It depends whether you're talking about speed of acquiring knowledge and how much knowledge you can acquire, or whether you're talking about energy consumption. So in energy consumption, we're much more efficient. We're like 30 watts. And one of these big language models, when you're training it, you train many copies of it, each looking at different parts of the data, so it's more like a megawatt. So it's much more expensive in terms of energy. But all these copies can be learning different things from different parts of the data. So it's much more efficient in terms of acquiring knowledge from data.
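
[Aside for technical readers: a minimal sketch of what "many copies each looking at different parts of the data" can mean in practice. Each replica computes a gradient on its own shard, the gradients are averaged, and every replica applies the same update, so knowledge gathered from any shard reaches all of them. This is a toy illustration of data-parallel training in general, not Google's setup; the task, learning rate, and replica count are invented.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task: recover the weights of a linear function from data split across replicas.
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(3000, 3))
y = X @ true_w + 0.01 * rng.normal(size=3000)

n_replicas = 4
shards = np.array_split(np.arange(len(X)), n_replicas)  # each replica sees different data

w = np.zeros(3)   # every replica starts from, and stays at, identical weights
lr = 0.1

for step in range(200):
    grads = []
    for shard in shards:
        Xs, ys = X[shard], y[shard]
        err = Xs @ w - ys                          # this replica's error on its own shard
        grads.append(2.0 * Xs.T @ err / len(shard))
    w -= lr * np.mean(grads, axis=0)               # average the gradients: each copy now
                                                   # benefits from data it never saw

print(w)  # close to [2.0, -3.0, 0.5]
```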

NT: And it becomes only more efficient because each system can train each next system?

GH: Yes.

NT: So let's get to your critique. So the best summarization of your critique came from a conference at the Milken Institute about a month ago, and it was Snoop Dogg. And he said, "I heard the old dude who created AI saying this is not safe because the AI's got their own mind, and those motherfuckers going to start doing their own shit." Is that accurate? Is that an accurate summarization?

GH: They probably didn't have mothers.

NT: But the rest of what Dr. Dogg said is correct?

GH: It's bang on. Yeah.

NT: All right, so explain what you mean or what he means and how it applies to what you mean when they're going to start doing their own shit. What does that mean to you?

GH: Okay, so first I have to emphasize we're entering a period of huge uncertainty. Nobody really knows what's going to happen. And people whose opinion I respect have very different beliefs from me. Like, Yann LeCun thinks everything's going to be fine, they're just going to help us. It's all going to be wonderful. But I think we have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control. And if they do that, we're in trouble.

NT: Okay, so let's go back to that in a second, but let's take Yann's position. So Yann LeCun was also one of the people who won the Turing Award and is also called a godfather of AI. And I was recently interviewing him, and he made the case. He said, look, technologies, all technologies can be used for good or ill, but some technologies have more of an inherent goodness. And AI has been built by humans, by good humans, for good purposes. It's been trained on good books and good text. It will have a bias towards good in the future. Do you believe that or not?

GH: I think AI that's been trained by good people will have a bias towards good, and AI that's been trained by bad people, like Putin or somebody like that, will have a bias towards bad. We know they're going to make battle robots. They're busy doing it in many different defense departments. So they're not going to necessarily be good since their primary purpose is going to be to kill people.

NT: So you believe that the risks of the bad uses of AI, whether they turn out greater or smaller than the benefits of the good uses, are so substantial that they deserve a lot of our thought right now?

GH: Certainly, yes. For lethal autonomous weapons, they deserve a lot of our thought.

NT: Well, okay, let's stick on lethal autonomous weapons. Because one of the things in this argument is that you are one of the few people who is really speaking about this as a risk, a real risk. Explain your hypothesis about why superpowerful AI combined with the military could actually lead to more and more warfare?

GH: Okay, I don't actually want to answer that question. There's a separate question. Even if the AI isn't super intelligent, if defense departments use it for making battle robots, it's going to be very nasty, scary stuff, and it's going to lead—even if it's not super intelligent, and even if it doesn't have its own intentions, it just does what Putin tells it to. It's going to make it much easier, for example, for rich countries to invade poor countries. At present, there's a barrier to invading poor countries willy-nilly, which is you get dead citizens coming home. If they're just dead battle robots, that's just great. The military-industrial complex would love that.

NT: So you think that because it's sort of a similar argument that people make with drones. If you can send a drone and you don't have to send an airplane with a pilot, you're more likely to send the drone, therefore you're more likely to attack. If you have a battle robot, it's that same thing squared.

GH: Yes.

NT: And that's your concern?

GH: That's my main concern with battle robots. It's a separate concern from what happens with superintelligent systems taking over for their own purposes.

NT: Before we get to superintelligent systems, let's talk about some of your other concerns. So in the litany of things that you're worried about, obviously we have battle robots as one. You're also quite worried about inequality. Tell me more about this.

GH: So it's not certain, but it's fairly clear that these big language models will cause a big increase in productivity. So there's someone I know who answers letters of complaint for a health service. And he used to write these letters himself, and now he just gets ChatGPT to write the letters, and it takes one fifth of the amount of time to answer a complaint. So he can do five times as much work, and so they'll need five times fewer of him. Or maybe they'll just answer a lot more letters.

NT: Right. Or maybe they'll have more people because they'll be so efficient. Right? More productivity leads to more getting done.

GH: Maybe not.

NT: This is an unanswered question.

GH: But what we expect in the kind of society we live in is that if you get a big increase in productivity like that, the wealth isn't going to go to the people who are doing the work or the people who get unemployed. It's going to go to making the rich richer and the poor poorer, and that's very bad for society.

NT: Definitionally? Or you think there's some feature of AI that will lead to that?

GH: No, it's not to do with AI. It's just what happens when you get an increase in productivity, particularly in a society that doesn't have strong unions.

NT: But now, there are many economists who would take a different position and say that over time—and if you were to look at technology—we went from horses and buggies, and the horses and buggies went away. And then we had cars, and oh, my gosh, the people who drove the horses lost their jobs. And ATMs came along, and suddenly bank tellers no longer needed to do that. But we now employ many more bank tellers than we used to. And we have many more people driving Ubers than we had people driving horses. So the argument an economist would make to this would be, yes, there will be churn and there will be fewer people answering those letters, but there'll be many more higher cognitive things that will be done. How do you respond to that?

GH: I think the first thing I'd say is a loaf of bread used to cost a penny. Then they invented economics, and now it costs $5. So I don't entirely trust what economists say, particularly when they're dealing with a new situation that's never happened before.

NT: Right.

GH: And super intelligence would be a new situation that never happened before. But even these big chat bots that are just replacing people whose job involves producing text, that's never happened before. And I'm not sure how they can confidently predict that more jobs will be created than the number of jobs lost.

NT: I'll just add a little side note: in the green room, I introduced Geoff to two of my three children, Ellis and Zachary, who are somewhere out here. And he said to Ellis, "Are you going to go into media?" And then he said, "Well, I'm not sure media will exist." And then Ellis was asking, "What should I do?" And you said—

GH: Plumbing.

NT: Yes. Now explain. I mean, we have a number of plumbing problems at our house. It would be wonderful if they were able to put in a new sink. Explain what jobs—there are a lot of young people out here, not just my children, but thinking about what careers to go into. What are the careers they should be looking at, what are the attributes of them?

GH: I'll give you a little story about being a carpenter. If you're a carpenter, it's fun making furniture, but it's a complete dead loss because machines can make furniture. If you're a carpenter, what you're good for is repairing furniture or fitting things into awkward spaces in old houses, making shelves in things that aren't quite square. So the jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled. And plumbing is that kind of a job.

NT: Because manual dexterity is hard for a machine to replicate?

GH: It's still hard, and I think it's going to be longer before they can be really dexterous and get into awkward spaces. That's going to take longer than being good at answering text questions.

NT: But should I believe you? Because when we were on stage four years ago, you said that as long as somebody has a job that focuses on reasoning, they'll be able to last. Isn't the nature of AI such that we don't actually know where the next incredible improvement in performance will come from? Maybe it will come in manual dexterity?

GH: Yeah, it's possible.

NT: So actually, let me ask you a question about that. So do you think when we look at AI and we look at the next five years of AI, the most impactful improvements we'll see will be in large language models and related to large language models, or do you think it will be in something else?

GH: I think it'll probably be in multimodal large models. So they won't just be language models; they'll be doing vision. Hopefully they'll be analyzing videos. So they'll be able to train on all of the YouTube videos, for example. And you can understand a lot from things other than language. And when you do that, you need less language to reach the same performance. So the idea that they're going to be saturated because they've already used all the language there is, or all the language that's easy to get hold of, that's less of a concern if they're also using lots of other modalities.

NT: I mean, this gets at another argument that Yann, your fellow godfather of AI, makes, which is that language is so limited. Right? There's so much information that we're conveying just beyond the words. In fact, I'm gesturing like mad, which conveys some of the information, as well as the lighting and all this. So your view is that may be true, language is a limited vector for information, but soon it will be combined with other vectors?

GH: Absolutely. It's amazing what you can learn from language alone, but you're much better off learning from many modalities. Small children don't just learn from language alone.

NT: Right. So if your principal role right now was still researching AI, finding the next big thing, you would be doing multimodal AI and trying to attach, say, visual AI systems to text-based AI systems?

GH: Yes, which is what they're doing now at Google. Google is making a system called Gemini that, fortunately, Demis Hassabis talked about a few days ago. That's a multimodal AI.

NT: Well, let me actually talk about something else at Google. So while you were there, Google invented the transformer network, the transformer architecture behind generative pretrained transformers.

When did you realize that that would be so central and so important? It's interesting to me because it's this paper that comes out in 2017, and when it comes out, it's not as though firecrackers are shot into the sky. It's five or six years later that we suddenly realize the consequences. And it's interesting to think, what are the other papers out there that could be the same in five years?

GH: So with transformers, it was really only a couple of years later when Google developed BERT. So BERT made it very clear transformers were a huge breakthrough. I didn't immediately realize what a huge breakthrough they were.

And I'm annoyed about that. It took me a couple of years to realize. BERT made it clear.

NT: The first time I ever heard the word transformer was talking to you on stage, and you were talking about transformers versus capsules. And this was right after it came out. Let's talk about one of the other critiques about language models and other models, which is that soon, in fact probably already, they've absorbed all the organic data that has been created by humans. If I create an AI model right now and I train it on the Internet, it's trained on a bunch of stuff, mostly stuff made by humans, but also a bunch of stuff made by AIs. Right? And you're going to keep training AIs on stuff that has been created by AIs, whether it's a text-based language model or a multimodal language model. Will that lead to the inevitable decay and corruption, as some people argue? Or is that just a thing we have to deal with? Or is it, as other people in the AI field argue, the greatest thing for training AIs, and we should just use synthetic data?

GH: Okay, I don't actually know the answer to this, technically. I suspect you have to take precautions so you're not just training on data that you yourself generated or that some previous version of you generated. I suspect it's going to be possible to take those precautions, although it'd be much easier if all fake data was marked fake. There is one example in AI where training on stuff from yourself helps a lot. So if you don't have much training data, or rather you have a lot of unlabeled data and a small amount of labeled data, you can train a model to predict the labels on the labeled data. And then you take that same model and train it to predict labels for unlabeled data. And whatever it predicts, you tell it you were right, and that actually makes the model work better.

NT: How on earth does that work?

GH: Because on the whole, it tends to be right. It's complicated. It was analyzed much better many years ago for acoustic modems; they did the same trick.
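
[Aside for technical readers: the trick Hinton describes, training a model on its own predictions for unlabeled data, is usually called self-training or pseudo-labeling. Below is a minimal, hypothetical sketch using scikit-learn; the synthetic dataset, split sizes, and confidence threshold are my choices for illustration, and the second score typically, though not always, comes out a bit higher than the first.]

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "a lot of unlabeled data and a small amount of labeled data".
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_labeled, X_unlabeled, y_labeled, _ = train_test_split(
    X_train, y_train, train_size=100, random_state=0)  # pretend only 100 labels exist

# Step 1: train on the small labeled set.
model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
print("labeled data only:", model.score(X_test, y_test))

# Step 2: predict labels for the unlabeled data, keep the confident predictions,
# and tell the model "you were right".
probs = model.predict_proba(X_unlabeled)
confident = probs.max(axis=1) > 0.9
pseudo_labels = probs.argmax(axis=1)[confident]

# Step 3: retrain on real labels plus pseudo-labels. Because the model is right
# more often than not on the examples it is confident about, this usually helps.
X_combined = np.vstack([X_labeled, X_unlabeled[confident]])
y_combined = np.concatenate([y_labeled, pseudo_labels])
model = LogisticRegression(max_iter=1000).fit(X_combined, y_combined)
print("with pseudo-labels:", model.score(X_test, y_test))
```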

NT: So listening to this, I've had this realization on stage. You're a man who's very critical of where we're going. Killer robots, income inequality. You also sound like somebody who loves this stuff.

GH: Yeah, I love this stuff. How could you not love making intelligent things?

NT: So let me get to maybe the most important question for the audience and for everyone here. We're now at this moment where a lot of people here love this stuff and they want to build it, and they want to experiment. But we don't want negative consequences. We don't want increased income inequality. I don't want media to disappear. What are the choices and decisions and things we should be working on now to maximize the good, to maximize the creativity, but to limit the potential harms?

GH: So I think to answer that, you have to distinguish many kinds of potential harm. So I'll distinguish, like, six of them for you. There's bias and discrimination. That is present now; it's not one of these future things we need to worry about. It's happening now. But it is something that I think is relatively easy to fix compared with all the other things, if you make your target not a completely unbiased system, but just a system that's significantly less biased than what it's replacing. So at present, you have old white men deciding whether young black women should get mortgages. And if you just train on that data, you get a system that's equally biased. But you can analyze the bias. You can see how it's biased because it won't change its behavior; you can freeze it and then analyze it, and that should make it easier to correct for bias. So, okay, that's bias and discrimination. I think we can do a lot about that, and I think it's important we do a lot about that, but it's doable.

The next one is battle robots. That one I'm really worried about, because defense departments are going to build them, and I don't see how you could stop them doing it. Something like a Geneva Convention would be great, but those never happen till after they've been used. With chemical weapons, it didn't happen till after the First World War, I believe. And so I think what may happen is people will use battle robots. We'll see just how absolutely awful they are, and then maybe we can get an international convention to prohibit them. So that's two.

NT: I mean, you could also tell the people building the AI to not sell their equipment to the military. You could try. Okay, number three?

GH: Military has lots of money.

NT: Number three?

GH: Number three, there's joblessness. You could try to make sure that some of the extra revenue that comes from the increase in productivity goes to helping the people who are made jobless, if it turns out that there aren't as many jobs created as destroyed. That's a question of social policy. And what you really need for that is socialism. We're in Canada, so you can say socialism.

Number four would be the warring echo chambers, due to the big companies wanting you to click on things that make you indignant, and so giving you things that are more and more extreme. And so you end up in this echo chamber where you believe these crazy conspiracy theorists if you're in the other echo chamber, or you believe the truth if you're in my echo chamber. That's partly to do with the policies of the companies, and maybe something could be done about that.

NT: But that is a problem that exists. It existed prior to large language models and in fact, large language models could reverse it.

GH: Maybe.

NT: I mean, it's an open question of whether they can make it better or whether they make that problem worse?

GH: Yeah, it's a problem to do with AI, but it's not to do with large language models specifically.

NT: It's a problem to do with AI in the sense that there's an algorithm using AI trained on our emotions that then pushes us in those directions. Okay. All right, so that's number four.

GH: There's the existential risk, which is the one I decided to talk about because a lot of people think it's a joke.

NT: Right.

GH: So there was an editorial in Nature yesterday where they basically said, fear-mongering about the existential risk is distracting attention from the actual risks. So they compared existential risk with actual risks, implying the existential risk wasn't actual. I think it's important that people understand it's not just science fiction. It's not just fear-mongering. It is a real risk that we need to think about, and we need to figure out in advance how to deal with it. So that's five, and there's one more, and I can't think what it is.

NT: How do you have a list that doesn't end on existential risk? I feel like that should be the end of the list.

GH: No, that was the end, but I thought if I talked about existential risk, I'd be able to remember the missing one, but I couldn't.

NT: All right, well, let's talk about existential risk. Explain exactly what the existential risk is and how it happens, or explain, as best you can imagine it, what it is that goes wrong that leads to the extinction or disappearance of humanity as a species.

GH: Okay. At a very general level, if you've got something a lot smarter than you that's very good at manipulating people, just at a very general level, are you confident people will stay in charge? And then you can go into specific scenarios for how people might lose control, even though they're the people creating this and giving it its goals.

And one very obvious scenario is if you're given a goal and you want to be good at achieving it, what you need is as much control as possible. So, for example, if I'm sitting in a boring seminar, and I see a little dot of light on the ceiling, and then suddenly I notice that when I move, that dot of light moves. I realize it's the reflection from my watch. The sun is bouncing off my watch. And so the next thing I do is I don't start listening to the boring seminar again. I immediately try and figure out how to make it go this way and how to make it go that way. And once I got control of it, then maybe I'll listen to the seminar again.

We have a very strong built in urge to get control, and it's very sensible because the more control you get, the easier it is to achieve things. And I think AI will be able to derive that too. It's good to get control so you can achieve other goals.

NT: Wait, so you actually believe that getting control will be an innate feature of something that... the AIs are trained on us. Right? They act like us. They think like us because the neural architecture makes them like our human brains and because they're trained on all of our outputs. So you actually think that getting control of humans will be something that the AIs almost aspire to?

GH: No, I think they'll derive it as a way of achieving other goals. I think in us, it's innate. I think... I'm very dubious about saying things are really innate, but I think the desire to understand how things work is a very sensible desire to have, and I think we have that.

NT: So we have that, and then AIs will develop an ability to manipulate us and control us in a way that we can't respond to. Right? And even though good people will be able to use equally powerful AIs to counter these bad ones, you believe that we still could have an existential crisis?

GH: Yes. It's not clear to me. I mean, Yann makes the argument that the good people will have more resources than the bad people, that good AI is going to be more powerful than bad AI, and that good AI is going to be able to regulate bad AI. I'm not sure about that. We have a situation like that at present, where you have people using AI to create spam and people like Google using AI to filter out the spam. At present, Google has more resources, and the defenders are beating the attackers. But I don't see that it'll always be like that.

NT: I mean, even in cyber warfare, there are moments where it seems like the criminals are winning and moments where it seems like the defenders are winning. So you believe that there will be a battle like that over control of humans by super intelligent artificial intelligence?

GH: It may well be, yes. And I'm not convinced that good AI that's trying to stop bad AI getting control will win.

NT: Okay. All right. So before this existential risk happens, before bad AI does this, we have a lot of extremely smart people building a lot of extremely important things. What exactly can they do to most help limit this risk?

GH: So one thing you can do is, before the AI gets super intelligent, you can do empirical work into how it goes wrong, how it tries to get control, whether it tries to get control. We don't know whether it would. But before it's smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might go wrong, understanding how it might try and take control away. And I think the government could maybe encourage the big companies developing it to put comparable resources into that, maybe not equal resources, but right now there are 99 very smart people trying to make it better and one very smart person trying to figure out how to stop it taking over. And maybe you want it more balanced.

NT: And so this is, in some ways, your role right now, the reason why you've left Google on good terms: you want to be able to speak out and help participate in this conversation so more people can join that one and not the 99?

GH: Yeah, I would say it's very important for smart people to be working on that, but I'd also say it's very important not to think this is the only risk. There's all these other risks. And I've remembered the last one, which is fake news. So it's very important to try, for example, to mark everything that's fake as fake. Whether we can do that technically, I don't know, but it'd be great if we could. Governments do it with counterfeit money. They won't allow counterfeit money because that reflects on their sort of central interest. They should try and do it with AI-generated stuff. I don't know whether they can, but.

NT: All right, we're out of time. Give one specific "to do," something to read, a thought experiment, one thing to leave the audience with so they can go out here and think, okay, I'm going to do this. AI is the most powerful thing we've invented, perhaps in our lifetimes, and I'm going to make it better, to make it more likely it's a force for good in the next generation.

GH: So how could they make it more likely to be a force for good?

NT: Yes. One final thought for everyone here.

GH: I actually don't have a plan for how to make it more likely to be good than bad. Sorry. I think it's great that it's being developed, because we didn't get to mention the huge numbers of good uses of it, like in medicine, in climate change, and so on. So I think progress in AI is inevitable and is probably good, but we seriously ought to worry about mitigating all the bad side effects of it and worry about the existential threat.

NT: All right, thank you so much. What an incredibly thoughtful, inspiring, interesting, phenomenal talk. Thank you to Geoffrey Hinton.

GH: Thank you.

