Rebooting AI with Gary Marcus
Neuroscientist, AI entrepreneur & author Gary Marcus joined me on the latest episode of the Exponential View podcast. We discussed paths to powerful artificial intelligence, and how the currently favoured approach, deep learning, has limits.
You can listen to this & other episodes of the Exponential View podcast right here.
You’re well known for being a sceptic of deep learning. Why?
Gary Marcus: Deep learning is a major advance. The ideas aren't really that new, but the ability to use it practically is new. But there has not been anything like exponential growth in many deep learning domains, natural language processing for example. We still don't have any kind of system that can read a news story and tell you who did what to whom, when, where, and why. We couldn't do that in 1950 and we can't do that now in 2019. People are highlighting all the things that the systems have gotten better at, but they aren't measuring the things that the systems aren't very good at.
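To make that test concrete, here is a minimal sketch of the structured answer such a reader would have to produce. The EventFrame class, the field names, and the example sentence are our invention for illustration, not anything from the interview; the frame here is filled in by hand, which is exactly the part machines still can't do reliably.

```python
from dataclasses import dataclass
from typing import Optional

# A toy "event frame": the structured output a reading system would need
# to produce to pass the who/what/whom/when/where/why test Marcus describes.
@dataclass
class EventFrame:
    who: str
    did_what: str
    to_whom: Optional[str] = None
    when: Optional[str] = None
    where: Optional[str] = None
    why: Optional[str] = None

# Filled in by hand for one invented news sentence. The hard part is not
# the data structure but producing it reliably for arbitrary text,
# especially the "why", which is usually left implicit.
frame = EventFrame(
    who="the mayor",
    did_what="cancelled the parade",
    when="on Saturday",
    where="downtown",
    why="a storm was forecast",  # never stated outright; must be inferred
)
print(frame)
```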
So if deep learning isn’t a panacea, what do you see as the way forward for AI development?
Gary Marcus: That's really why we wrote our book, Rebooting AI. It's not that we think that AI is impossible. My coauthor, Ernie Davis, and I both want to see AI happen. We both think it can happen, but we don't think that more and more data and more and more compute by themselves are the solution to the problem. Obviously you would like to have more data, you'd like to have more compute. And that helps with problems like speech recognition, but it goes back to the issue of intelligence being multidimensional.
The problem is that intelligence involves many different things. You classify things, but you also make inferences. You do reasoning. When you read a children's story, or anything that you read, really, some things are spelled out, but most of them aren't. A story, a news story or a fiction story, that gave you every detail of what was going on and explained every inference, "This person must've been hungry, that's why they got their food," would be the most tedious thing imaginable. So any good writer tries to say the things that are not obvious and lets you infer the things that are obvious.
For a machine to cope with that, it has to be able to infer the things that are obvious to people. On that we've made no progress, and having more and more data by itself is not going to solve the problem. We need to go back to some things that people thought about in the early history of AI: reasoning, inference, and how you combine different kinds of knowledge. And I'm not at all saying that's impossible. I'm saying it's neglected, and that we need to come back to those questions, using all these wonderful deep learning classification tools that have been developed in the last few years but supplementing them with systems that can reason logically.
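As a toy illustration of the hybrid Marcus is pointing at, here is a minimal sketch in which a stubbed-out "classifier" emits symbolic facts and a hand-written rule base forward-chains the obvious inferences. Every fact and rule here is an assumption made up for the example; a real system would learn the perception side and need a vastly larger, uncertainty-aware knowledge base.

```python
# A learned component supplies perceptual facts; a symbolic layer draws
# the "obvious" inferences a story leaves unstated.

def perceive(scene):
    """Stub standing in for a deep learning classifier: raw input -> facts."""
    return {"person(anna)", "holds(anna, empty_plate)", "time(evening)"}

# (premises, conclusion) pairs: crude common-sense defaults, invented here.
RULES = [
    ({"holds(anna, empty_plate)", "time(evening)"}, "ate_dinner(anna)"),
    ({"ate_dinner(anna)"}, "was_hungry(anna)"),
]

def infer(facts):
    """Forward-chain the rules until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = perceive("photo of Anna with an empty plate")
print(infer(facts) - facts)  # {'ate_dinner(anna)', 'was_hungry(anna)'} (set order may vary)
```

The design point is the division of labour: classification maps raw input to symbols, and the reasoning layer, not more data, supplies the unstated "this person must have been hungry" step.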
Common sense is an important component in all of this. How do you go about building common sense in an artificial system, and who decides what common sense is?
Gary Marcus: I think those are really good questions that we don't have answers to yet. The challenge starts with the fact that common sense is not one thing, just as intelligence is not one thing. We define it as ordinarily held knowledge that you can expect any adult, for example, to have. But some of that knowledge is about how objects work, some of it is about how people work, some about how animals work, and there's material science too.
The first question is how do you even represent it? How do you program it into a machine? The biggest effort tried to do it all with logic, and logic is not very good at handling uncertainty. So somebody spent thirty years trying to encode all of this stuff in logic, and it hasn't really been that effective. Deep learning's Achilles' heel is that it doesn't really have a way of directly incorporating implicit knowledge.
So there is no way to tell a deep learning system that a bottle is something that can carry liquids, and that it might leak... So the first mission is to build a language for representing that stuff at all. The other part of your question is, of course, who decides? I mean, you could imagine a version of the French Academy legislating it, but I don't think the boundaries are that fixed.
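To see why the bottle example is hard for pure logic, here is a minimal sketch of default knowledge that exceptions can defeat rather than contradict. The confidence numbers and the "cracked bottle" exception are invented for illustration; this is our gloss on the representational problem, not a system proposed in Rebooting AI.

```python
from dataclasses import dataclass, field

@dataclass
class Thing:
    kind: str
    properties: set = field(default_factory=set)

# Default expectations: (kind, property, confidence). All numbers invented.
# Strict logic would say a bottle ALWAYS carries liquid; common sense says
# it usually does, unless something defeats the default.
DEFAULTS = [
    ("bottle", "can_carry_liquid", 0.95),
    ("bottle", "might_leak", 0.10),
]

# Exceptions override the defaults instead of producing a contradiction,
# which is where brittle logical encodings tend to break down.
EXCEPTIONS = {
    ("bottle", "cracked"): [("can_carry_liquid", 0.15), ("might_leak", 0.90)],
}

def expectations(thing):
    """Combine defaults with any exceptions the thing's properties trigger."""
    beliefs = {p: c for kind, p, c in DEFAULTS if kind == thing.kind}
    for prop in thing.properties:
        for p, c in EXCEPTIONS.get((thing.kind, prop), []):
            beliefs[p] = c  # the exception defeats the default
    return beliefs

print(expectations(Thing("bottle")))               # {'can_carry_liquid': 0.95, 'might_leak': 0.1}
print(expectations(Thing("bottle", {"cracked"})))  # {'can_carry_liquid': 0.15, 'might_leak': 0.9}
```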
You can listen to this and other episodes of the Exponential View podcast right here.
Foundation degree at Széchenyi István Egyetem
5y Yes, we have arrived at an essential question: people and artificial intelligence (AI) in everyday life. We can also thank the new millennium for 2020, when the first news items in the media will no longer be about which party did this or that; instead the press and media will deal with the development of our future and its still untapped territory. Silicon cities, yes: they no longer exist only on the drawing board; ever more widely, in large-scale construction and research projects, they accompany us both nationally and worldwide. The world's scientists, whose chief task is not destruction, joining forces at the international and global level, are creating ever newer silicon cities and a society of AI robots. Mark what I write, and when that day comes, take it out and publish it. Yes, artificial robots must be manufactured under strict conditions; with its capacity reallocated, the auto industry, as the receiving segment, stands ready to manufacture the first intelligent robots and transport and flying devices. I say the capacity is there. The silicon cities, those utopian constructions, will be controlled from a single centre, from a monitored facility in a security zone. With precious metal rather than paper as the means of payment, we also guard against senseless deforestation.
Electrical & Automation Engineer | IIoT | Machine Learning | Data Science | Data Engineering
5y I agree with Gary: the moment we also shed more light on the things that the systems aren't very good at, more focus on those areas will be triggered and may just give us earlier solutions. Deep learning's inability to incorporate implicit knowledge is truly a major challenge.
Founder at Tisquantum Limited
5y Data vs Meaning. To infer implicit connections, you need meaning, not data. AI is (mostly) data, for the very good reason that very few people have any idea how to encode meaning. Techniques such as deep learning are like playing golf in the dark - you might get close to the hole - but since you have no idea where the hole is, you are unlikely to hit it - and you would likely never know if you were even close. I'm not good at golf - but I have got my eyes open and the lights on - which gives me a bit of an edge.
Attorney - AI | Cybersecurity | Privacy - Support & Advocacy for CISOs
5y Azeem Azhar, the story about the mountain goats being “hard wired” for spatial perception to survive in the mountains reminded me of Immanuel Kant’s categories and his critiques of pure and practical reason. Wondering if examining AI via Kant’s thoughts might be useful...
Award-winning CTO | Management Consultant | Complexity Scientist | Professor of Informatics
5y I like the comment regarding #commonsense: "The first question is how do you even represent it? How do you program it into a machine? The biggest effort tried to do it all with logic, and logic is not very good at handling uncertainty. So somebody spent thirty years trying to encode all of this stuff in logic, and it hasn't really been that effective. Deep learning's Achilles' heel is that it doesn't really have a way of directly incorporating implicit knowledge."