I think the fears over AI are overblown.
Originally posted on my blog on Dec 22, 2020.
As described by Naval Ravikant.
We’re nowhere close to general AI, not in our lifetimes. You don’t have to worry about it; it’s so overblown. It’s the Cassandra complex at work: you know, it’s fun to talk about the end of the world!
The reason I think AI is not coming anytime soon is that a lot of the advances in so-called AI today are what we call narrow AI. They are nothing more than pattern recognition: machine learning to figure out what that object on the screen is, or how to find that signal. There is nothing approaching what we call creative thinking. To actually model general intelligence, you run into all kinds of problems.
1. We don’t know how the brain works, at all.
2. We’ve never ever modelled a paramecium or an amoeba, let alone a human brain.
3. There is an assumption that all of the computation is happening at the cellular level, the neuron level. Whereas nature is very parsimonious: it uses everything else at its disposal. There is a lot of machinery inside the cell doing calculations that is intelligent and isn’t accounted for.
The best estimates are that it will take 50 years or more before we can simulate what is going on inside a cell near-perfectly, and perhaps another 100 years before we can build a brain that simulates what happens inside its cells. So saying “I’m just gonna model a neuron as ‘on’ or ‘off’ and then use that to build a human brain” is overly simplistic.
Furthermore, I would posit that there is no such thing as general intelligence. Every intelligence is contextual, shaped by the environment it is in, so we would have to evolve an environment around it. For the people peddling general AI, the burden of proof is on them. I haven’t seen anything that would lead me to believe we are approaching general AI. Instead, we are solving deterministic, closed-set, finite problems using large amounts of data, but it is not SEXY to talk about that!
We do not know how intelligence works. Most AI approaches say we are going to try to model how the brain works, and we model at the neuron level, saying this neuron is ‘on’ and that neuron is ‘off’. What I am saying is that a neuron is a cell, and inside that cell there is all this machinery operating the neuron, which is also part of the intelligence apparatus; you can’t just ignore that. As I said, nature is parsimonious: we’ve got this three-pound wetware object that can hold all this data, and nature has been very efficient. I just don’t think computers are anywhere close to holding that amount of data, with that complexity, with that holographic structure of the brain.
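For the curious, the “on or off” abstraction being criticized here is essentially the classic McCulloch-Pitts threshold unit from 1943, which can be sketched in a few lines. This is an illustrative sketch; the function name and parameters are mine, not from the text:

```python
# A minimal sketch of the "on/off" neuron model: a McCulloch-Pitts
# threshold unit. The entire cell is reduced to a weighted sum and a
# threshold, ignoring all the machinery inside the cell.

def binary_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold, else stay off (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With two binary inputs, equal weights of 1, and a threshold of 2,
# this unit computes logical AND:
print(binary_neuron([1, 1], [1, 1], 2))  # 1: both inputs on, neuron fires
print(binary_neuron([1, 0], [1, 1], 2))  # 0: one input off, neuron stays off
```

Everything a real neuron does beyond this comparison, all of the intracellular machinery the passage above points to, is simply absent from the model.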
I don’t think we can evolve a creature to be intelligent outside the boundaries of feedback in a real medium. If you raised a human being in a concrete cell with no input from the outside, they would have no feedback from the real world and would not develop properly. So I believe that dumping information into a thing is not enough. It has to have an environment to operate in and get feedback from; it needs context.