Artificial Intelligence: does it exist?
There are many high-level corporate plays to release better and better "AI" offerings. IBM has Watson (Sherlock was busy). Google has TensorFlow, and Cycorp (my favourite DARPA-funded initiative) has OpenCyc, which in my opinion is extremely exciting. But are any of these offerings a true, working AI?
An overly simple definition of AI might be something like "any code that can rewrite itself," which is a very Alan Turing style of argument. Self-editing code carries a memory inside its own code and should recursively "learn". But is intelligence really just about learning?
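To make that definition concrete, here is a minimal sketch of what "code that rewrites itself" could look like. Everything in it (the GUESS constant, the learn() helper, the idea of nudging a number toward a target) is my own illustrative assumption, not something taken from any of the products above:

```python
# A toy program that stores what it has "learned" inside its own source code.
# Each run nudges the GUESS literal toward a target and rewrites this file,
# so the memory genuinely lives in the code itself.
import re
import sys

GUESS = 5  # the program's current "belief"; this literal is rewritten below

def learn(target: int) -> None:
    """Move GUESS one step toward target, then write the new value back into this file."""
    new_guess = GUESS + (1 if target > GUESS else -1 if target < GUESS else 0)
    source = open(__file__, encoding="utf-8").read()
    source = re.sub(r"^GUESS = -?\d+", f"GUESS = {new_guess}", source, count=1, flags=re.M)
    with open(__file__, "w", encoding="utf-8") as f:
        f.write(source)
    print(f"guess was {GUESS}, is now {new_guess}")

if __name__ == "__main__":
    learn(int(sys.argv[1]) if len(sys.argv) > 1 else 10)
```

Run it a few times and it converges on the target, remembering its progress in its own text. It is self-editing code with a memory, and yet nobody would mistake it for intelligence, which is exactly the problem with the definition.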
My general argument is that if we are using human intelligence as the benchmark for AI, then we need to take four factors into account before we declare that we have built intelligence in a digital medium.
1] We honestly do not yet have a complete understanding of our own intelligence. Science primarily uses reductionism: the idea that you can study a car by studying its parts... which works well for cars, but in biology emergent properties are everywhere. Take Conway's Game of Life, or any cellular automaton, as an example. Conway wrote a "game" using only two simple rules (birth and survival), but what emerges is complexity. This complexity does not come from the parts; it comes from their interaction (see the sketch at the end of this point). A lot of bad research into cognition and intelligence assumes that consciousness comes from simple building blocks and is therefore a simple, "dumb" thing at the bottom of the pyramid. But what about the properties that emerge?
Honey has viscosity (thickness) not because its molecules are viscous, but because non-viscous molecules interact to create viscosity at the level of their interaction. Consciousness does not come from neurons; it comes from the interaction of neurons. Studying one neuron in isolation means you are no longer studying consciousness, because consciousness only emerges through complex interaction.
If we don't honour and model the true source of consciousness and cognition, how can we build machines with those properties?
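Here is the sketch promised above: a rough, minimal version of Conway's Game of Life (the grid size, random seed and step count are arbitrary choices of mine). The entire rulebook is the one expression inside step(), yet gliders, oscillators and still lifes emerge from it; the complexity lives in the interaction, not in the parts:

```python
# A minimal Game of Life on a small wrap-around grid, printed as text frames.
import random

SIZE, STEPS = 20, 10

def neighbours(grid, r, c):
    """Count the live neighbours of cell (r, c), wrapping around the edges."""
    return sum(grid[(r + dr) % SIZE][(c + dc) % SIZE]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

def step(grid):
    """Apply the two rules: survival (live cell with 2 or 3 neighbours) and birth (dead cell with exactly 3)."""
    return [[1 if (grid[r][c] and neighbours(grid, r, c) in (2, 3))
                  or (not grid[r][c] and neighbours(grid, r, c) == 3)
             else 0
             for c in range(SIZE)]
            for r in range(SIZE)]

grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(STEPS):
    grid = step(grid)
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), end="\n\n")
```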
2] Yes, yes, Deep Blue beat Garry Kasparov at chess, but it is worth noting that an average human chess player paired with an average computer program could still beat both the best human working alone and the best computer program working alone. One reason is that computers and humans specialize in different ways of thinking. Machines are perfect at logic; humans are great at pattern recognition, and these two models of computation are almost mutually exclusive. If you hear a new song by Linkin Park, even though you've never heard it before, you will go "Oh wait, I know that song!" because you are great at pattern recognition. If I play you a note on a piano and ask "What note is that?", you go "ummmmm"; a computer goes "C#".
Now, you can learn to be good at pitch, and a computer can learn Linkin Park, but the energy you need to invest to learn pitch is far more than a computer needs, and for a computer to learn Linkin Park is a huge investment compared to a human. AI must solve this challenge and become good at both; as the sketch below shows, one half of that pairing is already trivial for a machine. Linear algebra and nodes are, honestly, a terrible approximation of what humans do naturally.
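As a concrete illustration of the pitch half of that asymmetry, here is a rough sketch of how a machine "goes C#": it simply reads the pitch off a Fourier transform. The synthetic tone and the note-naming helper are my own assumptions; a real piano recording would need more care (harmonics, noise), but the point stands:

```python
# Identify the pitch of a note by finding the strongest frequency in its spectrum.
import numpy as np

SAMPLE_RATE = 44100
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(freq_hz: float) -> str:
    """Map a frequency to the nearest equal-tempered note name (A4 = 440 Hz)."""
    midi = round(69 + 12 * np.log2(freq_hz / 440.0))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def dominant_pitch(samples: np.ndarray) -> float:
    """Return the strongest frequency in the signal, via the peak of an FFT."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    return freqs[int(np.argmax(spectrum))]

# Stand in for the piano: one second of a pure C#5 tone (about 554.37 Hz).
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
tone = np.sin(2 * np.pi * 554.37 * t)

print(note_name(dominant_pitch(tone)))  # prints C#5
```

The reverse task, recognising that a song you have never heard before is "obviously Linkin Park", has no three-line equivalent, which is exactly the asymmetry above.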
3] Desire comes from human biology. The mind is not the only source of human cognition (which is distributed) or of consciousness (which comes in large part from biology). We hallucinate our consciousness from our biology. https://www.ted.com/talks/anil_seth_how_your_brain_hallucinates_your_conscious_reality
Without biology, what is hunger? What is drive? How does an AI build an image of itself and feel the need for self-preservation, vulnerability or love without a body? The TED talk above nails this point home perfectly.
4] Saving the best for last. A reasonable cook can prepare a reasonable meal; to become a great cook, he or she needs better recipes and better ingredients. But a master cook can prepare a tasty meal from corn and water, without a recipe. Computer AIs get smarter with better inputs and better programs; their decision-making is totally dependent on environmental factors. A human master is independent of his or her environment. How can code learn this, when learning it is really about unlearning? Can you train a program not to worry about its code, not to worry about iterative optimization?
In closing, how can we claim to have an AI when we don't understand emergent properties, when harmony between effective pattern recognition and logic has not been achieved (if it had been, a human-machine pair would not beat a computer working alone), when machines have no biology to contextualize their existence, and when they are dependent on their environment?
I would put forth that machines are learning to mimic intelligence, after a fashion, but the gap between mimicking and being is wider than one might think.