5 Variations of Artificial Intelligence
Murat Durmus
To better understand the question of whether machines can think, it may help to differentiate the dichotomy between "strong" and "weak" AI a little further and compare it with a scheme suggested by the philosopher Keith Gunderson. He distinguishes between the following AI "games":
1. Weak AI, task, non-simulation: The computer can perform tasks that previously required intelligence, but no intelligence is required of the machine, and its internal states need not correspond to human or any other cognition.
2. Weak AI, simulation, non-human: A computer can simulate the cognitive processes in a non-human brain, but the states of the machine may or may not be related to those in the non-human brain.
3. Weak AI, simulation, human: A computer can simulate human cognitive processes, but there is no specific correlation between the computer states and the cognitive states of the brain.
4. Strong AI, non-human: The machine has genuine cognitive states, but they are not functionally identical to those in the human brain and therefore cannot be used to recreate human thought processes.
5. Strong AI, human: The cognitive states of the machine are functionally (though not physically) identical to those found in the human brain.
Murat
(Excerpt from the book "The AI Thought Book")
Senior Independent Researcher (2 years ago):
I think the whole concept of "weak AI" is an oxymoron. It's like saying that a paper clip is a weak handcuff. All those systems labeled "weak AI" may well be useful (or damaging, or hurtful, or useless), but branding them "weak AI" is just a marketing ploy. As for the variations suggested by Keith Gunderson, they seem to mix up the "what" (what capabilities a system has) with the "how" (how the system delivers this performance). They also ignore the "precision" of the how: in a simulation of the nervous system of a cockroach, for example, how close is the simulation to reality? Does it take all known factors into account? Could there be unknown factors? And they ignore the distinction between simulation and emulation: if a system is different from a human brain but capable of a corresponding complexity of response, can it then not be "strong AI", or, for that matter, "strong I" (if "I" means any "naturally developed" intelligence)? If we should ever encounter a space-faring alien society, should we not then consider them "strongly intelligent", even if their nervous system and "processing states" look quite different from ours?
EY AI Leader | Consulting · AI · Actuarial | Helping businesses innovate, transform and scale with AI (2 years ago):
The end state is likely to be super-human rather than human. Also, the progression may not be so linear: there are many things today that are already super-human, including face recognition.