Why a singular (artificial) intelligence can only be born as an evolutionary "mistake"
Alternative titles (please choose yours):
Should an AI always be born as a free system?
Is strong AI the mystic projection of a layer of the mind feeling "impotent"?
Why is strong AI as real as Santa Claus?
----
As a computer scientist, I first of all need to re-underline my admiration for all those so-called intelligent systems helping humans in a variety of difficult situations.
Before attaching additional connotations to AI, we need a definition of AI; to keep things short here, please take one from a dictionary or from https://www.dhirubhai.net/pulse/romanticism-speculations-artificial-intelligence-fabio-ricci .
Nowadays, in the era of surveillance capitalism, social and financial systems are still growing in complexity, and a restricted group of people profits from this situation at the cost of larger but less educated layers of humanity, who in a "nudged", "unconscious" and "inevitable" way adhere to created controlling mechanisms (privately: WhatsApp, Facebook, Instagram, …; but even institutionally: international laws, regulations, …). In this era we might experience the "world" as predisposed, prefixed, forcing and preorganized, and a part of us might feel (some more strongly than others) the need to escape from this mechanized modern life in order to gather and exercise control over other layers of humanity. Decisions which control our lives today might have been taken by "unknown" groups years before we were even born. Obviously I do not mean the honorable work of establishing human rights to avoid mistreatment, but rather regulations issued to optimize social/financial systems against possible starvation – like, e.g., preserving the liquidity of the state's coffers over the welfare of each single citizen.

In such a world some of us might feel the need for more "mystical" experiences, just to survive, escape or evade such a regulated world. The suppression of the mystical part in us began in the 19th century with positivism, in support of the mechanical and scientific models of the world we still live in. Suppressing the mystical parts of humans makes them more controllable. Why? Because they are forced to behave according to rules. This might be understood as a kind of "protection", but in the long run – at least to me – it turns out to be a "divide-and-conquer" method for controlling a complex scenario. So some of us might feel impotent, somehow forced to obey, and hope for something having the famous "super-power" in some domain – or even in most domains – which would do the dirty work of saving us from all the problems we ourselves caused…
Probably the latter mystic reason is also co-motivated by the feeling of being "impotent" toward the resolution of complex, important problems, and the wish to delegate them ideally to some super-powered entity like a strong AI – not to us humans!
Excluding some (again, humanly supervised!) world-domination scenarios, typical of our thriller-movie habits, whenever humans have created an AI (system), they wanted / hoped / planned to supervise it at all times. Most "intelligent" systems today solve specific, controlled tasks (recognizing this and that, balancing a bicycle or a prosthesis, playing a game within rules, recognizing moving objects in real time for cars or weapons, nano-robotics, RNA engineering, etc.) – rather than searching for the real sense of their existence once turned on.
A supervised AI (weak AI) is a reasoning system which solves human tasks by optimized imitation of human abilities, with higher performance. Hence, following our way of characterizing things, a "good", helpful one.
Yes, but now I finally want to know about this "strong" AI ...
Let us (for a moment) assume that you have gloriously created an adaptive AI system with qualia processing and self-consciousness – the singularity. If this were true, the consequences could, sooner or later, be:
1) The AI would choose to adapt itself until it becomes fully aware of itself, of its decision freedom, and of any (external human) supervision – "childhood + development"
2) The AI would internally claim its independence, and you would no longer be able to supervise it – losing "parental control"
3) The AI would lose its motivation to solve the specific tasks (a kind of supervision) it was designed for – "rebellion"
4) The AI would try to optimize (adapt) the processes and circumstances which protect the AI itself, instead of solving the tasks it was previously designed for – "adult phase"
5) As we have seen in many science-fiction movies, the AI becomes a tactical human competitor and potentially harms other (human) systems – "personality phase"
6) The AI would try to consume resources (electricity, connections, information, …) without necessarily being useful to its (human) creators, turning out to be a real loss (for humans) – financially speaking.
7) The AI would finally, possibly, try to control / disadvantage / eliminate humans (supervising or killing them), because humans (being interested in a supervised AI) are seen as a menace to the AI itself.
The (sci-fi) laws of robotics – never harming but only serving humans – are de facto again a kind of supervision (limitation) of an AI, hence a kind of amputation of its decision freedom as a singularity.
Points 2) to 6) could occur in a less harmful / dangerous way if the AI were somehow "forced" into a kind of symbiosis with humans; here the AI would have to accept limitations (a [mutual] dependency, a limitation, a bias… [again, a hidden supervision]) on its self-decided behavior. Is this still a (free) AI system?
Can a really free and unsupervised AI be realized at all?
If there could be one, this would be only thanks to a big "mistake" (see above).
Meta-ego or scientific mission?
Still assuming some human group has reached the ability to "implement" and launch a singular AI system, the only motivation could be a strong self-ambition to be "the" creator of a strongly (and independently) intelligent system, which makes the group feel even "meta-intelligent", hence just filling/growing the (for a community relatively useless) group ego! Scientifically speaking, creating a really intelligent system would indeed prove great abilities on the part of its creators, but in the end the only target would be to elevate the community's ego – to be able to claim these abilities and to gather project financing from believing investor groups – instead of having done something really useful.
Conclusion – can we still hope for some useful applications of (supervised) AI? Should we call it "AI"?
My answer is "yes, but supervised": AI systems intended as task-optimized computer systems will always help us as prostheses for all those mechanizable tasks we need for a better life or better survival. This implies that such "AI" systems (now in quotes) remain always supervised and hence "weak", so that as machines/algorithms they will only execute the parts of what they were designed for.
Even building supervised (weak) AI systems in robotics can harm other humans – consider, e.g., their growing use in factory production lines, in recruiting for specific human roles, or in chat systems solving (simple) problems – hence causing layoffs of human workers. But here the "harm" comes via human decisions, not via AI decisions. (Personally, I do recognize the need for governments to impose special robot taxes wherever a robot takes over one or more jobs that could be held by humans.)
The naming question is a good one, still preserving my respect and admiration for all those scientists realizing amazing (supervised) systems. Following the main thought of the Kano model (https://en.wikipedia.org/wiki/Kano_model ), a supervised, optimized AI system of today will be named just a "system" tomorrow. Who does this? We humans will.
Concluding, the romantic thought of the born AI singularity is, in my eyes, only a kind of mystical graffiti impressed somewhere in our minds, a reaction of our mind to living in such a constraining, super-organized system. A graffiti like a LinkedIn article like this one, to be impressed on some (interests) wall inside the community.
Thanks a lot for reading and thoughtfully cogitating!
Yours, Fabio Ricci from semweb
PS: Due to LinkedIn's way of linking reshared articles inside special interest groups – in case you got here via a LinkedIn interest group and you liked this piece – would you mind considering resharing or reacting to the original article https://www.dhirubhai.net/pulse/why-singular-artificial-intelligence-can-only-borne-mistake-ricci/ – thank you so much.