The end of history
I confess I have not yet read Yuval Noah Harari's books Sapiens and Homo Deus. He was on the Ezra Klein Show (https://itunes.apple.com/us/podcast/the-ezra-klein-show/id1081584611?mt=2&i=1000381998563), and one question piqued my interest:
- what would make the AI breed continuously change?
Some background from the discussion above: AI, even simple (but, ten years from now, highly efficient) AI, will do our highly specialized jobs better than humans. (This, of course, assumes everything goes to script, which it doesn't; something like the SCAN framework is helpful for thinking about what to do with complexity and chaos versus simple and complicated algorithmic problems. Shout-out to Tom Graves.)
In Harari's thinking this is a direct consequence of specialization. The jobs we do are so narrow that an artificially intelligent being will perform better in most of them. And paradoxically, setting the clock back 2,000 years, there's almost nothing that performs better than a human.
For economic and societal purposes, we are not needed. (I'll skip the question of what we do after we are no longer needed.) Harari makes the logical prediction: an artificially intelligent species will rule.
Back to the intriguing question: what would make the AI breed change, whether that means changing the world they rule, their workflow, or the work they do?
To process this question Harari makes a key distinction between intelligence and consciousness. We humans have them intertwined. That's why all SCAN processes are fractal and recursive, with the wicked problem embedded. (And that's the logic behind the passage in The Magus where the only one with a clear-eyed choice is the sociopath who threatens to kill a village to catch a member of the resistance. His consciousness and intelligence are distinct.)
I don't think there will be AC, Artificial Consciousness, in the near future. So how does AI decide to continuously change? To what end? What sets the direction?
In his book Sapiens (I haven't read it yet, but going by the discussion with Ezra) Harari claims that the human ability to spin up stories, a shared make-believe capability, has helped us rule the world. But the visions and goal-setting of any enterprise are not pure intelligence; they are consciousness at least as much, as Facebook, Google, and others are finding out right now.
So: if we move the enterprise with a mix of intelligence and consciousness, how will AI even be able to move? The way humans tend to think about this, a purely intelligent enterprise heads to the end of history.
Information systems... silicon and carbon... recognition based on identity architectures and values. We have "identity management" within us, in B2C systems and business, and also in social systems... and they all have different architectures, governance practices and values. I personally don't get the big AI excitement yet, because I don't see anyone mentioning the identity contexts and regimes being applied... but I am open to any thoughts in that space.
Scientist behind Software for Mod, Sim and Vis using Converged HPC / AI
7 yr
The evolutionary future of AI is uncertain. The evolutionary future of humanity is also uncertain. Those facts will drive people who need certainty in their lives nuts. These are complex dynamical systems, and THAT is life.
Technologist and advisor
7 yr
Currently the AI we develop is devoid of consciousness. Whether you see that as good or bad, we cannot fathom with our coupled brains (both intelligence and consciousness) how the future AI breed will develop. As I alluded to, we tend to call someone with intelligence but lacking consciousness a psychopath or sociopath. That is not a prediction of what a purely intelligent breed would be.
Harari's thesis is that the change we humans impart (sometimes called development, sometimes given less flattering labels) is based on a shared illusion, which is specific to us and to our intertwined intelligence and consciousness. That capability for a shared illusion (which, by the way, is terribly inefficient and error-prone) has moved us to build wonderful societies and to destroy the planet.
The near future of AI has, let's call it, a code horizon: code whose possible outcomes have been set by humans, and which is seemingly trivial. But with Machine Learning, AI will evolve. With nothing but intelligence, what will that look like? Without the shared, error-prone and inefficient illusions, what would propel that form of intelligence? Why would they continue toiling? Why not head to the figurative beach with margaritas?
If I were a science fiction writer I'd develop a plot where this intelligence takes on saving the world and ends up deciding that the correct course of action is to annihilate it all. For a logically intelligent being that is not madness; the math is the math. (Spoiler: code left there by the last revision of human intervention makes the being act.)
keeping silence and helping people to solve their problems with no buzz in media
7 yr
Looks quite pessimistic. As for me, it's a natural consequence of the logical approach. The thing is that logic has no intrinsic purpose. AI may have goals and targets, but no purpose, because of the purely logical nature of AI. If we don't feel purpose, we don't have it. It's a trick of definitions and word meanings. Consciousness is about con-science, awareness of a common higher purpose. If we reduce it to logical science, we lose real human nature, and yes, there is no sense in competing with logical machines. As if we reduced our life to running and got upset about the fact that cars are faster than us.
I look at the way we evolve our parietal lobe capabilities and think that survival, adaptation and purpose would be key themes... instincts, even. I do see some AI approaches on the automation/robotics side. So I am curious how AI-developed systems embed instincts and purpose.