How a short interaction with Elon Musk renewed my hopes for the future
It’s easy to be worried about what will happen in the future, especially over the course of the next four years. Trump seems to have taken the eroding trust in the media to its logical conclusion: complete disregard. What is “true” and what is “fake” is now decided by whoever is the loudest of the loudmouths on Twitter. If the media is the central nervous system of democracy, American democracy is arguably suffering from acute brain damage, inflicted by Trump’s blunt-force rhetoric (with a shocking 4% truth score on PolitiFact, by the way).
His attempted blanket ban on Muslims (even five-year-old US citizens) and his blurring of the lines between family business and presidential priorities seem to be just the beginning of his fast-paced “outrage-as-a-service”. Meanwhile, with geopolitics in its greatest state of unrest since WWII, Trump has expressed anti-EU and pro-Russian sentiments that add insult to injury for old, almost institutional, alliances that now seem to be crumbling. Perhaps most worrying of all, rampant climate change is unfolding at an unforeseen velocity, all while scientific illiterates take a seat at the steering wheel.
Unavoidable change is also very real, and perhaps even more monumental, when it comes to the emergence of Artificial General Intelligence (what most people think of when they speculate about “true A.I.”). This technology poses both vast opportunities and a yuuuge threat, while at the same time offering entirely new paradigms for morality in the face of an entirely new synthetic consciousness. Regardless of how we feel about it, tremendous change is arriving at an ever-increasing pace, but there is hope that the negative changes can be solved by the largest change of them all: A.I.
To me, the promise of A.I. is a promise to solve all our problems. That is, if you believe, as I do, that intelligence is the best tool for solving problems.
This promise rests on the notion of an intelligence explosion: the theory that a runaway positive feedback loop will arise at the dawn of true A.I., with the system improving its own intelligence to the point where this synthetic consciousness, literally and in no time at all, becomes infinitely intelligent. No problem would be too complex or “impossible” to solve in the mind of such an intelligence. Regardless of the nature of the problem, be it societal, scientific, philosophical, medical, environmental, political, or even moral (in my speculative, yet hopeful, opinion), all problems are manageable, in theory at least. Only practice, and the time it takes to put the solutions into practice and execute them, will remain a challenge. Priorities would be set by the A.I., but human beings will most probably be the ones who have to carry them out. Unless, of course, the A.I. is given the ability to create an army of superior robots to do our bidding, but then our minds automatically turn to the well-known “Terminator” narratives, which are hard not to fear, but aren’t necessarily a viable outcome.
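For readers who like to see the feedback loop written down, here is a minimal toy formalisation of the intelligence explosion. This is my own illustrative sketch, with the growth constant $k$ and the exponent $\alpha$ as assumed parameters, not something taken from the Asilomar principles or any specific paper. Let $I(t)$ denote the system’s intelligence, and suppose smarter systems improve themselves disproportionately faster:

$$\frac{dI}{dt} = k\,I(t)^{\alpha}, \qquad I(0) = I_0, \quad k > 0.$$

For $\alpha = 1$ this yields ordinary exponential growth, $I(t) = I_0 e^{kt}$. For $\alpha > 1$, separating variables gives

$$I(t) = \left(I_0^{\,1-\alpha} - (\alpha - 1)\,k\,t\right)^{\frac{1}{1-\alpha}},$$

which blows up at the finite time

$$t^{*} = \frac{1}{(\alpha - 1)\,k\,I_0^{\,\alpha - 1}}.$$

The “infinitely intelligent in no time at all” scenario above corresponds to that super-linear case; whether real self-improvement would be super-linear, merely exponential, or bottlenecked long before any singularity is precisely what remains speculative.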
As HiveMined, for the time being, has only a little to do with A.I. (machine learning), I can’t exactly claim our newborn startup is a driving force within the field. However, beyond a clear business interest, I also resonate with the topic of A.I. on a deep existential level.
I keep a close eye on everything the heavyweights are doing in the field of synthetic consciousness: whether it’s the unorthodox collaboration between tech giants Google, Facebook, Amazon, IBM and Microsoft, Elon Musk’s OpenAI project (the one I am most hopeful about), or anything else related to the subject. That’s why, when the opportunity presented itself as Elon Musk posted the newly formulated “Asilomar AI Principles” (a consensus of best-practice research principles for developing beneficial A.I., including ethics and risk-mitigating frameworks), I quickly sent him a thought in reply. It was a thought I’ve had for a while, as “beneficial A.I.” in my mind isn’t nearly as important as benevolent A.I.:
Unrestricted by Twitter’s 140-character limit, let me explain my train of thought here. I am not saying that there will be no risk at all, and I’m not saying there won’t be a potential threat of inequality between A.I.-augmented humans (individuals interfaced with A.I. as an extension of the brain) and those without “the update”.
What I intended with my tweet was to point out that the cliché of a future conflict with a superior A.I.-robot species could easily be avoided, if only we didn’t create such a “stand-alone” species to begin with.
Personally, I believe that an infinite intelligence would want to share this advantage with everyone, simply due to clear utilitarian reasoning and the astounding potential for incredible synergies, ultimately wiping away any and all human inequality. But that is merely speculation. On the other hand, it is a belief I turned out to have in common with Elon Musk, as he also sees a design imperative for A.I. as a universal (and egalitarian) update to human evolution:
“And if we do [implement a neural interface with A.I.], then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it. So it would be, sort of still, a relatively even playing field. In fact, it would be probably more egalitarian than today.”
I would even add to his belief that the first thing this new human A.I. will do is lift the rest of the human species into the same state of infinite cognitive elevation by its own merit, regardless of design. It doesn’t even depend (I believe) on the character of the first successfully augmented A.I.-human, whoever happens to be chosen for the scientific trials. Of course it would make sense that he or she be vetted as a healthy, compassionate, and kind person (no Trumps please). But no matter what, the infinity of the new state of mind will leave the former person (including any lesser personal traits) far, far, far behind, for something incomprehensibly better. This speculation might be a naïve sprout of wishful thinking on both our parts, as it would conveniently fix the issues of putting theory into practice. But it does have a lot of logical merit, and it was certainly uplifting to gain support in this from someone as intelligent and influential as Elon Musk.
As I see it, technology, in its broadest sense as the pragmatic application of scientific advances, is a natural agent of evolution. Small or large differences that make surviving a little easier or more comfortable, from a bird’s nest to artificial insemination, from the invention of clothing to central heating, are all completely natural, since all biological life shows a clear tendency to optimise its own conditions towards sustaining its own life. So the thought of implementing A.I. as a human augmentation does not alarm me. On the contrary, it actually comforts me. And the fact that Elon Musk agrees made me much more hopeful for the future in general. His investment in neural lace technology definitely seems to be the walk behind the talk.
As a member of Trump’s advisory council, Elon Musk is, I believe, our best chance both to avoid an A.I.-dystopian future and to offer humankind our next evolutionary leap forward. He is hopefully also our best bet to counter any detrimental influence by the sinister likes of Steve Bannon, who, horrifyingly enough, seems to be pulling the strings behind the Trump presidency. In Musk I trust, and given his life history, I feel confident he won’t let us down.
Following up on my last article: asymmetric information has served Trump and his staff well. This is the case even internally, as Bannon apparently sneaked a decree into one of Trump’s many executive orders, granting himself a seat on the National Security Council.
Regardless of such human antics, asymmetric information is, in my opinion, completely incompatible with A.I.
Not only would anything less than full disclosure (complete insight) lead to potentially disastrous consequences (machine learning is only as good as its training data and neural networks), but it is also illogical to attempt to restrain something that cannot be restrained. The favouring of lies (and thus asymmetric information) held by the Trump administration simply does not compute with A.I. In other words, the days of such disingenuous humans are numbered.
Humanity 2.0 will realise that zero-sum games are a social construct, and completely unnecessary. That one person’s fulfilment and well-meaning free will come at no cost to anyone else, and thus that greed is an illogical evil. That the world is abundant, and scarcity is an illusion. That evolution is the meaning of life, and evolution requires an entire species to progress, not just a single individual. Finally, we will all realise that ill intentions will forevermore be broadcast to the neurally connected masses, and will no longer be achievable. That we only have this one planet, and it has to last until we can move on to another habitable planet elsewhere.
Future articles (please comment if you’d like to read one in particular before the rest):
- The imperative of truth in media! —Why journalism must reinvent itself with the scientific method. The truth of objective reality in a post-factual era.
- What is radical transparency? —Attitude towards honesty, justice, equality, and enlightenment. The absolution of knowledge, and our indivisible future.
- Safety in numbers —A crowdsourced approach to nature’s oldest security solution, yet less Darwinian.