The Future of Humanity and AI: Where Reason and Risk Intersect
(Retitled from the 11/18/2018 Version)
QUESTION: What If We Develop a Symbiosis with Artificial Intelligence and Humanity Loses Control?
REPLY: What if We DON’T Develop a Symbiosis with Artificial Intelligence and Humanity Loses Control?
I’ve been pondering two separate streams of thought from two sources I’ve come to appreciate: Elon Musk and Simon Sinek. One might ask why the work of these two individuals would ever be addressed in the same breath. Elon Musk is an ingenious inventor of technological innovations. Simon Sinek is a thought leader and subject matter expert on the topic of leadership. My answer would be that the two seem (to me) to intersect in their views about the influence of brain physiology on human thought processing, and the impact of that thought processing upon the future of humanity. I believe their views about the way human beings think, when combined, lead to a profound insight about the nature of the ‘singularity’ Elon Musk has been referring to when discussing the very near-term potential impact of Artificial Intelligence, or ‘AI’, upon humanity.
Forgive me if I have difficulty explaining the concepts presented by each of these brilliant men in this regard, but I’ll do my best.
Simon Sinek’s concept, represented by the body of work associated with his book “Start with Why”, concerns how the questions “Why?”, “How?”, and “What?” drive behavior within organizational leadership. Mr. Sinek posits that today’s organizations tend to approach the answers to those questions from the outside in. That is, organizations begin by presenting “What” (to purchase), go on to explain “How” it is unique or set apart from competitive goods or services, and may or may not ever get around to explaining “Why” the organization is dedicated to how it does things or what it provides. In fact, he goes on to say, the opposite approach is better aligned with human nature. People do things for reasons. People who are aligned with the reasons a leader or organization does things are inclined to behave in ways that support that leader or organization. This is driven by and aligned with the way human beings think.
Simon Sinek describes the answering of these three questions, with “Why?” at the center of a layered circular diagram (“Start with Why”), as the “Golden Circle”. For additional information about the “Golden Circle”, I recommend you view his 18-minute TED Talk video available at:
He further explains that these three layers of thought are correlated with the physiological manner by which human brains function.
The most primitive, fundamental, underlying part of our brain, the ‘limbic brain’ or ‘paleocortex’, governs motivation, emotion, learning, and memory. These types of information processing correlate to ‘why’ humans feel. There is no spoken language associated with this part of the brain. It is intuitive, instinctive, and wordless. It is also extremely powerful. It drives much of human thought and action.
The outer layer of the brain, viewed as more recently evolved, is the part of our human brains where language processing, social and emotional processing, memory, and learning occur. It is known as the ‘neocortex’. This outer layer of the physiological brain, therefore, correlates with the ‘how’ and ‘what’ of human thought and action… all in response to human feelings.
A remark from one of Simon Sinek’s recorded discussions revealed the bottom line to me: what humans do, and how they do it, is based upon why it matters to them.
Within moments of having been absorbed by Mr. Sinek’s TED Talk, I noticed a couple of other interesting-looking videos recommended for me by the YouTube app. They were a couple of recordings of interviews with Elon Musk about Artificial Intelligence. I find Elon Musk intriguing and foresighted, so imagine my surprise when I found that Mr. Musk also considers the physiological layers of the human brain and distinguishes the limbic system (paleocortical system) from the neocortical system when proposing a third layer of thought processing that would take the form of human symbiosis with artificial intelligence.
In one of his most recent discussions about Artificial Intelligence (2018-10-02 Elon Musk Interview about AI: https://www.youtube.com/watch?v=B-Osn1gMNtw ), Musk spoke of it as a ‘threat’ to humanity, requiring regulation and oversight during development. The threat posed to humanity sounded nearly apocalyptic but avoidable. The underlying message was that, once AI had superseded human capability, human beings would be rendered nearly useless, relegated to the status of pets… if we were lucky. If we were unlucky, some seemingly inconsequential construct about AI’s role being to ensure human beings were ‘happy’ might lead to all human beings being unceremoniously and systematically rounded up and repeatedly injected with endorphins or serotonin.
Elon Musk pointed out that our instincts (our “Whys”) permeate the internet. Despite being based upon thoughts for which there are no words, the words we CAN and DO use, along with vividly graphic images, increasingly convey why we do things as a species. The digital profile of humanity, amassed throughout social networks, provides a vividly collective mirror of our basest and most noble instincts (hate, love, fear…). I am increasingly appalled at the things I learn other humans do, or sometimes even at things I have done (unwittingly). The information that troubles me today was not available earlier in my life, and certainly not to earlier generations… because the internet was not there. But now, we, as a species, are immersed in collectively shared experiences, powered by digitization and social media. Why would a nearly omniscient and omnipotent artificial intelligence, upon achieving self-awareness, risk keeping human beings around… based upon what it will immediately know about us? To what extent do human beings ponder decisions to kill troublesome insects or rodents… or even each other, for that matter?
When it apparently became clear to Mr. Musk that legislation to regulate AI development could not be passed quickly enough to prevent the most unfavorable potential outcome, he described having chosen a path that represented the “if you can’t fight it, join it” approach. Seeming quite depressed and resigned in his demeanor (2018-09-06 Elon Musk Interview (warning – colorful language from host, Joe Rogan): https://www.youtube.com/watch?v=Ra3fv8gl6NE ), Elon Musk reported that the best we could hope to do is to achieve a symbiosis between digital intelligence (AI) and human collective consciousness before a “singularity” occurs. This requires a broader bandwidth than the one we currently have with smartphones or computers. By directly joining humanity with AI, with digital intelligence, as quickly as possible, the outcome could potentially be the most benign one, preserving, at a minimum, some human ‘freedom’. This is the premise underlying the Neuralink mission (https://www.neuralink.com/ ).
If we overlay the Simon Sinek model (beginning with ‘Why’) on the Elon Musk model (a superhuman cyborg symbiote with a third level of consciousness), the question that comes to my mind is whether the answers to “How?” and “What?” will extend out into the next, third layer of the brain or, for lack of a better term, the “megacortex”… or whether we will need to circle back and revisit the “Why?”, having created a new “Who” and having muddied the previously more fixed natures of “When” and “Where”. Probably both, I would say. Maybe a question beyond Who? When? Where? Why? How? and What? will emerge. Maybe the next question will be "Why NOT?"
Elon Musk’s trepidation seems (to me) to be that we may not even be able to achieve the required symbiosis in time and that, even if we do, the actual outcome is unknowable because it lies beyond the event horizon of the singularity. His rationale for expeditious development of the required technology is that, by virtue of our recently developed dependence upon and increased information processing capability through smartphone technology, many human beings have already become cyborgs.
Interestingly, although the components required to achieve what I have (correctly or incorrectly) dubbed the “megacortex” here will include active implantable devices (ref.: 2017-04-21 blog post about Elon Musk’s Neuralink project: https://www.kurzweilai.net/elon-musk-wants-to-enhance-us-as-superhuman-cyborgs-to-deal-with-superintelligent-ai ), compliance with medical device regulations does not appear to be a concern. I believe Elon Musk’s position on this is that, once the singularity manifests, machine intelligence will take the project over and matters of human safety will simply no longer be subject to human controls. I think this speaks volumes about the risk of not achieving the targeted project outcome, the symbiosis between human and AI, in time.
I would love to hear your thoughts.