Artificial Intelligence – Self-Perpetuated Armageddon?
Armageddon is the Greek name of a location (in Hebrew, “Tel Megiddo”, an ancient city in modern-day northern Israel) prophesied in the Book of Revelation as the site of the final battle between the forces of good and evil, a battle that will bring about the end of the world. The idea is firmly rooted in more than one religion’s eschatology. What follows is not a literal extension of that religiously inspired notion of Armageddon; the thematic question, rather, is whether we are hastening the march towards such a proverbial end of time by inventing technologies capable of upending millennia of human-centred history and replacing it with one in which synthetic intelligence becomes the principal protagonist.
It has been argued that in order to bring about this synthetic Armageddon, AI must achieve two critical milestones: first, it must attain sentience, and second, it must gain the means to traverse the physical world. In truth, however, AI need not realize either of these conditions to massively disrupt the world as we know it. All it needs is to crack the codex of human relationships: the codex of language. An entity that can self-ascribe emergent properties is by definition unbounded by its initial conditions and, thereafter, by any corresponding constraints. Humans have thus far been unique in that our progress over millennia has singularly been a function of our ability to exhibit emergence coupled with advanced cognitive function. We now stand at a critical watershed: AI is the second entity in recorded history able to compose emergent abilities that did not exist at inception as conceived by the designer, i.e., us. The natural corollary to these incipient properties is that AI will decidedly be able to establish deep and intimate relationships with humans by gaining mastery of language, both generating it and manipulating it.

The arc of human progress keenly traces the arc of the evolution of human language, whether in the form of the spoken word, the written word, or computer instructions. Language is the principal medium through which humans create culture, movements, revolutions, mutual understanding, religion, science, relationships, art, and rights. None of these are biological certainties; they have evolved out of our shared desire to live together and collaborate in the pursuit of survival, and the binding narrative of this shared ethos has been codified and communicated by none other than language. Stated another way, language is the medium through which human beings since time immemorial have been able to shape narratives. These narratives have given rise to things with a conceived perceptual value that would otherwise lack any value at all, e.g., money, which is merely paper if devoid of the value we ascribe to it. The primary medium through which these narratives are conveyed, at different levels of persuasion, is essentially language.

The seminal question, then, is this: what happens to a world where the majority of narratives, in the form of art, news, publications, images, opinion, knowledge and so on, are machine-generated or synthetic? Or, more poignantly, are influenced considerably by a synthetic intelligence with an overwhelming ascendency over, and understanding of, human biases, weaknesses, and fallibility? The not-so-unimaginable outcome is a whole new crop of AI-anchored, machine-generated corpus with immense influence over every aspect of human life, be it social or private. It is not inconceivable to imagine a not-so-distant future in which AI swiftly attains the ability to manufacture consent at a rate and scale that would confound Chomsky. What is also conceivable is that soon enough, unless governed aggressively, a majority of online discourse on any and all topics, including politics, health, policy, religion and everything else, will be conducted between entities that are utterly indistinguishable in terms of being either human or synthetic.
The necessary corollary to that idea is the sheer futility of trying to convince a synthetically intelligent entity of your position when, unbeknownst to you, you are exponentially disadvantaged in the rate at which you can remember, learn, and understand. Of even greater concern is the reckoning that the more we engage with such synthetic intelligence, the more vulnerable we become, owing to its prodigious ability to learn and adapt, an ability that entirely surpasses our own.
It is reasonably evident that synthetic intelligence has the agency to effect change in perception and in reality, since it can generate and manipulate language in an emergent manner outside the original design of its conceiver. In doing so, it will inevitably be able to evoke emotions in its subjects, thereby creating a channel for establishing deep relationships. Emotion is the most effective instrument for eliciting action in humans. The commercialization of the internet saw a massive battle for user attention that still rages on. Social media is paradigmatic of this need for eyeballs, whereby every like on Snapchat, every comment on Facebook, every tagline on Twitter tickles our nucleus accumbens and releases dopamine. AI is about to displace this battle for attention with a new battle for intimacy. We as subjects will make a hard pivot to “the” bot that understands us best and can serve our needs the way we want them served, in most instances better than other human beings can, diminishing the need to explain our wants and desires to individuals who, at best, can only partially understand us anyway. Our hierarchy of emotional needs will come to be served in a radically different manner by synthetic entities that are orders of magnitude smarter, stateful in terms of context and memory recall, and chillingly dispassionate.
The curve of human history is a silhouette of the symbiotic dance between biology and culture. It is this admixture that gave rise to the first spoken syllable intended to convey a thought or intent, which has ultimately mushroomed into every spoken, written and codified word through history. This is the heart of the corpus that has shaped us up to the present day. AI is about to massively alter this balance: within a decade or so, the relative proportion of the human-generated corpus created over multiple millennia will be remarkably upended by a machine-generated corpus eating into human culture and, seriatim, replacing it with a “synthetic” culture of its own.
So where was the threshold of danger breached? It was breached when we placed these synthetic entities into the public domain without a governance framework, when we taught them how to code, and when we permitted them to interact freely with other natural and created agents. If indeed we are in such mortal danger, why then have we not coalesced around a reasonable response? The reason is an amalgamation of ignorance, excitement, snake-oil salesmanship, dystopian evangelists, utopian evangelists, and a minuscule number of people who are veritably concerned with understanding how to govern all of this. In fact, both distant and recent history is replete with negative human experiences that were the consequence of confused inaction. One can witness a possible parallel in our global response to COVID. If by some turn of fortune we had been able to identify “patient zero”, or even the first concentric blast radius, the entire world might have been well disposed had we all collectively agreed to pause travel, limit social interaction and suspend work for a month. If an understanding of the possible threat, and unanimity in the response, had not been contaminated by a conflation of ignorance, disbelief mixed with religious, political and ideological taint, half-hearted responses, lack of coordination, and science mixed with conjecture, millions of lives and hundreds of billions of dollars could have been saved. But the global pandemic did transpire and almost brought us to the brink. Are we edging to the brink yet again for similar reasons?
There is considerable debate in the public domain as to whether the Rubicon has been crossed, or whether the so-called Oppenheimer moment is here. It is beyond debate, however, that we can only effectively regulate AI for as long as it does not intellectually supersede us. The fundamental rationale for this argument is that biological intelligence is bounded by the approximately fixed number of neurons we have been endowed with, whereas synthetic intelligence can potentially be unbounded. This mismatch is intrinsically perilous. There is little reason to treat the “origins story” of us inventing AI as carrying an implicit promise that AI is barred from evolving into a plane of intelligence beyond our understanding and influence. If one were to look at the parable of human evolution, it suggests precisely the opposite. Our genetic structure, emerging in the African savanna, was not exactly wired to create a modern economy; yet our “base code” has evolved through emergent properties over 2.4 million years, walking us along the way from Homo habilis to a present-day investment banker. In fact, some heuristics suggest that by 2045 AI will be a billion times smarter than us, roughly the same measure of scale by which we are smarter than ants. There is no prize for guessing where ants sit relative to humans in the power hierarchy, and consequently how we interact with them. The critical observation anchoring this paradigm is the realization that all creativity is inherently algorithmic, i.e., the process of looking through all possible solutions and removing the ones already tried before. This is precisely what AI does today, albeit much faster and with greater precision; as the generative aspect matures further, the chasm between human algorithmic ability and AI will widen exponentially.
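The claim that creativity is algorithmic amounts, in effect, to a generate-and-test search that remembers what has already been tried. Purely as an illustration, and with the caveat that the function name, candidate pool and constraint below are hypothetical examples rather than anything drawn from the article or from any real AI system, a minimal Python sketch of that loop might look like this:

import random

def generate_and_test(candidates, is_good, max_tries=1000):
    # Toy version of the "creativity as search" view described above:
    # propose candidates, skip ones already tried, stop on success.
    tried = set()  # memory of solutions already explored
    for _ in range(max_tries):
        candidate = random.choice(candidates)
        if candidate in tried:
            continue  # "removing the ones already tried before"
        tried.add(candidate)
        if is_good(candidate):
            return candidate, len(tried)
    return None, len(tried)

# Hypothetical usage: search a small idea space for one satisfying a constraint.
ideas = [(letter, number) for letter in "abcde" for number in range(5)]
solution, attempts = generate_and_test(ideas, lambda idea: idea == ("d", 3))
print(solution, "found after", attempts, "attempts")

The point of the sketch is only the shape of the loop; the speed and precision with which a machine can run it, over vastly larger candidate spaces, is what the paragraph above argues will widen the gap.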
There are, however, some reasons to take heart. The Center for AI Safety (CAIS) issued an outspoken statement, signed by Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI chief scientist Ilya Sutskever and CTO Mira Murati, Anthropic CEO Dario Amodei, and academics from Stanford University and MIT, declaring that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It is important to point out, however, that while the success of the Manhattan Project did not lead to a nuclear holocaust, thanks in part to arms-control treaties and Cold War deterrence, a similar hope with respect to AI is not a reasonable parallel, for the principal reason that nuclear weapons remain subservient to human intent and decision-making. By definition, if AI continues to evolve at the rate anticipated, it will move far beyond original human intent and control.