How to prepare for AI: the survival guide to immortality
The advent of AI
Artificial Intelligence isn’t arriving; it’s here. Articles and books have anthropomorphised AI, catalogued its failure modes, and argued that humanity will struggle to navigate the intricate moral and ethical issues of this new paradigm. Some have been met with acclaim, for uncovering social issues that humanity has yet to face; others, with criticism, for unsupported fear-mongering.
These references largely concern artificial general intelligence (AGI): a machine able to carry out higher-order mental functions across a variety of contexts, as humans do. Experts differ on when we can expect to see AGI in society, but there is broad consensus that it is a question of when, not if.
Meanwhile, artificial narrow intelligence (ANI) — referring to the specialised tasks a machine or algorithm can do — is here, and it’s pretty good. It can do some tasks as well as humans (like distinguishing a puppy from a muffin) and some tasks better than humans (like playing chess).
Why is this significant?
- Cognitive implications: It takes each human years to develop the perceptual, sensory, and cognitive skills necessary to understand the concept of a muffin, recognise a muffin from multiple angles, and retrieve the word ‘muffin’ when presented with one. Once an image classifier is trained, however, it can be deployed to thousands of machines with a single software update — what takes humans years to acquire, machines can replicate in minutes (a minimal sketch follows this list).
- Economic implications: Unemployment is cited as a significant implication of ANI across a range of industries, including entertainment and news media, warfare, law, sports, transportation, and financial markets. In recent years, AI researchers, entrepreneurs, and policy-makers have proposed solutions to the externalities of AI development, from Universal Basic Income to statewide regulation.
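As a minimal sketch of the cognitive point above (assuming a recent version of torchvision; the file path is a placeholder): a trained classifier is just a file of weights that any machine can download and run in seconds.

```python
# A minimal sketch: loading a pretrained image classifier and running it.
# Assumes torch and torchvision are installed; "muffin.jpg" is hypothetical.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)  # weights arrive as a single download
model.eval()

preprocess = weights.transforms()  # the exact preprocessing the model expects

def classify(path: str) -> str:
    """Return the predicted ImageNet label for an image file."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        idx = model(image).argmax(dim=1).item()
    return weights.meta["categories"][idx]

# classify("muffin.jpg")  # e.g. 'muffin' vs. 'chihuahua', in milliseconds
```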
Is AGI the most intelligent AI?
No — introducing Superintelligence.
The idea of an intelligence explosion — a point where AIs become intelligent enough to improve themselves — was proposed by the statistician I. J. Good and later popularised by Ray Kurzweil. A software-based AGI could enter a recursive loop of self-improvement cycles, each producing a smarter system than the last, culminating in an intelligence that far surpasses our own — a Superintelligence (ASI).
How to prepare for AI
This article provides a guide to the strategies you can deploy to best equip yourself for the age of AI.
Understand the risks associated with artificial narrow intelligence, artificial general intelligence, and artificial superintelligence. This includes:
- Understanding the scope of current machine-learning algorithms, and how they are transforming the economy and disrupting industries.
- Being discerning about humanly unintelligible correlations generated by unsupervised learning algorithms.
- Understanding how algorithmic bias can occur:
- Algorithmic bias can arise from training data that is not representative.
For example, a training dataset with too few dark-skinned faces can yield a facial recognition system that fails to recognise ethnic minorities.
- Algorithmic bias can also be inherited from training data that encodes existing societal biases.
For example, in one study, a predictive NLP model trained on a large corpus of writing completed the analogy ‘man is to doctor as woman is to…’ with ‘nurse’ (a minimal sketch of this kind of probe follows this list).
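The analogy probe mentioned above can be expressed as simple vector arithmetic over word embeddings. A minimal sketch, assuming the gensim library and one of its hosted GloVe models (the exact completions depend on the corpus the embeddings were trained on, so outputs may differ from the study cited):

```python
# A minimal sketch of an embedding analogy probe that can surface bias
# learned from a text corpus. Assumes gensim is installed; the model is
# one of gensim's hosted datasets (~130 MB download on first use).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# "man is to doctor as woman is to ?" as vector arithmetic:
# doctor - man + woman, then look up the nearest words.
for word, score in vectors.most_similar(
        positive=["woman", "doctor"], negative=["man"], topn=3):
    print(f"{word}: {score:.3f}")
```

The point is that the model is not told anything about gender roles; whatever associations appear were absorbed from the statistics of the training text.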
Create a dialogue around the ethical and moral implications of new technologies as they are released.
It is the responsibility of individuals, entrepreneurs, companies, and policy-makers to foster productive, critical debate around new technological paradigms. This would build public awareness of the scope of current technologies, helping prevent a Turry-like case from ensuing — Tim Urban’s thought experiment in which a machine originally programmed to write notes ends up extinguishing humanity.
At the level of company-owned ANIs, individuals can best prepare by being aware of how their data is collected (e.g. cookies set when you visit websites), used (e.g. targeted advertising), stored (e.g. Snapchat keeps data on its servers), and biased at various levels (e.g. biased training data, leaving the service unable to handle edge cases).
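As a small illustration of the ‘collected’ point, here is a minimal sketch of inspecting the cookies a site sets on a single request (the URL is a stand-in, not a real target):

```python
# A minimal sketch: listing the cookies a website sets on one request.
# Assumes the `requests` library; the URL is a placeholder.
import requests

response = requests.get("https://example.com")

# Each cookie is a piece of state the site asks your browser to store
# and send back on every future visit -- the raw material of tracking.
for cookie in response.cookies:
    print(f"{cookie.name} = {cookie.value} (expires: {cookie.expires})")
```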
At the level of ASI, humanity could be construed as one more species on the balance beam of life, with a tripwire approaching: the advent of ASI could propel us into either extinction or immortality. The philosopher Nick Bostrom proposed this balance-beam model: every species pops into existence, teeters on the beam, and eventually falls into extinction — a state Bostrom calls an attractor-state, because once a species falls in, it never climbs back out.
There is a second attractor-state besides extinction: immortality. So far, every species has eventually fallen into extinction, but it is possible for a species to fall off the other side of the beam, into immortality.
A tripwire is an event that knocks a species off the beam: a massive, transformative shock such as a worldwide pandemic or an asteroid strike. Or artificial superintelligence.
Philosophers believe that ASI could be humanity’s tripwire, spiralling us into either extinction or immortality. At the largest scale, then, whether AI will be positive for humanity is inextricably intertwined with the question of our very survival.
An individual’s greatest asset through this period of uncertainty is knowledge: understanding the various theories of AI, being aware of the challenges and risks of its development, and reasoning about the outcomes that could result from combining innovation across fast-growing fields such as AI, biotechnology, and nanotechnology.
As innovators, we should be wary of capitalist incentive structures that promote innovation and reward investment in rapidly growing technologies at humanity’s expense. With high-impact technologies such as ASI, we should favour stable development over fast-moving disruption, testing capabilities at each stage to avoid an uncontrolled intelligence explosion.
As individuals, it is within our rights to lobby for policies that create ‘checkpoint’ levels for companies and research institutions pursuing AGI, with fiscal benefits for those that are transparent about their innovation, research, and development practices — incentivising the regulated development of AI and calibrating progress at a global scale.
My responses to some interesting objections raised after publishing this piece on another platform:
In response to an insightful private note positing that automation in certain industries could translate into lower costs, benefiting consumers and freeing up spending for other industries:
A premise I assumed for the purposes of this article is that most people view technology as having largely positive externalities. The incumbent view is that tech is an enabler — rightly so, since it has historically improved humanity’s quality of life (communication, education, healthcare, and so on). This piece focuses instead on the damage that could result to society and individuals from continuing to operate unquestioningly under that assumption.
First, I think the tech being developed today — research into AGI, quantum computing, and nanotech — has the ability to fundamentally alter the paradigm of humanity as we know it (explained in the ‘tripwire’ section above, as a technology that could propel us towards immortality). But that’s a bit esoteric, so let’s focus on the economic impacts.
Economically, I think unregulated AI runs the risk of being inherently inequitable. The example of AI automating legal processes is a good one — suppose that, in addition to the legal industry, AIs are deployed in financial markets to automate high-frequency trading, in healthcare to assist with diagnosis, prescription, treatment, and surgery, and across a range of other industries. Unless workers are retrained to use the new tech (by the company, the government, or themselves) at a rate equal to the rate of automation, there would be mass unemployment. And if there is unemployment across industries, then unless individuals are supported externally — through retraining programmes, alternate forms of employment, or, to an extent, regular unemployment benefits — they lack the means to spend sustainably and cannot capture the benefit of services offered at lower cost. This is inequitable: labour loses, while owners of capital — who can make their firms’ processes more efficient using AI — gain.
Additionally, competitors and players in other industries find it in their interest to follow the trend — automating production to become more efficient, causing further unemployment and reducing consumers’ disposable income — rendering AI-driven unemployment an economy-wide problem.
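To make the rate argument concrete, here is a toy model of my own (the rates are illustrative assumptions, not empirical estimates): each year automation displaces a fixed fraction of the employed, while retraining returns a fixed fraction of the displaced to work. Whenever displacement outpaces retraining, the unemployed pool grows.

```python
# A toy model of the argument above: unemployment accumulates whenever
# automation displaces workers faster than retraining re-absorbs them.
# All rates are made-up assumptions for illustration only.
def simulate(years: int,
             workforce: float = 100.0,
             automation_rate: float = 0.05,   # 5% of employed displaced/year
             retraining_rate: float = 0.03):  # 3% of unemployed rehired/year
    employed, unemployed = workforce, 0.0
    for _ in range(years):
        displaced = automation_rate * employed
        rehired = retraining_rate * unemployed
        employed += rehired - displaced
        unemployed += displaced - rehired
    return unemployed

for y in (5, 10, 20):
    print(f"after {y:2d} years: {simulate(y):.1f} of 100 workers unemployed")
```

Raising the retraining rate to match the automation rate stabilises the pool; the inequity in the argument above arises precisely when it lags.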