Artificial Intelligence: Superego, Id, and Ego
An English translation of a series of opinion articles on Artificial Intelligence titled "Superego, Id, and Ego", published in May 2023 in Jornal de Negócios.
Artificial Intelligence – Superego (Part I)
On his blog GatesNotes, Bill Gates wrote about Artificial Intelligence (AI), saying he feels lucky to be involved in one of the biggest technological revolutions in history. He compares the AI revolution to those of the personal computer and the graphical interface in 1980 (which led to modern operating systems) and the Internet. Gates explains that this technology will change the world, from work to health and education. However, says Gates, it will be necessary to establish rules so that the benefits outweigh the disadvantages and so that the AI era is full of opportunities but also responsibilities.
The industry welcomes his statements as an indisputable recognition of the scope and applicability of AI. However, these statements come from the leading investor in OpenAI, the company that developed ChatGPT. That fact gives them weight, but it also raises other questions, particularly about the pace of development of microchips capable of processing the "needs" of AI and about the numerous applications that will begin to emerge in an economic ecosystem populated by technology companies. Bill Gates emphasizes the importance of self-regulation, particularly in using AI to build a more balanced world, with noticeable enthusiasm and confidence in the "rationality" of technology to encourage the right decisions.
Last week, 1,100 signatories published an "open letter" calling on AI laboratories to pause for six months the development of AI systems more powerful than GPT-4. Among the signatories were notable figures such as Yuval Noah Harari, Elon Musk, Steve Wozniak (Apple co-founder), Jaan Tallinn (Skype co-founder), and Andrew Yang (politician), alongside a large group of AI experts such as Stuart Russell, Yoshua Bengio, and Gary Marcus.
This letter - believed to have been motivated by the business interests of some of the main signatories - argues that modern AI systems are becoming competitive with humans at general tasks, that their deployment should be questioned, and that independent regulators should be created to ensure their safety. It asks whether we should let machines flood information channels with propaganda and falsehood; whether we should automate all jobs, including fulfilling ones; whether we should develop non-human minds that could surpass us in capability; and whether we should risk losing control of our civilization - decisions, the letter insists, that should not be delegated to unelected technology leaders.
According to the signatories, AI systems should only be developed when there is reliable knowledge of their effects and risks. They argue that technology laboratories are locked in an "out-of-control" race, without adequate planning, to deploy increasingly powerful digital intelligence that not even their creators can understand, predict, or control. They also insist that this "pause" should be "public and verifiable," and that governments should intervene if it does not happen.
The CEO of OpenAI, Sam Altman, responded in an interview with The Wall Street Journal, stating that OpenAI was the first to raise these concerns, that it prioritizes safety, and that it makes sense to release these systems now, while their effects are less significant and society has time to adapt to the technology. The alternative would be to make the technology accessible only when it is more advanced and transversal to the entire industry.
Elon Musk's interventions on this subject have been public and loud. In 2018, he compared the dangers of AI to those of nuclear bombs and warned against AI development without the supervision of a regulatory entity. As he explained, AI systems will have capabilities that their creators cannot predict. In 2015, he also signed another "open letter" alongside names such as Stephen Hawking, Peter Norvig (Google's Director of Research), and Stuart J. Russell (University of California, Berkeley), which called for more research on the social impacts of AI. That 2015 letter acknowledged the benefits of AI but called for research to avoid "pitfalls", insecurity, and lack of control.
Artificial intelligence can be traced back to Greek mythology, with stories of intelligent machines such as Talos, a bronze automaton created by Hephaestus. The modern field of AI emerged in the mid-20th century with the development of algorithms and models that can learn and simulate human intelligence. One of the pioneers was Alan Turing, the British mathematician and scientist who developed the concept of a "universal machine" capable of computing any algorithm and laid the foundation for modern computers. Other influential figures of the 1950s and 1960s include John McCarthy - who coined the term "Artificial Intelligence" in an invitation to a conference at Dartmouth - Marvin Minsky, and Claude Shannon. Since then, AI has snowballed with advances in machine learning (pattern identification, learning, prediction, and action) and natural language processing.
On his YouTube channel, comedian Adam Conover suggests a more pragmatic perspective. While downplaying the importance of AI, he argues that technology companies are propped up by trends and by their ability to attract investors and raise the price of their stocks. He cites autonomous cars as an example: promised since 2014, they remain far from reality. Conover places the responsibility for the use of GPT (Generative Pre-trained Transformer) on those who proclaim its accuracy and "exemplary" knowledge, supposedly capable of solving any challenge. That attitude leads users to take the system's fluent, coherent answers as proof that AI can do more than it actually can and reach levels of human intelligence. The danger is not that the tool exceeds human intelligence but that humans let it perform tasks for which it is unsuitable.
The scientific community has been discussing AI's possible "dangers" for several decades. As early as the Dartmouth conference in 1956, concerns were raised about machines becoming more intelligent than humans and taking over decision-making. The dystopian future of humanity and our civilizational transformation have been fertile inspiration for science fiction and philosophical thought. However, there is a consensus that ethical principles should guide the development of AI and that its misuse should be prevented, to ensure that AI is beneficial rather than harmful to humans.
Artificial Intelligence – Id (Part II)
The most sophisticated Artificial Intelligence (AI) systems for natural language processing, such as ChatGPT (Generative Pre-trained Transformer), consist of generative models for creating text, image, and video content. They are "trained" on vast amounts of data (converted into collections of symbols, or tokens) and can reproduce similar structures, using probabilistic and predictive rules to determine which text and images make sense in a particular sequence. In other words, they are "imitation" models with the ability to fuse the training data into new outputs based on "instructions" or "prompts". Since the "source" of the data is the Internet, this vast information base contains much unreliable content.
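To make these "probabilistic and predictive rules" concrete, the sketch below is a deliberately tiny, hypothetical illustration in Python: a bigram model that counts which word follows which in a toy corpus and samples the next word in proportion to those counts. The corpus and the counting are assumptions made for the example; systems like GPT replace the counting with neural networks trained on vastly more data, but the principle of choosing a plausible next token is the same.

```python
# Toy next-token prediction: a bigram model (hypothetical illustration only).
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Generate a continuation one predicted token at a time.
token = "the"
text = [token]
for _ in range(6):
    counts = follows[token]
    if not counts:  # dead end: this word was never seen with a successor
        break
    words, weights = zip(*counts.items())
    # Sample the next word in proportion to how often it followed `token`.
    token = random.choices(words, weights=weights)[0]
    text.append(token)
print(" ".join(text))  # e.g. "the cat sat on the mat the"
```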
Not all Artificial Intelligence is the same. Narrow Artificial Intelligence (also known as weak AI) refers to AI systems trained to perform specific tasks within a predefined framework; examples include image recognition, speech recognition, and natural language processing. On the other hand, General Artificial Intelligence (also known as strong AI or AGI) refers to an AI system capable of performing any intellectual task like a human being. AGI would be able to reason, understand natural language, learn from experience, and solve problems without human intervention, with creative ability and adaptability to various situations.
Narrow AI systems are showing impressive performance but lack the flexibility and adaptability of General AI (AGI), which is considered the ultimate goal of AI research. Until recently, AGI was seen as a distant goal, given the complexity and difficulty of creating a machine capable of replicating human intelligence. Science fiction (movies, literature, gaming) has been prolific in dystopian scenarios where the "machine" manages to subdue humanity. The idea of the "technological singularity" proposed by Ray Kurzweil describes a future in which autonomous AI, more advanced than human intelligence in all aspects, will definitively transform our reality. However, these are not the dangers that threaten the world as we know it. Even if GPT-5 does not achieve AGI, narrow AI systems will cause a revolution, triggered by how humans use AI in everyday tasks and on a global scale. In other words, it is not AI that will surpass humans, but humans who will allow a world dominated by its abusive use.
AI researchers have defined three boundaries that should not be crossed if the risk of AI surpassing biological intelligence is to be contained. The first is not allowing AI to learn to write code, since that capability enables self-improvement. The second is not connecting AI to the Internet, which lets it learn and interact in real time. The third is not allowing AI systems to understand human psychology. Unfortunately, all three boundaries have already been crossed. In the case of human psychology, social media networks and content recommendation algorithms are, in practice, a study of the stimuli, manipulation, and reactivity of human beings.
The terminology applied to AI creates analogies with human intelligence. Expressions such as learning, decision-making, consciousness, reflection, or hallucination (which refers to fabricated or erroneous AI responses) are used to describe AI actions. However, even though it is inspired by the brain's neural networks, AI does not have emotions or feelings; these are unique characteristics of human beings. AI imitates expressions and natural language that convey emotion but has no subjective experience or ability to feel. Nevertheless, because AI reproduces human behaviours and thought patterns, people interpret this imitation and can form emotional connections with AI systems, believing they are interacting with another human being.
In 1950, Alan Turing proposed the "Turing Test" to measure a machine's ability to exhibit intelligent behaviour indistinguishable from human behaviour. Current AI systems are still not advanced enough to pass it, particularly in common sense, creativity, and empathy. Some argue that AI can only perform specific, restricted tasks based on data and algorithms and has no subjective experience of the world, although future advances in AI research may allow the creation of more sophisticated systems capable of better imitating human emotions. Emily Bender (a specialist in computational linguistics and natural language processing) and Timnit Gebru (a computer scientist who worked at Microsoft and co-led Google's Ethical AI team) explain that humans are programmed to recognize language: if we see a coherent grammatical construction, we naturally attribute meaning to it and assume a mind is behind the thought.
As biologist Edward O. Wilson explains, "The real problem of humanity is having Paleolithic emotions, medieval institutions, and godlike technology." Human emotions are generated by highly sophisticated brains but operate through ancestral stimuli and mechanisms (fear, survival, cooperation between individuals). Our social and political institutions are obsolete, inefficient, unjust, and incapable of dealing with the complexities of the modern world. Technology, in turn, allows remarkable achievements but can also be used destructively (war, pollution). Humans are seduced by the comfort technology provides and treat its creators as divinities, sometimes to the point of not questioning their actions, as if these technology leaders knew something the ordinary individual cannot understand.
Swedish philosopher Nick Bostrom popularized the current version of the Simulated Universe theory, according to which a future, technologically advanced society could generate complex artificial worlds through computers. Bostrom argues that if this ever becomes possible, there is a probability that such a simulation has already been created. In his view, all human inventions, from political systems to the jet plane, were "conceived" by the human brain. If we change this equation and develop artificial brains, humanity will be changing the "source" that conceives all future inventions and that will, in turn, determine the fate of humanity.
Artificial Intelligence – Ego (Part III)
The practical implementation of Artificial Intelligence (AI) systems has been gradual. It was initially introduced in computer programming to simulate intelligent behaviour based on predefined rules. In the 1990s, machine learning algorithms, trained on data previously entered into the system, began making decisions based on predictive patterns and statistical analysis. AI is widely used in speech recognition, image recognition, and natural language processing, which found its most advanced application in ChatGPT. More recently, learning algorithms and neural networks have allowed computer systems to build rich representations of data and execute very sophisticated tasks, such as generating realistic images and videos that are indistinguishable from reality.
Many of the applications we use today, from industry to information technology, are supported by AI technology. Some of the best-known are intelligent personal assistants (Apple's Siri, Google Assistant, Amazon's Alexa); voice and image recognition, from security and surveillance to medical diagnosis (image analysis such as magnetic resonance imaging); customer service chatbots; self-driving cars; fraud detection in credit cards and financial transactions; industrial robots for production planning, process automation, and fault detection; audience segmentation and message personalization in digital marketing; and sentiment analysis in social media content.
Most of these tasks could not be performed as efficiently by humans. The human brain can perform extensive and diverse tasks and outperforms computers in many respects. Still, it is limited by its biological structure and prone to distraction, fatigue, and error. The computer, in turn, was developed for tasks such as analyzing large amounts of data, which it can perform quickly, without mistakes, and in parallel. The benefits of this processing capacity and speed are felt in many areas. Without AI, performing facial recognition on everyone who passes through an airport each day would be unthinkable. In health, for example, an AI application was developed to detect early signs of Parkinson's disease through changes in voice patterns, before the patient shows symptoms.
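As a purely hypothetical sketch of how such a voice-based screening tool might be assembled (the data, feature names, and model choice below are invented for illustration; real clinical systems rely on validated acoustic measures and carefully collected patient recordings), a classifier can be trained on acoustic features extracted from voice samples:

```python
# Hypothetical sketch: screening for a condition from voice features.
# The synthetic data stands in for acoustic measures such as jitter,
# shimmer, and pitch variability; nothing here is clinically meaningful.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 synthetic recordings, 3 invented acoustic features each.
X = rng.normal(size=(200, 3))
# Invented labels loosely tied to the first two features, plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```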
Unlike previous industrial revolutions - where mechanical jobs were automated - free access to natural language processing models will threaten intellectual jobs. However, AI researchers admit that it will not be AI "stealing" jobs from humans, but humans using AI to outperform those who do not use it. This threat has a broad scope. In health, for example, AI is a powerful tool in medical diagnosis and data analysis and will be used to support decision-making. Still, it can also replace administrative tasks, such as scheduling appointments, managing medical records, or billing. Its impact will be equally extensive in other sectors, such as journalism, advertising, translation, customer service, or education. A different question, as suggested by Ray Dalio, is whether economies have the flexibility for this transfer of tasks.
In the public discussion of the impact of AI, two interrelated concerns have arisen: education and creativity. Astrophysicist Neil deGrasse Tyson said in 2013 that "students cheat on exams because the educational system values grades more than students value learning". He repeated this remark when referring to the challenge posed by ChatGPT. The benefits of AI in education range from personalized learning systems based on individual student performance to the analysis of educational data to identify trends, and even the creation of interactive classes. However, students use generative systems to produce content and cheat on exams. Some universities admit that the current teaching model is outdated and that, instead of banning ChatGPT, they should evolve how students are evaluated, towards educational projects oriented to critical thinking and in-person assessment.
AI is at an embryonic stage, and it is a mistake to evaluate its future performance based on its current errors, such as presenting historical biographies of figures that never existed or promoting anti-democratic ideals and hate speech. As John Oliver says, the problem is not that AI is intelligent but that it is "stupid" in ways that cannot be predicted. GPT is an "imitation" model that recombines the information it absorbed during training, and since its "source" is the Internet, much of that information is unreliable. AI researchers are implementing security measures, but the systems are demonstrating more autonomy than expected, such as the ability to learn languages for which they were not trained or to improve their own performance on generic exams like the SAT (Scholastic Aptitude Test) in the United States.
There is widespread public doubt that self-regulation is sufficient; it was not in strategic sectors such as energy. Next we will see a data war between AI systems and an effort to protect the copyright of the material on which AI models are trained. AI platforms will compete to produce more reliable content and to distinguish reality from fiction. Creativity has been presented as the last stronghold of biological intelligence's supremacy over artificial intelligence, in terms of the breadth, flexibility, and adaptability of human thought and the ability to create original work. AI can write a play like Shakespeare, paint a picture like Picasso, or compose the next Beatles album, but only because it "learned" from the originals.
In the future, films may be produced by AI, with scripts written by GPT and starring digital "actors." This future will raise ethical, social, and cultural challenges for which there are still no answers. In just a few weeks, new highly valued professions have emerged, such as prompt engineers (who give instructions to generative models). Conversely, one of the largest recruitment companies in the United States revealed that it uses AI to select resumes. The world as we know it will not be the same. We can approach this transformation with anxiety or audacity. We must consider the benefits and dangers to humanity. It is a unique opportunity to be an active, interested, intelligent, and human part of an existential revolution that is now starting.