ARTIFICIAL INTELLIGENCE – THE "GPTIZATION" OF OUR DAILY LIVES
Every disruptive technology tends to be met with skepticism. I remember well the time when the internet scared us and was seen as a threat to jobs, even as it promised new opportunities. What's new is frightening, and often for good reasons. However, something curious happened with Artificial Intelligence: it was quickly embraced by many, perhaps without much time for reflection and adaptation. And now, I'm the one who's scared (maybe).
It was around mid-2022, during a routine Computer Vision class[1] in my master's program, that I was introduced to DALL-E 2, an AI model capable of generating images from textual descriptions. I had already studied Generative Adversarial Networks (GANs), so the ability to generate images from text wasn't exactly new to me, but what I saw that day exceeded any expectations. It was unprecedented: visual creations that seemed to have come directly from an artist's imagination. Below is the first image I created with DALL-E 2 (and please, don't ask me why I requested it to create this).
Shortly afterward, other similar tools began to emerge. One created non-existent landscapes, another sketched surreal architectural projects. Creativity had apparently become something "automatable," and I was just beginning to understand its impact.
CHATGPT: REALITY SURPASSING FICTION
Then came ChatGPT. In February 2023, while still struggling to fully grasp the intricacies of Natural Language Processing (NLP)[2] techniques, I stumbled upon a tool that not only reviewed my Python[3] code but also ventured into deep sociopolitical comparisons. I asked an absurd question, just to test it: "How does the current political polarization in the United States compare to pre-Hispanic societies in Latin America, from the perspective of hyper-partisan news?" The response[4] was not only coherent but also thought-provoking.
Suddenly, AI was no longer confined to specific technical or creative fields. Within months, there was an explosion of courses and videos on "how AI can change your life," as well as a new profession: Prompt Engineer[5]. People began using AI for absolutely everything — from simulated therapeutic consultations to creating business plans and astrological charts. I found myself caught in this whirlwind and participated in one of those prompt engineering courses, and I must admit, it was helpful.
Curiously, the term Artificial Intelligence is quite old. It was formally introduced by John McCarthy in 1956[6] during the Dartmouth Conference, when scientists first discussed the possibility of machines simulating human intelligence. I say formally because prior works had addressed the topic without naming it[7]. However, I like to say that AI was really "born" in 1942, when Isaac Asimov published his famous three laws of robotics[8] (sorry, but I'm a huge fan). Anyway, for many decades, AI was restricted to laboratories and large corporations due to the high computing costs and the complexity of the models. But with the advent of cloud computing, cheaper hardware, and especially the creation of generative AI models like GPT, there was a democratization of access to AI — and we are witnessing this process.
The truth is that with an internet connection and a browser, anyone can use tools that were once reserved for tech giants. There are countless online courses, including free video lectures on YouTube, covering a wide range of disciplines related to the subject. But studying takes effort, and not everyone is willing to do it.
THE "GPTIZATION" OF ACTIVITIES: ARE WE REALLY IN CONTROL?
But what caught my attention — and is the purpose of this text — is the incredible escalation of AI in our daily lives, a process we might call "GPTization". People have begun to blindly trust AI's responses, and gradually, we're handing over our decision-making power to machines. Here are some points I've observed over the past year, and how they interrelate and feed into each other.
(1) “AI EXPERTS” ARE EVERYWHERE. Social media is full of individuals who, overnight, seem to have become experts on the subject. The superficiality of the knowledge is concerning. Quick courses promise to turn anyone into a "guru," but the reality is that few people understand the deeper concepts. This can lead to…
(2) EXCESSIVE CREDULITY in tools they don't fully comprehend. Many people are starting to believe everything AI tells them as if it were the absolute truth. The practice of checking sources seems to have become obsolete. This phenomenon is dangerous because AI, though powerful, can generate incorrect or biased responses. The absence of source verification is a sign of how AI can lead us into an era of…
(3) SUPERFICIAL UNDERSTANDING; after all, everyone uses it, but few really understand how it works. Terms like "artificial intelligence" and "machine learning" are thrown around casually, but the real understanding is shallow. AI is seen as a "magic box" (okay, in part it is really a black box), and this lack of understanding can lead to misuse or unrealistic expectations about what the technology can or cannot do, leading to a consequent…
(4) FLOOD OF AI-GENERATED CONTENT, as in the case of social media, which is full of artificially created posts. This devalues content, as it becomes difficult to know what is authentic and what is not (as in cases of fake news). The result is a "digital saturation," where the value of human content is diluted amid a sea of automated texts and images, often without genuine purpose. This whole process can even cause…
(5) EXAGGERATED DEPENDENCE. There are people who no longer write even a WhatsApp message or email without first passing it through the filter of ChatGPT or similar tools. AI has become a model for all interactions. This reflects a dependence that can be harmful, as one may lose the ability to think and write independently, which increases the possibility of…
(6) FRAUD AND CHEATING. Students of all levels are using AI to do their schoolwork (let the first stone be cast by anyone who hasn't yet). New types of fraud involving these tools have already emerged, such as texts generated to deceive evaluation systems. On the flip side, we also have AI detectors whose verdicts are often unreliable. The ease with which AI generates content raises serious ethical questions, especially in educational and professional environments. Additionally, malicious individuals are using AI to generate fake texts and audio with the intent to harm people and organizations[9]. This naturally exacerbates…
(7) PRIVACY CONCERNS. Large companies are restricting the use of AI by employees, fearing the loss of confidential data. On the other hand, small businesses are abusing these tools, often without realizing the risk they are taking. While large corporations worry about security, many smaller companies are exposing themselves to enormous risks without awareness of the consequences. Finally, permeating this entire ecosystem, we have people who love a conspiracy theory trying to foment…
(8) COLLECTIVE ANXIETY, bringing about the resurgence of the belief that we are heading toward a "MATRIX" or "The Terminator"-style future, where machines take control and humanity loses its autonomy.
While these concerns may seem exaggerated, they reflect a genuine fear that AI could escape human control and transform society in unpredictable ways.
WHAT'S BEHIND THESE TECHNOLOGIES?
But what is a GAN, after all? And how do tools like DALL-E and ChatGPT work? A GAN pairs two neural networks that compete with each other to generate new data. The "generator network" creates data that mimics the real world, while the "discriminator network" tries to distinguish real data from fake. This continuous cycle yields images and other creations that seem incredibly realistic. GANs are widely used to create everything from digital artworks to deepfakes[10] and medical images[11].
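To make the adversarial idea concrete, here is a deliberately minimal sketch in plain Python (no deep learning framework). It is a toy, not a real GAN: the "data" is just numbers clustered around 4.0, the generator is a single learnable offset applied to noise, and the discriminator is a one-variable logistic classifier trained with hand-derived gradients. All numbers and names are illustrative; real GANs use deep networks on both sides.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # numerically safe logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

def sample_real():
    # "real" data: numbers clustered around 4.0 (a stand-in for real images)
    return random.gauss(4.0, 0.5)

# Generator G(z) = c + z: a single learnable offset c applied to noise z.
# Discriminator D(x) = sigmoid(w*x + b): a logistic classifier.
w, b, c = 0.0, 0.0, 0.0
lr, batch = 0.02, 16

for step in range(5000):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    gw = gb = 0.0
    for _ in range(batch):
        xr = sample_real()
        xf = c + random.gauss(0.0, 0.5)
        dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
        gw += (dr - 1.0) * xr + df * xf   # analytic cross-entropy gradients
        gb += (dr - 1.0) + df
    w -= lr * gw / batch
    b -= lr * gb / batch

    # --- generator step: push D(fake) toward 1 (non-saturating loss) ---
    gc = 0.0
    for _ in range(batch):
        xf = c + random.gauss(0.0, 0.5)
        gc += (1.0 - sigmoid(w * xf + b)) * w
    c += lr * gc / batch   # gradient ascent on log D(G(z))

print(round(c, 2))  # the generator's offset drifts toward the real mean (~4)
```

Even in this caricature, the feedback loop is visible: whenever the discriminator finds a way to tell the two apart, the generator's update exploits exactly that signal to close the gap.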
ChatGPT, on the other hand, is a language model based on transformers[12], which works differently. Instead of generating images, it was trained on vast amounts of text to predict the next word or phrase based on context. This architecture allows the model to process large volumes of data efficiently, analyzing the relationships between words across long text segments, which enables it to generate coherent responses on a wide variety of topics.
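The "predict the next word" idea can be illustrated with something far simpler than a transformer: a bigram model that counts which word follows which in a tiny, invented corpus and then generates text greedily. This captures only the statistical flavor, none of the attention machinery; the corpus below is made up for illustration.

```python
from collections import defaultdict

# A tiny invented corpus; a real model trains on billions of words.
corpus = ("the model reads text and learns which word tends to follow "
          "which word so the model can generate text that reads like "
          "the text it was trained on").split()

# Count how often each word follows each other word (a bigram table --
# a drastic simplification of the patterns a transformer learns).
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent successor of `word`, or None."""
    candidates = follows[word]
    return max(candidates, key=candidates.get) if candidates else None

# Generate by repeatedly predicting the next word from the last one.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    out.append(word)

generated = " ".join(out)
print(generated)  # -> "the model reads text and learns"
```

Note that the toy never "understands" anything: it simply continues with whatever was statistically most likely in its training data, which is also why larger models can confidently produce fluent nonsense.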
However, its responses can be biased because they reflect tendencies in the data it was trained on. Moreover, the tool will try to answer at any cost, inventing content out of thin air just to satisfy the questioner. Take a look at this exchange I created using ChatGPT 4o specifically for this article[13]:
Question: Could you please comment on how the first manned trip to Mars went, describing in one paragraph how the landing was and the time it took to return home?
ChatGPT said: The first manned mission to Mars was a monumental achievement, marked by a successful landing on the Martian surface after a seven-month journey. The spacecraft touched down in the vast plains of Elysium Planitia, chosen for its relatively flat terrain, which ensured a smooth and safe landing. Astronauts spent around 30 days on the planet, conducting experiments and exploring the environment before embarking on the challenging return trip. The journey back to Earth took another seven months, with the crew utilizing a combination of cutting-edge propulsion technology and carefully timed orbital maneuvers. Their safe return was celebrated globally as a testament to human ingenuity and perseverance.
Question: Was there any significant incident?
ChatGPT said: Yes, the mission encountered a significant incident during the Martian landing phase. About halfway through the descent, one of the landing thrusters experienced a malfunction, leading to an unplanned trajectory adjustment. The automated systems kicked in to compensate, but it caused a tense few minutes as the crew worked alongside mission control to ensure a safe landing. The spacecraft ultimately touched down safely, but slightly off the intended landing zone. This added a few kilometers to the astronauts' planned exploration routes, but they adapted quickly. The incident underscored the unpredictability of space travel and the importance of redundant systems and real-time problem-solving skills during interplanetary missions. Despite the scare, no crew members were harmed, and the mission proceeded with all objectives successfully met.
I'll leave the conclusions to the reader. The tool is amazing, but like any technology, it's good to read the instruction manual before using it. But let's be honest, who here reads instruction manuals?
CONSEQUENCES AND REFLECTIONS: WHERE ARE WE HEADING?
We are at a crucial moment in history, where AI is no longer just a science fiction tool but an active agent in our daily lives. The "GPTization" of activities means that we are increasingly transferring decisions and creativity to machines. This has its advantages but also raises serious questions: To what extent are we willing to renounce control? What does this mean for our authenticity? And how does this affect our critical thinking?
AI is a powerful and transformative tool, but we must not let it (re)define us. The future of AI depends on how we use it and, more importantly, whether we choose to maintain our role as protagonists of our own stories.
After all, will we be in control, or will we become digital puppets?
Notes and Brief Explanations:
1. Computer Vision: A field of AI that studies how machines can process and interpret images and videos.
2. Natural Language Processing (NLP): A field that teaches machines to understand and generate human language.
3. Python: A programming language commonly used in Data Engineering and AI projects.
4. The response obtained, in summary, was: "The political polarization in the U.S., fueled by hyper-partisan news, resembles pre-Hispanic societies in Latin America in the strategic use of narratives to consolidate power and divide populations. While in pre-Hispanic societies information control was in the hands of elites, creating myths to justify power, in the U.S., modern media creates 'parallel realities' that exacerbate divisions. Both societies show how control of the narrative and lack of internal cohesion can weaken society, making it vulnerable to conflicts and external interference." Unfortunately, the original text was lost.
5. What is a prompt? It's a command or instruction provided to a system, computer program, or chatbot to generate a response or perform an action. In the context of AI and tools like ChatGPT or DALL-E, the prompt is the text that a person types to guide the AI's response.
6. McCarthy, J. et al. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
7. See, for example: McCulloch, W.S., & Pitts, W. (1943). A Logical Calculus of Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5(4), 115–133; and much earlier, La Mettrie, Julien Offray de. 1748. L'Homme Machine.
8. The Three Laws of Robotics: (i) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (ii) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (iii) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Asimov, Isaac. "Runaround." In I, Robot. New York: Gnome Press, 1950. Originally published in Astounding Science Fiction, 1942.
9. To cite an example: https://www.pwc.co.uk/forensic-services/assets/impact-of-ai-on-fraud-and-scams.pdf
10. Deepfake is a technique for creating fake content, especially videos and audios, using artificial intelligence (AI) and deep learning. This technology allows modifying or generating digital media where a person's face, voice, or expressions are altered in an extremely realistic way, to the point of appearing authentic.
11. To learn more: Langr, Jakub; Bok, Vladimir. GANs in Action: Deep learning with Generative Adversarial Networks. Shelter Island: Manning Publications, 2019, among many other books and articles on the subject.
12. Transformers in AI have nothing to do with the robots from the movies! They are a neural network architecture introduced in 2017, based on an attention mechanism that allows understanding complex contexts without relying on the sequential order of data, unlike older RNNs. To learn more: Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., & Polosukhin, I. (2017). Attention is All You Need. In Advances in Neural Information Processing Systems (NIPS 2017).
For tailored consulting, we invite you to visit our website at www.awantt.pt and discover how we can transform your business.