ChatGPT: Is it the New Fire?
In the second half of the 20th century, we got so used to the television as a big piece of furniture in the home that we couldn't imagine everyone one day carrying a small screen in their pocket with instant access to any episode of their favorite show. Similarly, over the last two and a half decades, we've become accustomed to using search engines on a daily basis: (1) type in a few keywords, (2) get a list of potentially useful links, (3) open the first few links, (4) skim them to find the information you're looking for. With a bit of luck, this process takes a few minutes.
Now imagine the following search process: you type in a question or a few keywords, and the engine responds with precise information written in coherent, human-readable text.
ChatGPT is a well-known language model that has begun to make Google's search method obsolete - despite the progress the latter has made over the last decade by adding voice and image search. In addition, ChatGPT has gained popularity for its ability to automate tedious tasks and generate creative content. For example, you can ask it: "Write a nice paragraph about climate change and economic growth in a way that a child can understand," and it will produce a well-written paragraph in a matter of seconds, not copied from anywhere, but completely generated. And if you're not satisfied with the result, it can rephrase its answer.
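For readers who want to try this kind of request programmatically rather than through the chat window, here is a minimal sketch using the official openai Python SDK. The model name is an assumption chosen for illustration, and the prompt is simply the example from the paragraph above.

```python
# Minimal sketch: asking a chat model to generate an explanatory paragraph.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name is an illustrative assumption, not a requirement.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any available chat model works
    messages=[
        {
            "role": "user",
            "content": (
                "Write a nice paragraph about climate change and economic "
                "growth in a way that a child can understand."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Asking for a rephrasing is just a matter of appending the model's reply and the new request to the messages list and calling the API again.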
It's clear that this technology is already revolutionizing industries and creating endless opportunities for individuals and businesses. This is reflected by market leaders like Microsoft investing billions of dollars in OpenAI - the company behind ChatGPT. But it's also raising concerns: people worry that the increasing use of automation in areas such as blogging, customer support and transcription - and even in more intellectually demanding fields such as data analysis and research - will make the skills of the people doing these jobs obsolete.
In this article, we explore the history of search, how the AI-powered language model ChatGPT is revolutionizing the way we find information, the concerns its rise has raised about the increasing use of automation in various fields, and how it might help solve some global issues.
A Brief History of Search
Since 1998, the year Google was born, we have become accustomed to typing in keywords and receiving a list of results. In just a few minutes, we find relevant information. Searching has never been easier. Yet throughout history, people have always been in search of information. From antiquity until the mid-20th century, finding information was largely the preserve of a political and intellectual elite and relied mainly on oral transmission, long journeys - scholars had to travel to meet and share knowledge - and other primitive means of communication. The idea of collecting large numbers of books in one place - the library, one of the earliest known examples of which was founded in Alexandria around 300 BC - was a breakthrough that accelerated the search for information. Even so, finding what you were looking for could still take days or, at best, several hours.
Much later, in modern times - specifically in the 1960s and 1970s - the digitization of data made it possible to create digital databases that allowed people to search for articles and other materials using computers. In the 1990s, the first search engines accelerated the whole process; they were later improved with semantics, which allowed them to "understand" the meaning behind users' queries and return more relevant results. Search time was reduced from days to minutes. And today, with ChatGPT - or rather the technology behind it - this kind of search has become possible in a matter of seconds.
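To make the jump from keyword matching to "understanding" a query a little more concrete, here is a small sketch of semantic search: documents are ranked by meaning using text embeddings and cosine similarity. The sentence-transformers model name and the toy documents are assumptions for illustration only.

```python
# Sketch of semantic search: rank documents by meaning rather than exact keywords.
# Assumes the `sentence-transformers` and `numpy` packages are installed;
# the model name and documents are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "The Library of Alexandria was founded around 300 BC.",
    "Search engines crawl and index billions of web pages.",
    "Large language models generate human-readable answers.",
]
query = "Where was one of the first libraries built?"

doc_vecs = model.encode(documents)
query_vec = model.encode(query)

# Cosine similarity between the query and every document.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)

best = int(np.argmax(scores))
print(documents[best])  # expected: the sentence about the Library of Alexandria
```

Note that the query shares almost no keywords with the best-matching document; the overlap is in meaning, which is exactly what a purely lexical engine would miss.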
These milestones have resulted in a more efficient, accurate and personalized search experience. There's no denying that these advances have had a significant impact on our productivity and ability to learn. However, there are also fears about the potential dangers of relying too heavily on these technologies and blindly following the flow of this progress.
Fears and Dangers
The difference between a flame and an uncontrolled fire is analogous to the difference between fear and danger. The presence of a flame can make us feel uncomfortable, but it doesn't necessarily mean that it will cause harm. An uncontrolled fire, on the other hand, is a danger; it's a specific and real threat that has the potential to cause harm. Similarly, fears about ChatGPT and other AI technologies may represent a general feeling of unease, but that doesn't mean these technologies will necessarily cause harm. However, ethical questions about manipulation and legal questions about privacy are real dangers that should be considered when implementing or using these technologies.
Rejecting the new, accepting it slowly and then worshiping it is not a modern phenomenon. The list of inventions that weren't welcomed at first and later became indispensable could easily fill an encyclopaedia: the Internet, which is home to new forms of crime and harassment; the printing press, which also raised concerns about misinformation and propaganda; the wheel, which brought with it the potential for accidents and injuries; the discovery of fire - or rather the invention of methods to make it. All of these are powerful technologies used for both good and bad; and the list could go back as far as the early use of bones as tools - creatively depicted in the opening scene of Kubrick's masterpiece "2001: A Space Odyssey". However, this kind of criticism is beneficial, as it allows individuals to determine how a new technology should be used for maximum benefit.
Technologies are often quickly adopted by society, but the process of adapting our systems to them can take longer. An example of this is the phenomenon of social media addiction, the impact of which on the generations that have grown up with this technology is still being studied and understood. As society continues to evolve and adapt to new technologies, it's important to consider the potential long-term effects and proactively address any negative impacts that may arise.
A more concrete risk associated with using language models such as ChatGPT for content creation is the potential for plagiarism. For example, one could feed ChatGPT someone else's work or idea and ask it to rephrase it without proper attribution - a concern that has been raised since the early days of the internet, particularly after the launch of Wikipedia in the early 2000s. As a result, an ecosystem of plagiarism detection tools and methods has emerged. However, with the advent of AI-driven content generation and paraphrasing, these anti-plagiarism tools may become obsolete.
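As a rough illustration of why literal matching struggles once text has been rephrased, here is a toy sketch comparing word-trigram overlap between an original sentence, a near copy, and a paraphrase. Real plagiarism detectors are far more sophisticated, so treat this only as a simplified example of the underlying problem.

```python
# Toy sketch: exact word-trigram overlap, a crude proxy for classic copy
# detection. A paraphrase preserves the meaning but shares few exact trigrams,
# which is why purely literal matching struggles with AI-rephrased text.
def trigrams(text: str) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap(a: str, b: str) -> float:
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / max(len(ta), 1)

original = "the quick brown fox jumps over the lazy dog near the river"
copied = "the quick brown fox jumps over the lazy dog near the river bank"
paraphrased = "a fast dark fox leaps above a sleepy dog close to the water"

print(f"copy vs. original:       {overlap(original, copied):.2f}")       # close to 1
print(f"paraphrase vs. original: {overlap(original, paraphrased):.2f}")  # close to 0
```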
Another concern is the potential for a lack of critical thinking and limited creativity when relying too heavily on language models. If people are not actively involved in the writing process, they may not develop their own critical thinking and problem-solving skills, which can be detrimental to their personal and professional growth. In addition, too much reliance on AI can lead to job losses, as many tasks can be automated. While AI enthusiasts prefer to speak of job displacement, the reality is that people cannot switch to new activities overnight and need time to acquire the required skills.
It's important to consider these risks and find ways to mitigate them - for example, by promoting transparency and the ethical use of AI, encouraging active engagement and critical thinking, and striking a balance between AI assistance and human input.
It's Not Magic
Since its inception two centuries ago, modern education has focused on teaching students how to read, write and apply formulas. But over time, the emphasis has shifted. Today, it's crucial to teach students how to think critically and how to ask better, more complex questions with the help of AI. AI should be used the way an artist uses their tools: as an instrument for unleashing creativity.
In production and business processes, using ChatGPT & Co. can lead to a more efficient use of resources - mainly time, but also materials - by analyzing data, recognizing patterns and making predictions that can optimize processes and reduce waste.
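As a hedged sketch of what "analyzing data and recognizing patterns" could look like in practice, here is one way to hand a small production log to a chat model and ask it to flag anomalies. The data, column names and model name are all invented for illustration; in a real setting, sensitive data would typically be aggregated or anonymized before being sent to an external service.

```python
# Sketch: asking a chat model to flag anomalies in a small, made-up production
# log. The CSV content, model name and prompt are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

daily_log = """day,units_produced,scrap_rate
Mon,1040,0.021
Tue,1015,0.019
Wed,390,0.087
Thu,1050,0.020
Fri,1032,0.022"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative assumption
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a week of production data:\n"
                f"{daily_log}\n"
                "Point out any unusual days and suggest what to investigate."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```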
In a globalized world that is still full of inequality of opportunity, AI can help identify and mitigate bias in decisions and predictions, and promote fairness and diversity through the use of rich training data, fairness-conscious machine learning techniques, and the provision of explainable decisions. Language models like ChatGPT can bridge language barriers and allow non-native English speakers to share their ideas on a global scale. This can help to diversify perspectives and reduce the dominance of a single language or group of speakers. This article, which took around 8 hours to complete, is a prime example of this. It was written with the assistance of ChatGPT and DeepL. These models can also play a significant role in bridging the gap between technological progress and people with limited access to education.
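The "fairness-conscious machine learning techniques" mentioned above can start with something as simple as measuring the gap in positive-decision rates between groups, often called the demographic parity difference. The decisions and group labels below are invented purely for illustration.

```python
# Minimal sketch: demographic parity difference, one simple bias check.
# The decisions and group labels are invented for illustration only.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = approved, 0 = rejected
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(group: str) -> float:
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

gap = abs(positive_rate("A") - positive_rate("B"))
print(f"approval rate A: {positive_rate('A'):.2f}")
print(f"approval rate B: {positive_rate('B'):.2f}")
print(f"demographic parity difference: {gap:.2f}")  # closer to 0 is more balanced
```

A large gap does not prove discrimination on its own, but it is a signal to look more closely at the data and the model before acting on its predictions.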
AI, such as ChatGPT, is not magic. It should be seen as a powerful tool that can help solve certain problems when used in conjunction with human expertise and decision-making. While ChatGPT has the ability to generate human-like text, it still requires human oversight and editing to ensure that the output is accurate and appropriate for the intended audience. It is important to remember that AI is not a replacement for human intelligence, but rather a valuable tool to augment it.
The world is changing fast, and the way we access information has changed with it. With the advent of search engines like Google, terms like ranking, crawling and SEO have become commonplace, and we are constantly adapting to new concepts. The new world powered by AI is already here, but our perception is taking time to catch up - or perhaps it is afraid of catching up. Just as humans feared fire when they first discovered it, and over thousands of years learned to master it - casting magnificent sculptures in bronze, copper and other metals - AI is the new fire that will help us shape a more sustainable future.
Your feedback is much appreciated!