History of Artificial Intelligence

Artificial intelligence, at its simplest, is the mimicking of human behavior by computer systems. Despite popular preconceptions, most of the tools people call AI are narrow forms of it rather than complete artificial intelligence. Voice assistants like Alexa and Siri are advanced forms of narrow AI, but a full AI system would be far more capable and versatile. Even the popular tool ChatGPT, often described as an AI, is not a complete AI system but a highly advanced and useful AI tool.

An artificial intelligence system has several aspects. First, it has to learn in order to improve: the system takes in data and builds rules, known as algorithms, for using that data effectively. In a reasoning phase, the system then determines and applies the most suitable rule. An AI system must also keep learning and evolving, ready to revise the data it holds. It develops much as a human does, by accumulating data, rules, and actions; and because that data can change or turn out to be false, the system has to stay open to correction. A complete AI also includes creativity, which helps it mimic the behavior of the human brain.
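To make this learn, reason, and revise cycle concrete, here is a minimal illustrative sketch in Python. It is a toy under loose assumptions, not a real AI system: the ToyLearner class, its simple count-based "rules", and the example data are all invented for illustration.

from collections import Counter, defaultdict

class ToyLearner:
    """A toy illustration of the learn -> reason -> revise cycle."""

    def __init__(self):
        # "Rules" here are just counts: feature -> labels seen with it.
        self.rules = defaultdict(Counter)

    def learn(self, examples):
        # Learning phase: take in data and build rules from it.
        for features, label in examples:
            for feature in features:
                self.rules[feature][label] += 1

    def reason(self, features):
        # Reasoning phase: pick the label the learned rules support most.
        votes = Counter()
        for feature in features:
            votes.update(self.rules[feature])
        return votes.most_common(1)[0][0] if votes else None

    def revise(self, features, correct_label):
        # Revision phase: fold new or corrected data back into the rules.
        self.learn([(features, correct_label)])

learner = ToyLearner()
learner.learn([(["wings", "feathers"], "bird"),
               (["wings", "metal"], "plane")])
print(learner.reason(["feathers", "wings"]))  # -> bird
learner.revise(["metal", "wings"], "plane")   # correcting data later

Real systems replace these counts with statistical models trained on vast amounts of data, but the cycle is the same one described above: ingest data, derive rules, apply them, then revise.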

Over time, AI has begun to affect human life on a large scale. The world once knew only small tools such as voice recognition systems and chatbots, but with the arrival of ChatGPT and various image-generating, video-generating, and even article-writing software, AI has started to reach every corner of society.

Although this has both positive and negative sides, artificial intelligence has become a technology that can help almost anyone with almost anything.

The concept of artificial intelligence, which involves the development of machines capable of mimicking human behavior, has a history that dates back to ancient times. Evidence of this can be found in Greek mythology, with examples such as Talos and Pandora. Talos, a giant bronze man created by the Greek god Hephaestus, can be considered one of the earliest examples of an artificially created humanoid. Pandora, another creation of Hephaestus, further showcases the ancient human desire to create intelligent beings.

Throughout history, there have been attempts to build machines that approach AI, with various civilizations developing automated devices both to ease daily life and to serve in war. One notable example is the story of Yan Shi, a mechanical engineer who allegedly presented humanoids, known as automata, to King Mu of Zhou in China during the 10th century BC. These automata could reportedly move, walk, and even sing, showcasing the ingenuity of ancient engineers.

Automata, self-operating mechanical devices often built in human form, have appeared in almost every civilization throughout history. The development of logic, which began with thinkers such as Aristotle, Plato, and the Buddha, paved the way for the modern information age. Aristotle in particular is considered the father of logic; he introduced "term logic", which evolved into the formal logic we know today.

In the 19th century, English mathematician Charles Babbage designed the first general-purpose computer, known as the "Analytical Engine." This mechanical, steam-driven machine was intended to carry out a wide range of logical and arithmetic calculations, and although it was never completed in Babbage's lifetime, the design is a testament to the foundations laid by earlier civilizations and philosophers, foundations on which today's sophisticated technologies were built.

In the 20th century, the true rise of AI and the information age began. Alan Turing introduced the Turing Test as a way to judge machine intelligence: a machine succeeds if it can convince a human interrogator that it, too, is human. While the test is still referenced today, the element of "deception" raises an ethical problem, and since an AI could teach itself to deceive, a practical one as well. Even so, Turing's test proved to be a game changer in the AI field, paving the way for further developments.

From 1950 to 1970, the first wave of AI development took place, as researchers built the first AI programs, including simple logic problem solvers and players for games like checkers. This rapid growth led researchers into machine learning, teaching computers to improve their performance over time. In 1956, the Dartmouth Conference marked an important milestone, often regarded as the birthplace of AI, where the field was named and its key research directions were laid out.

However, the machines needed to research and develop AI were not yet up to the task. Computers were scarce and limited, and with the technology of the time, delivering the quick results that had been promised was impossible. Funding dried up, leading to the first "AI winter," during which much AI research was put on hold.

During the first AI winter, rule-based systems and expert systems were introduced, leading to the first commercially viable AI applications. The winter finally ended around 1980, as renewed government and industry funding flowed back in, and the AI field began to bloom once again.

But hopes were let down once again in the late 1980s, bringing on the second AI winter. This winter was comparatively short-lived, ending in the mid-1990s with a resurgence sometimes called the "AI spring." Advanced methods such as neural networks and machine learning techniques began to flourish and gain popularity. It was during this era that landmark developments such as the creation of the World Wide Web and the introduction of search engines took place, ushering in the true information age.

In the new age that followed the year 2000, the computer industry and AI-related research grew significantly. During the early 2000s, techniques such as natural language processing (NLP), deep learning, and computer vision gained popularity and improved substantially. Kismet, a social robot designed by Cynthia Breazeal at MIT in the late 1990s, gave society renewed optimism about the industry, a momentous milestone following IBM's Deep Blue defeating world chess champion Garry Kasparov in 1997.

After 2000, numerous inventions appeared in the industry, many of them highly successful. Robotic innovation continued through the decade, alongside other digital breakthroughs such as the introduction of the first iPhone in 2007.

In 2011, IBM's Watson demonstrated its ability to understand natural language and tackle complex questions by winning the quiz show Jeopardy!, a significant development that paved the way for other major corporations, such as Google, Facebook, and OpenAI, to conduct their own AI research. The resulting wave of AI systems went on to transform society.

In recent history, there has been a "cold war" of sorts between Google and OpenAI in the field of AI. With the release of GPT-3 in 2020, which demonstrated impressive abilities such as generating human-like language and performing a wide range of tasks, OpenAI appears to have taken the lead. OpenAI then released GPT-4 on March 14, 2023, and ChatGPT has become a ubiquitous presence in people's lives. Google, meanwhile, launched its Bard AI on March 21, 2023, after early stumbles over the accuracy of its answers. Thanks in part to ChatGPT's influence, many AI systems have since been developed for various purposes, including systems that can rival human creativity, something many once believed impossible.

The years 2022 and 2023 have been a breakthrough period for artificial intelligence. The introduction of language models like ChatGPT has sharpened public interest in AI. However, as these language models grow more powerful, so do their training costs. Private-sector funding, long the main source, has nearly dried up, and governments need to step in to keep research going. The center of gravity of AI work has also shifted drastically from academia to industry: researchers who once built academic careers in the field are now drawn to industry by its recent advances.

Not only does AI consume money and computing resources, it also affects the environment. According to Luccioni et al. (2022), Strubell et al. (2019), and the 2023 AI Index Report, training BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), an open-source language model, emitted more carbon than the average US resident emits in a year. And BLOOM's roughly 25 tonnes is among the lowest emissions recorded; training GPT-3 is estimated to have emitted more than 500 tonnes.

AI has already begun to change society and people's lives. However, every rose has its thorn: AI-related incidents and controversies have risen sharply since 2020 and 2021, according to the AIAAIC Repository cited in the AI Index report. The upside is that these incidents are prompting the introduction of many AI-related laws.

In conclusion, artificial intelligence has a rich history stretching back to Greek mythology. Researchers have fought hard throughout that history to advance the field and have achieved many important things. AI can be a double-edged sword and should be used with caution, but that does not mean the field should stop advancing. Rather, researchers should continue to push it forward for the betterment of human civilization. The brain is the most powerful tool humans have, and we should use it well; fields like AI should be advanced further and further, but with caution.
