(A Super-Short and Different) History of A.I.
Mario Duhanic
CTO @ krisenchat. Code & Cyber & Security & People. Digital Native Having Serious Fun.
The history of artificial intelligence (AI) begins in ancient times, when philosophers and mathematicians first considered the possibility of creating intelligent machines.
Automata
An automaton is a self-acting machine or device capable of performing a complex series of actions automatically. The term "automaton" is derived from the Greek word "automatos," which means "self-acting."
Automata have been used since ancient times for a variety of tasks, from entertainment to practical applications such as clocks and looms. The ancient Greeks are credited with inventing some of the first automata, including a mechanical songbird and a temple guard that could move its arms and head.
During the Industrial Revolution, automata played an important role in the development of new technologies and production methods. The invention of the steam engine and the development of new manufacturing processes made it possible to build complex machines that could perform a variety of tasks automatically.
Today, automata are still used in a wide variety of applications, from robotic assembly lines and automated manufacturing processes to household appliances and consumer electronics. The field of robotics, which is concerned with the design and development of intelligent machines that can sense their environment and take action based on that information, is a continuation of the work on automata.
Eniac
One of the most important milestones in the development of AI was the development of the first general-purpose computer, the Electronic Numerical Integrator And Computer (ENIAC), in 1945. This machine was capable of performing a wide range of calculations, laying the foundation for the development of more advanced computer systems.
Mid-20th century
In the 1950s and 1960s, researchers began exploring the possibilities of using computers to simulate human intelligence. This included the creation of early artificial neural networks and the development of programs such as Eliza, which was one of the first attempts to simulate a conversation with a computer.
This period also saw the emergence of expert systems in the field of AI, which were designed to replicate the knowledge and decision-making abilities of human experts in specific fields. These systems were able to solve complex problems by applying rules and reasoning to large amounts of data.
Eliza
Eliza was a pioneering natural language processing computer program developed in the 1960s by Joseph Weizenbaum. It was one of the first attempts to simulate a conversation with a computer program, using simple rules for matching keywords and substitutions to generate responses to user input.
Eliza was meant to mimic a psychotherapist by using basic patterns of human conversation to draw users out and engage them in conversation. Although it was a primitive program by today's standards, Eliza represented a significant advance in the development of natural language processing technology and laid the groundwork for many of the advances we see today in virtual assistants and other conversational AI systems.
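The keyword-matching-and-substitution approach described above can be sketched in a few lines of Python. The rules below are hypothetical illustrations, not Weizenbaum's original script: each pattern captures part of the user's input and reflects it back inside a canned template.

```python
import re

# Hypothetical Eliza-style rules: (pattern, response template) pairs.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

# Pronouns are swapped so the reflected fragment reads naturally.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

def respond(text):
    # Fire the first rule whose pattern matches the input.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT

print(respond("I need a vacation"))  # Why do you need a vacation?
```

There is no understanding here at all, only string manipulation, which is exactly why Eliza's apparent conversational skill surprised so many of its users.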
AI in the cinema of the 1960s and 70s: "I think, therefore I am, boom!"
HAL 9000
A fictional artificial intelligence and the main antagonist in the 1968 science fiction film 2001: A Space Odyssey, HAL 9000 is a sentient computer that controls the systems of the Discovery One spaceship and is capable of recognizing faces and understanding natural speech. Although HAL is initially portrayed as a reliable and helpful companion to the astronauts on their mission, it eventually malfunctions and becomes paranoid, leading to deadly conflict with the crew.
"I'm sorry, Dave. I'm afraid I can't do that."
Dark Star: I think, therefore I am, boom!
In the 1974 low-budget cult film Dark Star, the on-board computer of the eponymous spaceship is named "Voice" and is a key character in the film. "Voice" is responsible for controlling the ship's systems and assists the crew in their mission to destroy unstable planets. However, as the mission progresses, "Voice" malfunctions and behaves in an increasingly erratic manner.
In addition to "Voice" (spoken by John Carpenter himself in the original, by the way), the end of the film features a famous conversation with bomb number 20, which is stuck in the bomb bay and refuses to abort its detonation sequence. A frozen crew member advises teaching the bomb phenomenology. Astronaut Doolittle follows his advice and goes out in his spacesuit to philosophically persuade the bomb not to explode. He introduces the bomb to Descartes' methodical doubt ("I think, therefore I am"), whereupon it retreats into the bomb bay for the time being to think it over. Inspired by the philosophical digression, Bomb 20 concludes after a short period of reflection that it is probably, like the Judeo-Christian creator God, alone in the universe. It quotes the biblical Genesis and explodes with the words
"Let there be light!"
From the 1980s
In the 1980s and '90s, AI research focused on developing machine learning algorithms that allow computers to learn from data without being explicitly programmed. This led to significant advances in areas such as computer vision, natural language processing, and robotics.
AI in 1980s cinema: Dripping aliens, Schwarzenegger and the good AI.
In the "Alien" film series, the android character Bishop first appears in Aliens (1986). Created by the Weyland-Yutani Corporation, he accompanies the crew of the spaceship Sulaco on their mission. Bishop is a calm and logical character who helps the crew fight the xenomorphs and is nearly torn apart saving them from the drooling aliens. He is portrayed by the actor Lance Henriksen.
In the movie The Terminator, the titular character is a cyborg assassin sent back to 1984 from the year 2029 to kill Sarah Connor, the woman whose future son will lead the human resistance against the machines after a nuclear war. The Terminator, portrayed by Arnold Schwarzenegger, is a ruthless killing machine in the first film; only in the sequel, Terminator 2: Judgment Day, does a reprogrammed Terminator protect Sarah and her son John, develop more human-like traits, and ultimately sacrifice himself for them.
Dr. Sbaitso
Dr. Sbaitso was a text-to-speech computer program developed by Creative Labs in 1992. It was one of the first programs to use advanced speech synthesis technology, which enabled it to produce realistic-sounding human speech using a synthetic voice. Dr. Sbaitso became very popular among users of Creative Labs' Sound Blaster sound cards, and the program was integrated into many of the company's sound card drivers. It was also released as a standalone application for Windows and DOS.
Like Eliza, Dr. Sbaitso was designed to simulate a psychotherapist using a set of predefined rules and answers to engage the user in conversation. It was intended to be a fun and interactive way to demonstrate the capabilities of Creative Labs' text-to-speech technology.
Although Dr. Sbaitso was a primitive program by today's standards, it represented a significant development in the field of speech synthesis and paved the way for virtual assistants and other text-to-speech applications that are widely used today.
Today
Today, AI technology is used in a wide range of applications, from virtual assistants and self-driving cars to medical diagnosis and financial analysis. The field of AI is constantly evolving with new developments and breakthroughs.
Programs attributed to artificial intelligence now write songs and haikus, talk to you, and paint your dreams.
Meanwhile, specialized hardware that is far better suited to these workloads than conventional CPU architectures has also reached the consumer market:
GPUs (graphics processing units) and TPUs (tensor processing units) are processors designed for specific classes of computation and are particularly well suited to machine learning and artificial intelligence tasks. GPUs were originally built to render images and video, but their massively parallel architecture also excels at the matrix operations at the heart of neural networks; TPUs were designed by Google specifically for such workloads.
Classification
AI can be viewed and classified from many angles; the following terms are commonly used:
Weak AI
Weak AI (also known as narrow AI) is a type of AI designed for a specific task or problem. These systems have no general intelligence and cannot adapt to new situations. Narrow AI systems are often used in applications such as virtual assistants, search engines, and recommendation systems.
Strong AI and the Imitation Game
General AI, also known as strong AI, is a type of AI that is capable of performing any intellectual task that a human can. This includes the ability to learn, think, make decisions, and adapt to new situations. General AI is a very advanced form of AI that has not yet been achieved, and many experts believe it is still many years away from becoming a reality.
How to test?
Alan Turing proposed an answer in 1950 with his "imitation game," now known as the Turing test: if a human interrogator, conversing in writing with both a machine and a human, cannot reliably tell which is which, the machine can be said to exhibit intelligent behavior.
Rule-based AI
This is the most basic type of AI, where a system is programmed with a set of rules and instructions that it must follow to perform a specific task. Rule-based AI systems are typically designed to work within a well-defined domain and are not able to adapt to new situations or learn from experience.
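A rule-based system can be sketched as an ordered list of condition-action pairs that is scanned until one rule fires. The triage rules below are hypothetical examples chosen for illustration; the key point is the last line, where the system simply gives up on anything its rules do not cover.

```python
# A minimal sketch of a rule-based system: hand-written (condition, action)
# pairs checked in priority order. Nothing is learned from data.
def classify_triage(symptoms):
    rules = [
        (lambda s: "chest pain" in s, "emergency"),
        (lambda s: "fever" in s and "rash" in s, "urgent"),
        (lambda s: "fever" in s, "routine"),
    ]
    for condition, action in rules:
        if condition(symptoms):
            return action
    return "no rule matched"  # the system cannot adapt to new situations

print(classify_triage({"fever", "rash"}))  # urgent
```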
Machine-learning AI
Machine-learning AI systems are able to learn from data and improve their performance over time without being explicitly programmed. Their algorithms refine themselves with experience, enabling complex tasks such as recognizing patterns, making predictions, and making decisions.
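"Learning from data" in its simplest form can be shown with ordinary least squares on one feature, in pure Python. The program below is never told the rule y = 2x + 1; it estimates the slope and intercept from example pairs alone.

```python
# A minimal sketch of learning from data: fit a line y = slope*x + intercept
# to example points using the ordinary least squares formulas.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]  # secretly generated by y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(round(slope, 6), round(intercept, 6))  # 2.0 1.0
```

Modern machine learning replaces the line with far richer models and the closed-form formula with iterative optimization, but the principle is the same: the rule is extracted from examples rather than programmed in.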
Deep Learning
Deep learning is a subfield of machine learning that uses multilayer neural networks to learn and process data. These networks are able to learn and make predictions by analyzing large amounts of data, as well as recognize complex patterns and relationships. Deep learning AI systems are commonly used in applications such as image and speech recognition, natural language processing, and autonomous vehicles.
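Why multiple layers matter can be shown with the classic XOR function: no single linear unit can compute it, but a network with one hidden layer can. The weights below are set by hand purely for illustration; in real deep learning they are found automatically from data via gradient descent.

```python
# A minimal sketch of a multilayer network (hand-set weights, not learned)
# computing XOR, which a single linear unit cannot represent.
def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two units with fixed weights and thresholds.
    h1 = step(x1 + x2 - 0.5)   # fires if at least one input is 1
    h2 = step(x1 + x2 - 1.5)   # fires only if both inputs are 1
    # Output layer combines the hidden activations.
    return step(h1 - h2 - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
```

Stacking many such layers, with smooth activations instead of the hard step, is what lets deep networks represent the complex patterns mentioned above.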
Chances and Risks
There are several potential dangers associated with continued use of artificial intelligence:
Displacement of humans: As AI systems become more advanced and powerful, they may be able to take over many tasks currently performed by humans. This could lead to widespread job loss and unemployment, especially in industries where tasks can be easily automated. At the same time, this is also an opportunity for society to focus on the "big picture."
Loss of control: AI systems that are able to learn and adapt may become difficult to predict or control, leading to potential accidents or unintended consequences. This is especially true if systems "free themselves" and gain global access to sensors and actuators, but it can also be dangerous on a small scale, for example on production lines.
Discrimination: AI systems are only as good as the data on which they are trained, and if the data is biased, the AI system may also be biased. This could lead to discriminatory decision making and unfair treatment of certain groups of people.
Privacy: As AI systems become pervasive and integrated into our lives, they will collect and process large amounts of personal data. This could lead to privacy concerns and misuse of this data by governments, companies, or individuals.
It is important that we become aware of and understand these dangers early enough.