A Brief History of Artificial Intelligence

AI Part 2 – Tracing the Evolution

The journey of AI is a fascinating tale of human ingenuity, marked by groundbreaking research, technological advancements, and profound societal impacts. Join us as we briefly explore the evolution of AI, uncovering some key developments and challenges that have shaped this dynamic field over the decades.

The 1950s and Alan Turing

Many consider the beginnings of AI to date from the mid-20th century and Alan Turing, a visionary whose groundbreaking thinking laid the foundations of artificial intelligence. While playing an indispensable role in the development of modern computers, he also proposed a method to determine whether a machine’s intelligence could be indistinguishable from a human’s. This came to be known as the Turing Test. Though only a theoretical framework, the question of whether machines could think helped conceptualize the idea of artificial intelligence before the term was even coined.

1956: Dartmouth Summer Research Project on Artificial Intelligence

Soon after Alan Turing proposed his framework, John McCarthy and Marvin Minsky organized the famous Dartmouth Conference in the summer of 1956. The proposal for this conference was a discussion about thinking machines, and it is now considered the founding event of artificial intelligence as a field of study. McCarthy invited many brilliant minds, including Allen Newell, Cliff Shaw, and Herbert Simon, who together programmed the Logic Theorist.

The Logic Theorist was the first computer program engineered to perform automated reasoning, and it is often dubbed the first artificial intelligence program. This was a turning point for artificial intelligence, as the concept changed from an idea into a feasible reality. The Dartmouth Conference is considered a seminal event in the history of AI because it set the stage for future research and development in areas like machine learning and natural language processing.

1960s – 1970s: Formative Years

In the wake of the Dartmouth Conference, there was a surge of interest and research in artificial intelligence. A notable example is the establishment of AI studies at the Massachusetts Institute of Technology (MIT) by McCarthy and Minsky. This lab became a hotbed of innovation and development in the field of artificial intelligence.

One early demonstration of such applications was by Joseph Weizenbaum, who created ELIZA in 1966. ELIZA, an early natural language processing program and one of the first chatbots, paved the way for applications like Siri and Alexa. Another significant development was the creation of the programming language LISP by McCarthy, officially released in 1960. LISP became a favored language for programming artificial intelligence.

Research was not limited to MIT as other important institutions, such as Stanford, Carnegie Mellon, and Edinburgh, were also making significant advancements in AI research. The advocacy and research efforts from these institutions incentivized the Defense Advanced Research Projects Agency (DARPA), a US government agency, to fund AI research, mirroring similar efforts by nations worldwide.

1970s – 1990s: The AI Winter

As government agencies' patience wore thin, especially DARPA's with the slow progress of its Speech Understanding Research program, funding began to dry up.

A lack of funding leads to a lack of interest, and overall progress in artificial intelligence research slowed to a crawl. During this period, public sentiment toward AI was marked by skepticism and disappointment. A pivotal moment in the decline of AI enthusiasm was the publication of the 1973 Lighthill Report in the UK. The report, authored by Sir James Lighthill, criticized the progress of AI research, arguing that AI's achievements were neither practical nor applicable as working systems.

Though the 1980s saw an AI revival driven by the development of expert systems, the resurgence quickly faded as these systems proved brittle and expensive. Many of the problems stemmed from the computers of the time: researchers often blamed the lack of memory and processing power for holding applications back from doing anything truly useful.

1990s – Present: Revival of AI

All progress did not stop, however. In 1985, Carnegie Mellon University began work on a chess-playing system called ChipTest. The project then moved to IBM, where it was renamed Deep Thought and later Deep Blue. In 1996, Deep Blue lost to world champion Garry Kasparov four games to two, but in a 1997 rematch Deep Blue beat Kasparov, winning two games and drawing three. The event became a pivotal moment, demonstrating the advancement of computer processing and the potential for artificial intelligence to surpass human intelligence.

AI was now headed into people's homes. Siri, a virtual assistant spun out of the Stanford Research Institute's Artificial Intelligence Center, was acquired by Apple in 2010 and released with the iPhone 4S in 2011. Its ability to assist with everyday tasks showcased the advancement of natural language processing and machine learning. Alexa and other assistants followed.

A further step came in 2022 with the release of ChatGPT, a chatbot built on the GPT-3.5 large language model that emulates human conversation through text. Generative AI has since taken off as a field, popularizing artificial intelligence capabilities through deep learning and broad access to big data.

These events renewed interest in artificial intelligence, leading to its current revival. With advancing computing capabilities and the capture of, and access to, incredible amounts of data, artificial intelligence has rapidly evolved to become a part of everyday life.

Having traced how far the field has come during its brief history, our next post will take a look at the current state of artificial intelligence.