The History Of Artificial Intelligence
(Image credit: NASA/ESA/CSA/Judy Schmidt)


To make sure we are all on the same page going forward, I thought the best way to start the educational/storytelling section of this newsletter would be with the complete(ish) history, or as complete as I could manage, even with the power of AI. As developments happen IRL, this is the document that will actually be updated, including any corrections. I, OpenAI, and Google may have made some mistakes (please do look for them). Before making this I looked for something like it, and at best I found maybe 20% of this list (mostly without references of any kind). I hope this helps someone and helps form the foundations of this newsletter and community.

Here it is, The Complete(ish) History of Artificial Intelligence:

  1. 1666: Mathematician and philosopher Gottfried Leibniz publishes Dissertatio de arte combinatoria (On the Combinatorial Art).
  2. 1755: Samuel Johnson defines intelligence in A Dictionary of the English Language.
  3. 1763: Thomas Bayes (we will be speaking about him later) develops a framework for reasoning about the probability of events.
  4. 1854: George Boole argues that logical reasoning could be performed systematically in the same manner as solving a system of equations.
  5. 1865: Richard Millar Devens describes in the Cyclopædia of Commercial and Business Anecdotes how the banker Sir Henry Furnese profited by receiving and acting upon information prior to his competitors.
  6. 1898 — At an electrical exhibition in the recently completed Madison Square Garden, Nikola Tesla demonstrates the world’s first radio-controlled vessel.
  7. 1913: Markov models are introduced by the Russian mathematician Andrey Andreyevich Markov (him too). On January 23, 1913, he summarised his findings in an address to the Imperial Academy of Sciences in St. Petersburg.
  8. 1943: Warren McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity,” considered the first published work on artificial neural networks.
  9. 1943: The perceptron concept originates in McCulloch and Pitts’s seminal work above; it would later be implemented in a machine in 1957 by Frank Rosenblatt.
  10. 1947: Statistician John W. Tukey coins the term “bit” to designate a binary digit, a unit of information stored in a computer.
  11. 1949: Donald Hebb publishes The Organization of Behavior: A Neuropsychological Theory.
  12. 1950s: In the early 1950s, the cost of leasing a computer ran up to $200,000 a month.
  13. 1950: Claude Shannon’s “Programming a Computer for Playing Chess” is the first published article on developing a chess-playing computer program.
  14. 1950: Alan Turing publishes “Computing Machinery and Intelligence,” in which he proposes “the imitation game,” which will later become known as the “Turing Test.”
  15. 1951: Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network machine.
  16. 1952: Arthur Samuel develops the first computer checkers-playing program and the first computer program to learn on its own.
  17. 1955: The term “artificial intelligence” is coined by John McCarthy in the proposal for the Dartmouth workshop, written with Marvin Minsky, Nathaniel Rochester, and Claude Shannon.
  18. 1956: The Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky, takes place and is widely regarded as the founding event of the field.
  19. 1955–56: The first AI program, called the Logic Theorist, is developed by Allen Newell, Herbert Simon, and J.C. Shaw. The Logic Theorist is able to prove theorems in symbolic logic and is considered the first AI program to demonstrate “intelligent” behavior.
  20. 1957: Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network.
  21. 1961: The first robot to be used in a manufacturing setting, called Unimate, is introduced. Unimate is able to perform tasks such as welding and assembly on the production line.
  22. 1958: The first implementation of the perceptron, the Mark I Perceptron, is a machine built at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. The Perceptron is able to learn to recognize patterns in data and classify objects based on those patterns.
  23. 1956: Ten computer scientists convened in Dartmouth, NH, for a workshop on artificial intelligence, defining the goal as “making a machine behave in ways that would be called intelligent if a human were so behaving.”
  24. 1966: ELIZA, a natural language processing program developed by Joseph Weizenbaum, is released. ELIZA is able to carry on simple conversations with users, making it one of the first examples of a “chatbot.”
  25. 1967: First experiments with semi-automated, computer-based facial recognition: Woodrow Bledsoe makes manual measurements of facial features using a RAND tablet.
  26. 1960s: Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s.
  27. 1968–1970: SHRDLU, an early natural language understanding program, is developed by Terry Winograd. SHRDLU is able to understand and respond to commands in English, making it one of the first examples of an “intelligent” assistant.
  28. 1968: Work by Hubel and Wiesel in the 1950s and 1960s showed that cat visual cortices contain neurons that individually respond to small regions of the visual field. Provided the eyes are not moving, the region of visual space within which stimuli affect the firing of a single neuron is known as its receptive field. Neighboring cells have similar and overlapping receptive fields, and receptive field size and location vary systematically across the cortex to form a complete map of visual space; the cortex in each hemisphere represents the contralateral visual field. Their 1968 paper identified two basic visual cell types in the brain.
  29. 1969: The first robotic arm with multiple degrees of freedom, called the Stanford Arm, is developed at Stanford University. The Stanford Arm is able to perform tasks such as moving objects and assembling parts.
  30. 1960: ADALINE, an early single-layer artificial neural network, is created by Dr. Bernard Widrow and his graduate student Ted Hoff at Stanford University.
  31. 1971–1972: The first expert system, called MYCIN, is developed at Stanford University. MYCIN is able to diagnose and recommend treatments for infectious diseases.
  32. 1970s: Increased accuracy in computerised facial identification using 21 facial markers.
  33. Mid-1970s: One of the first applications of HMMs is speech recognition.
  34. 1977: Scientific American publishes “Othello, a New Ancient Game,” the earliest known published reference to an Othello/Reversi program, written by N. J. D. Jacobs in BCPL.
  35. 1978: Nintendo releases the video game Computer Othello in arcades.
  36. 1978–1979: The expert system XCON (also known as R1) is developed by John McDermott at Carnegie Mellon University for Digital Equipment Corporation, to automatically configure orders for new computer systems.
  37. 1980: The Othello program The Moor (written by Mike Reeve and David Levy) won one game in a six-game match against world champion Hiroshi Inoue.
  38. 1980: The neocognitron is introduced by Kunihiko Fukushima. Inspired by the above-mentioned work of Hubel and Wiesel, the neocognitron introduced the two basic types of layers in CNNs: convolutional layers and downsampling layers.
  39. 1980: XCON/R1 goes into production use at Digital Equipment Corporation, becoming one of the first commercially successful expert systems.
  40. 1980s: AI is reignited by two sources: an expansion of the algorithmic toolkit and a boost in funding. John Hopfield and David Rumelhart popularize neural-network techniques such as back-propagation, which allow computers to learn from experience.
  41. 1982: Expert system shells, tools that separate an expert system’s reasoning engine from its domain knowledge, begin to appear, making it faster to build new expert systems.
  42. 1984: Carnegie Mellon University begins the Navlab project, one of the first efforts to build self-driving vehicles able to navigate roads without a human driver.
  43. 1985: Development of ChipTest, the chess machine that will evolve into IBM’s Deep Blue, begins at Carnegie Mellon University.
  44. 1986: An expert system for financial analysis, called DSSS, is developed; it is able to analyze financial data and provide recommendations for investments.
  45. 1987: Arthur Samuel, a pioneer of machine learning, receives the IEEE Computer Pioneer Award.
  46. 1987 (check date): The learning vector quantization (LVQ) algorithm is developed, able to classify patterns in data with high accuracy.
  47. 1987: The time delay neural network (TDNN) is introduced by Alex Waibel et al.; one of the first convolutional networks, it achieved shift invariance by combining weight sharing with backpropagation training.
  48. 1991: The eigenface method, a machine learning approach to facial recognition, is developed. It is able to recognize and classify human faces with high accuracy, paving the way for wider adoption of facial recognition technology.
  49. 1993: B. Brügmann applies Monte Carlo methods to a Go-playing program for the first time, an idea later formalised as Monte Carlo tree search (MCTS), which would eventually power programs that defeat human players at Go.
  50. 1997: Deep Blue, a chess-playing computer developed by IBM, defeats world champion Garry Kasparov in a highly publicised match.
  51. 1997: Bridge Baron, a computer program for the card game Bridge, wins the first World Computer Bridge Championship, demonstrating the capabilities of AI in strategic card games.
  52. 1997: Logistello won every game in a six-game match against Othello world champion Takeshi Murakami. Though there had not been much doubt that Othello programs were stronger than humans, it had been 17 years since the last match between a computer and a reigning world champion.
  53. 2000: The iPAQ Personal Assistant is released, able to understand and respond to voice commands in English, an early example of an “intelligent” personal assistant.
  54. 2001: The Viola–Jones object detection framework is proposed by Paul Viola and Michael Jones. It was motivated primarily by the problem of face detection, although it can be adapted to the detection of other object classes.
  55. 2005: Stanford University’s Stanley wins the DARPA Grand Challenge, a competition for autonomous vehicles.
  56. 2004: Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations on graphics processing units (GPUs). In 2004, K. S. Oh and K. Jung showed that standard neural networks can be greatly accelerated on GPUs, with an implementation 20 times faster than an equivalent one on a CPU. In 2005, another paper also emphasised the value of GPGPU for machine learning.
  57. 2006: Google Translate is released, using statistical machine translation to translate text between a wide range of languages, making it a widely used tool for communication and language learning.
  58. 2007: The Polaris poker bot is developed, able to compete with top human players at heads-up limit Texas hold ’em, demonstrating the capabilities of AI in strategic games involving incomplete information.
  59. 2007: Checkers is solved by Jonathan Schaeffer’s team, whose program Chinook had earlier defeated the world’s best human players; perfect play by both sides leads to a draw.
  60. 2011: IBM’s Watson system is used to assist with medical diagnoses and treatment recommendations, demonstrating the potential for AI to be used in healthcare.
  61. 2011: IBM’s Watson defeats two former Jeopardy! champions in a televised match, demonstrating the capabilities of natural language processing and machine learning.
  62. 2011: Othello programs such as Saio, Edax, and Cyrano became much faster than Logistello and other earlier programs.
  63. 2015: DeepDream, developed by Google, generates dream-like images by amplifying the patterns a neural network has learned, leading to a wide range of visual art and media created with AI.
  64. 2017: OpenAI’s Dota 2 bot defeats top professional players in 1v1 matches, demonstrating the capabilities of AI in real-time competitive games.
  65. 2017: AlphaGo Zero, a modified version of AlphaGo developed by DeepMind, defeats the original AlphaGo program in a 100-game match. AlphaGo Zero is able to learn and improve its skills without being given any human data.
  66. 2016: DeepMind’s AlphaGo system defeats Lee Sedol, the world champion Go player, in a five-game match. The victory is considered a major milestone in the field of AI, as Go is considered a much more complex game than chess.
  67. 2017: Google’s DeepMind develops an AI system called AlphaZero that is able to learn and improve its skills in a variety of games, including chess, Go, and shogi, without being given any human data.
  68. 2018: Google’s AutoML system creates a machine learning model that outperforms models created by human experts in image classification tasks.
  69. 2019: OpenAI’s language model GPT-2 is released, capable of generating human-like text.
  70. 2020: OpenAI’s language model GPT-3 is released, becoming the largest language model at the time and capable of performing a wide range of language tasks.
  71. 2019: AlphaStar, developed by DeepMind, reaches Grandmaster level at the strategy game StarCraft II, defeating human players and demonstrating the capabilities of AI in real-time strategy games.
  72. 2019: Jukedeck, an early machine learning system for generating original music tracks in a variety of genres, demonstrates the potential for AI to be used in the music industry.
  73. 2020: Google’s DeepMind develops a machine learning system called AlphaFold that is able to predict protein structures with high accuracy, a significant advancement in the field of bioinformatics.
  74. 2020: AI voice-synthesis systems generate realistic-sounding audio of human voices, leading to concerns about the potential for AI to be used for malicious purposes such as deepfake audio.
  75. 2021: OpenAI’s DALL-E program is released, capable of generating original images from text descriptions.
  76. 2022: OpenAI’s Point-E generates 3D point clouds from text prompts, reported to be one to two orders of magnitude faster than competing text-to-3D methods.
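A quick aside on the perceptron, which shows up three times in the timeline (1943, 1957–58): its learning rule is simple enough to fit in a few lines. The sketch below is my own minimal illustration in Python, not any historical source code; the function names and the AND-gate training data are mine.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single-layer perceptron with Rosenblatt's learning rule.

    samples: list of (inputs, target) pairs, with target in {0, 1}.
    Returns the learned (weights, bias).
    """
    n_inputs = len(samples[0][0])
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire if the weighted sum clears the threshold.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y
            # Rosenblatt's rule: nudge each weight in proportion to its input
            # and the error (no update when the prediction is already right).
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Classify input x with the learned weights and bias."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The AND function is linearly separable, so a perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on correct weights; a non-separable target like XOR never would, which is exactly the limitation Minsky and Papert highlighted in their 1969 book Perceptrons.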

Like I said, this is a work in progress; I hope I can credit someone else soon ;)
