The past informs the present
How we got here (history)
A few historical notes on artificial intelligence research, and through them an illustration of accelerating returns, which may help put today's conversation about where we are going into clearer context...
1956 is generally considered the starting point for what we think of as artificial intelligence. That year John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon, and others held "The Dartmouth Summer Research Project on Artificial Intelligence" and over eight weeks explored many of the topics that would be central to research over the next few decades, including symbolic methods, systems focused on limited domains (early expert systems), and deductive versus inductive systems.
1960-1990 was dominated by enthusiasm for expert systems, in which specific domain knowledge was encoded with the objective of making decisions as a human expert would. A key innovation was the inference engine, which used the encoded knowledge as a starting point and "inferred" new facts and rules.
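To make that mechanism concrete, here is a minimal sketch of a forward-chaining inference engine in Python. The facts and rules are hypothetical, and real expert systems of the era (often built in Lisp or Prolog) were far more elaborate; this only illustrates the core loop of deriving new conclusions from encoded knowledge.

```python
# Toy forward-chaining inference engine, in the spirit of the expert
# systems described above. Rules and facts here are hypothetical
# examples, not taken from any historical system.

# Each rule pairs a set of premises with a single conclusion.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts, rules):
    """Repeatedly apply rules, adding each conclusion as a new fact,
    until no rule produces anything new (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}, RULES))
# Output includes the inferred facts 'possible_flu' and 'refer_to_doctor'.
```

The impracticality noted below follows directly from this design: every rule had to be written by hand, so coverage grew only as fast as human knowledge engineers could encode it.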
1990s While expert systems succeeded in some circumstances, it became clear that manually encoding all of human knowledge into a system was impractical, and a period widely known as the AI winter began, during which research funding (and by extension research) declined. By the end of the 1990s a variety of factors, including growing data and computation that enabled new approaches, had re-energized the research community.
"...there's this stupid myth out there that AI has failed, but AI is around you every second of the day." -Rodney Brooks 2002
2000-2020 A series of innovations, principally machine learning and deep learning, took AI research in new directions in the first two decades of this century. In 2012 Geoffrey Hinton and his students published a breakthrough paper on deep learning for image recognition, and in 2017 a group of Google researchers published a paper on the transformer architecture titled "Attention Is All You Need." The primary advances of this period, measured against the challenges of the expert systems era, were in how AI systems are trained: automating and scaling the acquisition of knowledge and inference.
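For readers curious about the mechanics behind that 2017 paper, here is a minimal sketch of its central operation, scaled dot-product attention. The array sizes and random data are illustrative assumptions, not values from the paper.

```python
# Minimal scaled dot-product attention, the core operation of the
# transformer architecture from "Attention Is All You Need" (2017).
import numpy as np

def attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8 (illustrative)
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))   # one value vector per key
print(attention(Q, K, V).shape)  # (4, 8)
```

Because this operation is just matrix arithmetic over the training data itself, it scales with data and compute rather than with hand-written rules, which is the shift from the expert systems era described above.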
2020 While several versions of GPT and other large language models (LLMs) were released following the 2017 Google paper, OpenAI's release of GPT-3 in 2020 was a major advance in capability. A series of breakthroughs over the last three years has demonstrated how scaling such systems produces "emergent behavior": solving problems across many different domains (image, sound, language, code, and more). A new era of symbolic manipulation with a range of powerful tools is now advancing the field at a rapid pace.
Looking back (and squinting a bit), we can see a pattern of accelerating returns:
40 years (1960-2000) primarily focused on expert systems
20 years (2000-2020) machine learning and deep learning
10 years (2020-2030?) transformer, diffusion, large language models...
So should we expect a major new shift in research by 2030?