The past informs the present
"History of AI" by OpenAI's DALL-E 2

How we got here (history)

A few historical notes on research into artificial intelligence, and through them an illustration of accelerating returns that may help put today's conversation about where we are going into clearer context...

1956 is generally considered the starting date for what we think of as artificial intelligence. That year John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon, and others held "The Dartmouth Summer Research Project on Artificial Intelligence," and over eight weeks explored many of the topics that would prove critical to research over the following decades, including symbolic methods, systems focused on limited domains (early expert systems), and deductive versus inductive systems.

1960-1990 This period was dominated by enthusiasm for expert systems, in which specific domain knowledge was encoded with the objective of making decisions as a human expert would. A key innovation was the inference engine, which would take the encoded knowledge as a starting point and "infer" new facts and rules.
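To make the idea concrete, here is a minimal sketch (mine, not from the original article) of the forward-chaining style of inference such engines performed; the rules and facts are invented purely for illustration.

```python
# Minimal forward-chaining inference sketch; rules and facts are invented for illustration.

# Each rule maps a set of premise facts to a single conclusion fact.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Apply rules whose premises are satisfied until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # "infer" a new fact from the encoded knowledge
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
# -> the inferred set includes 'possible_flu' and 'refer_to_doctor'
```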

1990s While expert systems were successful in some circumstances, it became clear that manually building all of human knowledge into a system was impractical, and a period widely known as the AI winter began, in which research funding (and by extension research) declined. By the end of the 1990s a variety of factors had re-energized the research community, including increasing data and computation, which enabled new approaches.

"...there's this stupid myth out there that AI has failed, but AI is around you every second of the day." -Rodney Brooks 2002

2000-2020 A series of innovations took AI research in new directions in the first two decades of this century, principally machine learning and deep learning. In 2012 Geoffrey Hinton and his students published a breakthrough paper on deep learning for image recognition, and in 2017 a group of Google researchers published a paper on the transformer architecture titled "Attention Is All You Need." The primary advances of this period, addressing the central challenge of the expert systems era, were in how AI systems would be trained: automating and scaling the acquisition of knowledge and inference.
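As a rough illustration of the transformer paper's core operation, here is a minimal sketch of scaled dot-product attention only (not the full architecture); the shapes and numbers below are assumptions chosen for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key, the scores are softmax-normalized, and the
    weights mix the values. Shapes here are (sequence_length, model_dim);
    real transformers add learned projections, multiple heads, and masking."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted sum of values

# Toy example: 4 tokens with 8-dimensional representations (invented data).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
```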

2020 While several versions of GPT and other large language models (LLMs) were released following the 2017 Google paper, the release of GPT-3 by OpenAI in 2020 was a major advance in capability. A series of breakthroughs in the last three years has demonstrated how scaling such systems produces "emergent behavior," solving problems across many different domains - image, sound, language, code, and more. A new era of symbolic manipulation with a range of powerful tools is now advancing the field at a rapid pace.

Looking back (and squinting a bit) we can see a pattern of accelerating returns:

40 years (1960-2000) primarily focused on expert systems

20 years (2000-2020) machine learning and deep learning

10 years (2020-2030?) transformer, diffusion, large language models...

So should we expect a major new shift in research by 2030?

Ankit Singh

Project Manager | GenAI, Traditional AI, & LLMs Training 《DON'T UNDERESTIMATE MANIFESTATION》

1y

Thanks for sharing this, we have come a long way.

James M. Spitze

Chairman at SCC Sequoia

1y

Interesting. Maybe even accurate. I see little in the technology THAT IS DANGEROUS. It is the manner in which we humans CHOOSE (notice that word) to use AI technology that can make it dangerous. Hercule Poirot would probably observe "Strychnine isn't dangerous. It's the person who puts it in my tea that is." I believe Vint Cerf (the father of the Internet) would agree that we need some "guardrails" ... some well understood guidelines. Bad guys will ignore them but the rest of us (I'd guess 95% of us) will observe them. It will be quite interesting to see the creative impact of AI on painting, sculpture, and fiction writing in the next 5-10 years. Seems to me that a Hemingway PLUS AI might come up with something even deeper than The Old Man and The Sea. Cheers!
