Back to the Future: How Generative AI Revives GOFAI Paradigms with Recent Research
Introduction
In the current age of AI innovation, the spotlight is often fixed on groundbreaking advancements. Yet, it is crucial to remember that today's 'novelties' are deeply rooted in foundational work from decades past. Intended primarily for AI researchers, yet equally informative for business leaders, this article delves into how old AI concepts often resurface and thrive when technological advancements come into play. Take neural networks, for instance—an idea originating in the 1950s that only gained monumental success after the turn of the millennium. Similarly, with the arrival of Generative AI, we find ourselves at another pivotal juncture, prompting us to re-examine historical AI paradigms, especially in knowledge representation.
Early Ideas and Initial Implementation
A Journey Back in Time: The 1940s and 50s
Before AI became a mainstream term, and long before neural networks powered complex systems, the concept of creating a 'brain-like' machine was already taking shape. The late 1940s and the 1950s were a time of fascination with emulating human cognition, fueled by a blend of neuroscience, psychology, and computer science. This intersection of disciplines laid the groundwork for the first experiments in neural networking.
Marvin Minsky: The Forgotten Pioneer
Enter Marvin Minsky, a name that often resonates more with the era of Good Old Fashioned AI (GOFAI) rather than the modern neural network. Yet, in 1951, Minsky developed what could be considered the world's first randomly wired neural network learning machine. This machine was not just a rudimentary assembly of circuits; it was an audacious attempt to emulate aspects of human thought processes.
Why Minsky's Work Was Groundbreaking
In the context of our current AI renaissance, it's essential to recognize these initial forays, not just as history lessons but as foundational stones upon which our contemporary understanding is built.
The Call for A Paradigm Shift—GOFAI and Modern Research
Limited Time, Unlimited Potential
As a seasoned AI researcher and CTO of Okation.ai, my breadth of experience has allowed me to recognize valuable yet overlooked concepts in the AI continuum. Despite time constraints, I strive to maintain a dual focus: one foot in the practical applications of Generative AI that deliver real-world value, and the other in keeping abreast of emerging research trends. This article serves as a catalyst for collective, deeper engagement with the legacy of Good Old Fashioned AI (GOFAI). It's not merely a nostalgic journey but a pointed reevaluation aimed at addressing today's AI challenges, especially within the domain of Generative AI.
Shared Visions in the AI Community
In the field of AI, it's quite stimulating when you find your line of thinking echoed by other esteemed researchers. While I advocate for revisiting the GOFAI theories, I find myself in the good company of those who are paving the way in similar directions. Thomas G. Dietterich, an emeritus professor of computer science at Oregon State University, is one such advocate.
"Dissociating Language and Thought from Large Language Models: A Cognitive Perspective"
Although Dietterich did not author this paper, he strongly advocates for the insights it presents. The paper dissects the competencies of Large Language Models (LLMs) into two distinct categories: formal linguistic competence (command of the rules and statistical patterns of language) and functional linguistic competence (using language to reason about and act in the world).
Why This Matters
What's wrong with LLMs and what we should be building instead
According to a keynote by Thomas G. Dietterich titled "What's wrong with LLMs and what we should be building instead" (the YouTube link is available in the reference section), a significant rethinking is required in how we approach Large Language Models.
Thomas G. Dietterich's Advocacy
According to Dietterich and the paper's authors, the problem with current Large Language Models (LLMs) is the entanglement of multiple functions, most notably linguistic competence and factual world knowledge. These are combined into a single component within LLMs, making it hard to update or isolate specific types of knowledge.
The Need for Episodic Memory and Situation Models
Another crucial gap in current LLMs is the absence of episodic memory and situation models. These models are essential for understanding narratives and sequences in real-world scenarios. The lack of such features in LLMs limits their utility and real-world applicability.
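To make the gap concrete, here is a minimal sketch of what an external episodic memory bolted onto an LLM pipeline might look like. This is purely my own illustration under simple assumptions, not a design taken from the paper or the keynote: events are recorded with timestamps, and the most recent ones matching a topic are recalled to give the model narrative context it cannot maintain on its own.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Episode:
    """A single remembered event: when it happened and what happened."""
    timestamp: datetime
    description: str


@dataclass
class EpisodicMemory:
    """Toy episodic store: append events, recall the ones that mention a topic."""
    episodes: List[Episode] = field(default_factory=list)

    def record(self, description: str) -> None:
        self.episodes.append(Episode(datetime.now(), description))

    def recall(self, topic: str, limit: int = 3) -> List[Episode]:
        # Naive keyword match; a real system might use embeddings or a situation model.
        matches = [e for e in self.episodes if topic.lower() in e.description.lower()]
        return sorted(matches, key=lambda e: e.timestamp, reverse=True)[:limit]


memory = EpisodicMemory()
memory.record("Customer #42 reported a login failure on the mobile app.")
memory.record("Customer #42 confirmed the issue was fixed after the patch.")
for episode in memory.recall("customer #42"):
    print(episode.timestamp.isoformat(), "-", episode.description)
```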
Prefrontal Cortex Functions
The paper also suggests that large language models need the equivalent of a "prefrontal cortex," drawing an analogy with the executive functions of the human brain, such as planning, reasoning, and decision-making.
System One and System Two
The paper outlines the distinction between System One (cognitive "muscle memory") and System Two (deliberate reasoning and decision-making). Current LLMs predominantly operate at the System One level and lack System Two capabilities, quite apart from the cost of fine-tuning them on new knowledge and the known problems with retrieval-augmented generation (RAG) systems.
Way Forward
Dietterich suggests that the way forward lies in more modular architectures: decoupling linguistic competence from factual world knowledge, keeping that knowledge in external components that can be inspected, corrected, and updated without retraining, and adding the missing pieces discussed above, such as episodic memory and explicit reasoning. By following this roadmap, we can potentially overcome most of the limitations currently faced by Large Language Models, as the sketch below illustrates.
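As a rough illustration of that separation, the sketch below keeps factual knowledge in a store that can be corrected at any time, while a separate language component only verbalizes whatever facts it is handed. The class names and the toy fact are my own assumptions for the example, not an architecture spelled out in the keynote.

```python
from typing import Dict


class KnowledgeStore:
    """Updatable world knowledge, kept outside the language model."""
    def __init__(self) -> None:
        self._facts: Dict[str, Dict[str, str]] = {}

    def update(self, entity: str, attribute: str, value: str) -> None:
        self._facts.setdefault(entity, {})[attribute] = value

    def lookup(self, entity: str) -> Dict[str, str]:
        return self._facts.get(entity, {})


class LanguageModule:
    """Linguistic competence only: turns structured facts into fluent text."""
    def verbalize(self, entity: str, facts: Dict[str, str]) -> str:
        details = ", ".join(f"its {k} is {v}" for k, v in facts.items())
        return f"Regarding {entity}: {details}."


store = KnowledgeStore()
store.update("Pluto", "classification", "dwarf planet")  # facts can be corrected at any time
print(LanguageModule().verbalize("Pluto", store.lookup("Pluto")))
```

The point of the design is that updating a fact is a data operation on the store; the language component never has to be retrained.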
Introduction to the New Wave of GOFAI with Knowledge Graphs and LLM
Knowledge graphs serve as a foundational structure for holding information in an interconnected manner. They go beyond mere data points to incorporate relationships, offering a nuanced and semantic understanding of the information. For example, they can elucidate the relationship between a "disease" and its "symptoms" or between a "company" and its "employees."
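For concreteness, a knowledge graph can be boiled down to subject-predicate-object triples. The sketch below uses plain Python and invented example facts (no graph database or particular ontology is implied) to show how relationships such as disease-to-symptom or company-to-employee are stored and queried.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

# A tiny knowledge graph as a list of triples (illustrative facts only).
graph: List[Triple] = [
    ("influenza", "has_symptom", "fever"),
    ("influenza", "has_symptom", "cough"),
    ("Acme Corp", "employs", "Alice"),
    ("Acme Corp", "employs", "Bob"),
]


def neighbors(subject: str, predicate: str) -> List[str]:
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in graph if s == subject and p == predicate]


print(neighbors("influenza", "has_symptom"))  # ['fever', 'cough']
print(neighbors("Acme Corp", "employs"))      # ['Alice', 'Bob']
```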
Synergy with Large Language Models
Large language models like GPT-4 can benefit from the structure provided by knowledge graphs. GPT-4, although powerful, essentially operates in a vacuum where each query is treated independently. The model's 'understanding' is temporary and isolated to a particular session, with no continuous learning or memory involved. Here is where the synergy comes into play: the knowledge graph supplies explicit, updatable facts, while the language model supplies fluent interpretation and generation, as sketched below.
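The following is a minimal sketch of that synergy, reusing the `neighbors` triple store from the previous example and a hypothetical `complete(prompt)` function standing in for whichever LLM API you prefer (the name is an assumption, not a real library call). Facts retrieved from the graph are placed into the prompt, so the model's fluent generation is grounded in knowledge that can be corrected without retraining.

```python
def build_grounded_prompt(entity: str, predicate: str, question: str) -> str:
    """Retrieve facts from the knowledge graph and prepend them to the user's question."""
    facts = neighbors(entity, predicate)  # reuses the triple store sketched above
    fact_lines = "\n".join(f"- {entity} {predicate} {obj}" for obj in facts)
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{fact_lines}\n\n"
        f"Question: {question}\n"
    )


prompt = build_grounded_prompt("influenza", "has_symptom",
                               "What symptoms does influenza cause?")
print(prompt)
# answer = complete(prompt)  # hypothetical LLM call; swap in the API of your choice
```

The deliberate design choice here is that the graph, not the model, is the source of truth for facts: correcting a fact is a data edit, not a training run.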
Challenges and Considerations
However, the integration of these systems is not without its hurdles: the knowledge graph must be kept accurate and current, free-form text has to be mapped onto the right entities and relations, and the extra retrieval step adds latency and engineering complexity.
Conclusion:
In summing up this discourse, it's evident that while a single LinkedIn article may not do justice to the depth of the topics covered, the aim has been to illuminate a path for AI researchers who share a similar perspective. While I find a lot of common ground with Dietterich's views on the future of AI, it's worth pointing out that his perspective might underplay the very real capabilities that LLMs offer in practical applications. In particular, current generative models have emerged as perhaps the most effective tool for grappling with unstructured data, a task that has been pivotal in computer science for decades. The main thrust of this article, "Back to the Future: How Generative AI Revives GOFAI Paradigms with Recent Research," serves to underline this unique inflection point in AI history. We're witnessing an era where past paradigms aren't merely being revisited but are being reinvented and fortified with the lessons learned from modern computational practices. This cross-pollination of old and new could very well be the catalyst for the next great leap in artificial intelligence.
#AI #GOFAI #GenerativeModels #LLM #ArtificialIntelligence #UnstructuredData #StructuredData #MachineLearning #KnowledgeGraphs #NLP #ComputerScience #Research #Dietterich #PracticalApplications #FutureOfAI
References: