History, Philosophy, & Science (HPS) - Via Wittgenstein and Turing to XAI?

1: The Communication Puzzle: How Do Intelligences Connect?

Can a machine truly grasp the concept of "castling" in chess, or is it merely manipulating symbols according to rules? Can different forms of intelligence, human and machine, ever truly understand each other? The "Hand of God" move in the AlphaGo vs. Sedol match was not just a brilliant Go move; it was a whisper of a new kind of intelligence trying to communicate via facta non verba. That whispered action echoed across the decades, back to the hallowed halls of Cambridge in 1939, where two intellectual giants, Ludwig Wittgenstein and Alan Turing, grappled with the very nature of language, mathematics, and thought.

Their ideas, debated and refined in those pre-war years, have come full circle, now illuminated by the dazzling advances in artificial intelligence of the 21st century. Systems like AlphaZero, mastering complex games with seemingly superhuman intuition, and the Transformer, wielding language with astonishing fluency, challenge us to reconsider what it means to think, to understand, and to communicate.

This series of articles will explore the intertwined journeys of human and machine intelligence, tracing the path from Wittgenstein's "language games", through Turing's groundbreaking work on computation and machine learning, to the modern breakthroughs in AI. Along the way, we'll delve into the fascinating interplay between language, thought, and action, asking:

  • Can we develop AI systems that not only solve complex problems but also explain their reasoning in a way we can comprehend?
  • Is true intelligence a matter of domain mastery (winning the game), or does it require the ability to communicate and collaborate with others?
  • Can machines ever truly understand human language, or are they destined to remain forever in the realm of "mere" symbol manipulation?

To begin our exploration, let's consider a thought experiment. Could you, dear reader, pass a Turing Test to prove you are human? Certainly, you'd think. But could you pass a Turing Test to prove you are a human chess grandmaster? Or even more challenging, could you pass a Turing Test to prove you are simply a chess grandmaster, regardless of your humanness or lack thereof?

Most of us would likely fail the latter two tests. We might be able to play chess competently, but could we explain our strategies, intuitions, and "chess thoughts" in a way that would convince a grandmaster of our expertise? This human fallibility in explaining our own knowledge highlights a crucial challenge for artificial intelligence: explainability.

The iconic moments in the AlphaGo vs. Sedol matches, such as the "Hand of God" move and Sedol's revenge, further underscore this challenge. AlphaGo's seemingly creative and intuitive moves captivated the world, but its inability to articulate its reasoning left us with a sense of wonder and unease. How could a machine exhibit such brilliance without being able to explain it - autism, perhaps?

These questions lead us to a fundamental distinction: the difference between mastering the DomainGame (playing chess or Go at a high level) and mastering the LanguageGame (engaging in a meaningful conversation about the game, explaining strategies, and sharing insights). This distinction, as we'll see, has deep roots in the philosophical discussions of Wittgenstein and Turing, and it offers a powerful framework for understanding the challenges and opportunities in developing truly intelligent and explainable AI.

Join us as we embark on this intellectual adventure, tracing the evolution of ideas from the 1930s to the present day, and exploring the profound implications of a Wittgenstein-Turing Hypothesis: that intelligence is ultimately a bimodal problem, requiring fluency in both the skill of a domain and the skill of communication.

Here is a NotebookLM Podcast based on this note and the 4 classic documents.

2: 1939 - Two Visions of Mathematics: Wittgenstein's Language Games vs. Turing's Formal Systems

2.1: Wittgenstein - Mathematics as a Language Game

"The mathematical problems of what is called foundations are no more the foundation of mathematics for us than the painted rock is the support of a painted tower." - Ludwig Wittgenstein, 1939

Wittgenstein, in his lectures on the foundations of mathematics, proposed that mathematics, like language, is a "form of life," a "language game" embedded in human practices and social interactions. He rejected the notion of mathematics as a realm of absolute, pre-existing truths waiting to be discovered. Instead, he argued, mathematical truths are constructed through human action (communication), through how we use symbols and rules in specific contexts.

Wittgenstein's approach emphasized:

  • Meaning in Use: The meaning of mathematical symbols and expressions is determined by how they are used in practice, in the context of calculations, proofs, and discussions.
  • Rule-Following: Mathematics is a rule-governed activity, but these rules are not fixed and absolute. They are created and modified through human agreement and participation in shared practices.
  • Surveyability: We grasp mathematical structures and concepts through our ability to "survey" them, to see the connections and patterns within a system. This surveyability is essential for understanding and applying mathematical rules.

Wittgenstein's ideas challenged the prevailing formalist views of mathematics, highlighting the human and social elements and the importance of context in understanding mathematical meaning and truth.

2.2: Turing - Formal Systems and the Limits of Computability

While Wittgenstein was exploring the philosophical foundations of mathematics, Alan Turing, in his 1939 lecture course (his first as a Fellow) and in subsequent work, delved into the formal and logical structures of mathematics. He focused on:

  • Axiomatic Systems: Turing emphasized the construction of mathematics through axiomatic systems, where theorems are derived from a set of axioms and rules of inference - the territory of Hilbert's foundational problems.
  • Formal Logic: He explored different systems of formal logic, including propositional logic and predicate logic, as tools for representing and analyzing mathematical reasoning.
  • Computability and Turing Machines: He introduced his concept of the Turing machine, a theoretical model of computation that manipulates symbols on a tape according to a set of rules (a minimal sketch follows this list). This model provided a precise definition of computability and laid the foundation for modern computer science.
  • Gödel's Incompleteness Theorems: Turing discussed the implications of Gödel's incompleteness theorems, which demonstrated the inherent limitations of formal systems in capturing all mathematical truths. He explored ways to extend these systems through his work on ordinal logics.
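
To make the tape-and-rules picture concrete, here is a minimal Python sketch of a Turing machine simulator. The transition table is an invented toy machine (it merely inverts a binary string and halts), offered as an illustration rather than any of Turing's own constructions.

```python
# Minimal Turing machine sketch: a tape, a head, and a transition table.
# The toy machine below inverts a binary string and then halts.
from collections import defaultdict

def run_turing_machine(tape, transitions, start_state="q0", halt_state="halt"):
    """Run a single-tape Turing machine until it reaches the halt state."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    head, state = 0, start_state
    while state != halt_state:
        symbol = cells[head]
        # Each rule: (state, symbol) -> (symbol to write, head move, next state)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# Toy rule table: flip every bit, halt at the first blank cell.
INVERT = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", INVERT))  # -> 1001
```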

Turing's work provided a rigorous framework for understanding the foundations of mathematics and the limits of what can be computed within formal systems. His ideas complemented Wittgenstein's philosophical approach, offering a more comprehensive view of mathematics as both a human activity and a formal structure.

Harmonization: Two Sides of the Same Coin

Wittgenstein and Turing's contrasting perspectives, though seemingly at odds, offer a richer and more nuanced understanding of mathematics. Wittgenstein's emphasis on language games and human practice reminds us that mathematics is not divorced from human experience and social interaction. Turing's focus on formal systems and computability highlights the power of logic and computation in exploring the structures and possibilities within mathematics.

Their ideas, though developed in the context of mathematical foundations, have profound implications for our understanding of intelligence, both human and artificial. As we'll see in the following articles, the DomainGame/LanguageGame distinction echoes Wittgenstein's emphasis on the context-dependent nature of meaning and rule-following, while Turing's work on computation and machine learning lays the foundation for the development of AI systems that can master both games.

Are they speaking the same language?

3: 1950 - Turing's Gambit: The Turing Test and the Language of Intelligence

In 1950, Alan Turing, the father of computer science, shifted his focus from the abstract realm of mathematical foundations to a question that would shape the future of artificial intelligence: "Can machines think?"

His landmark paper, "Computing Machinery and Intelligence," proposed a revolutionary idea: the Turing Test. This thought experiment, designed to assess machine intelligence, centered on a simple yet profound question: could a machine engage in conversation so convincingly that a human interrogator could not distinguish it from another human?

While Turing's earlier work emphasized formal logic and computation, his 1950 paper revealed a deeper understanding of the importance of language in intelligence. The Turing Test, in essence, is a LanguageGame, requiring the machine to not only process information but also participate in the social conventions of human communication, to understand nuance, context, and even humor.

Turing's genius lay not only in devising this test but also in anticipating and addressing the objections to the very idea of machine intelligence. He tackled arguments ranging from theological concerns ("machines don't have souls") to those rooted in perceived limitations ("machines can only do what they're programmed to do").

His responses, often witty and insightful, revealed a deep understanding of the human psyche and a belief in the boundless potential of machines. He argued that machines could learn, adapt, and even surprise us, echoing his earlier ideas about "child machines" gradually acquiring knowledge (LanguageGame Mastery) and skills (DomainGame Mastery) through experience.

Turing's 1950 paper laid the foundation for the field of AI, shaping its goals and methods. The Turing Test, though often debated and reinterpreted, remains a benchmark for assessing machine intelligence, a testament to Turing's foresight and his understanding of the crucial role of language - a hat tip to Ludwig - in bridging the gap between human and machine thought.

Connections to 1939

Turing's 1950 work can be seen as a natural evolution from his 1939 exploration of formal systems and computability. While the earlier work focused on the mathematical and logical foundations of intelligence, the later work extended these ideas to the realm of machine learning and human-computer interaction.

The Turing Test, with its emphasis on language and communication, implicitly acknowledges the limitations of purely formal approaches to intelligence. It suggests that true intelligence requires not only the ability to process information according to rules (DomainGame Mastery) but also the ability to engage in a meaningful exchange of ideas with other intelligent beings (LanguageGame Mastery).

Turing's 1950 paper, therefore, bridges the gap between his earlier formalist approach and Wittgenstein's emphasis on the social and contextual nature of language and meaning. It sets the stage for the development of AI systems that can not only master specific domains but also communicate their understanding in a way that is comprehensible to humans by mastering the language domain itself.

4: 2017 - AI's Breakthrough: Domain Mastery and Language Mastery - A Tale of Two Models

4.1: AlphaZero - Mastering the DomainGame

In the realm of artificial intelligence, 2017 marked a watershed moment with the advent of AlphaZero, a revolutionary algorithm that achieved superhuman performance in the games of chess, shogi, and Go.

AlphaZero's prowess stemmed from its unique ability to learn from self-play, starting with no knowledge beyond the rules of the game, which were supplied as constraints rather than learned. Through countless iterations of playing against itself, it gradually refined its strategies, ultimately surpassing the abilities of human grandmasters and the strongest chess engines.
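
As a rough illustration of learning purely from self-play, the toy sketch below learns tabular value estimates for one-pile Nim by repeatedly playing against itself. The game, the epsilon-greedy move choice, and the update rule are simplifying assumptions chosen for brevity; AlphaZero itself couples Monte Carlo tree search with deep neural networks and operates at an entirely different scale.

```python
# Toy self-play learner: tabular values for one-pile Nim (take 1 or 2 stones;
# whoever takes the last stone wins), improved only by playing against itself.
import random
from collections import defaultdict

values = defaultdict(float)   # state -> estimated value for the player to move
EPSILON, LR, N_GAMES = 0.2, 0.1, 20000

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose(stones):
    # Epsilon-greedy: mostly pick the move that leaves the opponent the worst state.
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: values[stones - m])

for _ in range(N_GAMES):
    stones, history = 7, []
    while stones > 0:
        move = choose(stones)
        history.append(stones)
        stones -= move
    # The player who made the final move won; propagate +1/-1 back up the game.
    reward = 1.0
    for state in reversed(history):
        values[state] += LR * (reward - values[state])
        reward = -reward

# States that are multiples of 3 should end up looking losing (value < 0).
print({s: round(values[s], 2) for s in range(1, 8)})
```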

This achievement resonated with Turing's 1950 vision of "child machines" capable of learning and developing through experience. AlphaZero, in a sense, was a child prodigy, a savant learning the intricate strategies of complex games through relentless self-improvement.

However, AlphaZero's brilliance was confined to the DomainGame. While it could defeat any human or machine opponent, it could not articulate its reasoning or engage in a conversation about its strategies.

This limitation highlighted a crucial distinction: DomainGame Mastery does not imply LanguageGame competence. AlphaZero, despite its superhuman playing ability, remained silent in the realm of human communication.

Strengths of AlphaZero:

  • Demonstrated the power of AI in achieving superhuman performance in complex games.
  • Provided insights into the nature of intelligence and learning through self-play.
  • Opened up new possibilities for AI applications in various domains beyond game playing.

4.2: The Transformer - Mastering the LanguageGame

In the same year that AlphaZero was conquering the world of games, another groundbreaking AI model emerged: the Transformer. This innovative architecture, introduced in the paper "Attention is All You Need," revolutionized natural language processing (NLP).

The Transformer's key innovation was its ability to capture complex relationships between words in a sentence through a mechanism called self-attention. This allowed it to achieve state-of-the-art results in tasks such as machine translation, text summarization, and question-answering.
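
The core of that mechanism is compact enough to sketch. Below is a minimal NumPy version of single-head scaled dot-product self-attention; it omits the learned query/key/value projections, multiple heads, and masking that the full architecture uses, and the toy inputs are placeholders.

```python
# Minimal single-head scaled dot-product self-attention: every position attends
# to every other position and mixes their vectors by softmax weights.
import numpy as np

def self_attention(x):
    """x: (sequence_length, d_model) -> (sequence_length, d_model)."""
    d_k = x.shape[-1]
    # Queries, keys and values are all the raw inputs here; a real Transformer
    # first multiplies x by learned W_Q, W_K, W_V projection matrices.
    scores = x @ x.T / np.sqrt(d_k)                  # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ x                               # weighted mix of values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))      # a toy "sentence" of 4 token vectors
print(self_attention(tokens).shape)   # (4, 8)
```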

However, the Transformer's expertise was confined to the LanguageGame. While it could generate human-like text and engage in conversations, it lacked domain-specific knowledge and could not, for instance, play a game of chess or solve a mathematical equation.

This limitation echoed Wittgenstein's 1939 arguments about the context-dependent nature of language and meaning. The Transformer, despite its linguistic prowess, remained detached from the practical world of actions and consequences.

Strengths of the Transformer:

  • Revolutionized NLP with its innovative architecture and capabilities.
  • Enabled significant advances in language-based AI applications.
  • Provided a powerful tool for exploring the complexities of human language.

Harmonization: Bridging the Gap Between Domain and Language

AlphaZero and the Transformer, though developed in the same year, represent two distinct facets of artificial intelligence. AlphaZero excels in DomainGame Mastery, showcasing the power of AI in strategic thinking and problem-solving. The Transformer, on the other hand, demonstrates LanguageGame Competence, highlighting the potential of AI in generating and potentially understanding human language.

Their contrasting strengths and limitations raise a crucial question: can we bridge the gap between domain expertise and language understanding to create truly comprehensive AI systems?

The answer, as we'll explore in the next article, may lie in the concept of bimodal AI, which draws inspiration from both Wittgenstein's and Turing's ideas to envision AI systems that are fluent in the language of a domain and the language of human communication.

5: A Wittgenstein-Turing Hypothesis: Intelligence as a Bimodal Problem

Intelligence, whether human or artificial, has long been a subject of fascination and debate. What does it mean to think? To understand? To be truly intelligent? The groundbreaking work of Ludwig Wittgenstein and Alan Turing in the 1930s, along with the recent breakthroughs in artificial intelligence, may offer a new perspective on this age-old question.

From Wittgenstein's insightful exploration of language as a "form of life" to Turing's revolutionary ideas about computation and machine learning, a common thread emerges: the inextricable link between language (social), thought (personal), and action (public).

This brings us to a possible Wittgenstein-Turing Hypothesis: that intelligence is fundamentally a bimodal problem. It is not enough to master a specific domain or skill (the DomainGame). True intelligence requires also mastering the art of communication, of explaining one's reasoning, and engaging in a meaningful exchange of ideas with others (the LanguageGame).

A New Vision for Explainable AI: Growing Domain Expertise with Adult Language Skills

When Alan Turing first envisioned machine intelligence in 1950, he suggested we should create a child machine that could be educated. What's often overlooked is that this child machine would need sophisticated communication abilities to learn effectively from humans as it progresses toward OracleAI Mastery. This insight may point to a fundamental rethinking of how we approach explainable AI.

Consider a newborn domain expert - an InfantAI starting its journey toward DomainGame Mastery. Traditional approaches would have this system develop both expertise and communication skills simultaneously. However, just as a human prodigy needs adult-level language to share their growing understanding, our InfantAI needs sophisticated language capabilities from the start.

This leads to a new architecture: pair each developing domain system with a mature (general) language system, through which the domain-specific language is then learned. As the domain expertise grows from infant to child to adult to master and ultimately toward oracle-level understanding, the language system maintains consistent adult-level communication ability. This allows the growing expert to explain its understanding at every stage of development.
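
One hedged way to picture this pairing in code: the domain learner exposes whatever internal signals it has (candidate moves, value estimates), and a separate, already-fluent language component turns those signals into explanations. Every class and method name below is a hypothetical sketch of the proposal, not an existing system.

```python
# Hypothetical sketch of the proposed pairing: a growing domain expert plus a
# mature language "interpreter" that can explain it at every stage of growth.
from dataclasses import dataclass
from typing import Protocol

class DomainSystem(Protocol):
    maturity: str                                  # "infant" ... "oracle"
    def act(self, state: str) -> str: ...          # choose a move or action
    def introspect(self, state: str) -> dict: ...  # expose internal signals

class LanguageSystem(Protocol):
    def explain(self, signals: dict) -> str: ...   # adult-level explanation

@dataclass
class ExplainableAgent:
    """Domain expertise and communication grow side by side, not post hoc."""
    domain: DomainSystem
    language: LanguageSystem

    def move_with_explanation(self, state: str) -> tuple[str, str]:
        move = self.domain.act(state)
        signals = self.domain.introspect(state)    # e.g. candidate moves, values
        return move, self.language.explain(signals)
```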

Validation comes through specialised domain-specific Turing tests that evaluate both domain expertise and the ability to communicate that expertise. In chess, for example, it would need to recognise, identify and discuss the pieces, the rules, and strategies. These tests verify not just performance but understanding - can the system explain what it knows in human terms? Can it teach what it's learning? Can it discuss its progress?
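
Such a domain-specific Turing test might be harnessed roughly as sketched below, scoring play and conversation side by side; the agent interface, the judge, and the scoring scheme are invented placeholders for illustration.

```python
# Hypothetical Chess_TuringTest harness: score what the system does
# (DomainGame) and how convincingly it talks about it (LanguageGame).
def chess_turing_test(agent, positions, questions, judge):
    """agent exposes .act(position) and .discuss(question); judge is a human
    expert (or proxy) who approves moves and answers that would convince them."""
    play_score = sum(judge.approves_move(p, agent.act(p)) for p in positions)
    talk_score = sum(judge.approves_answer(q, agent.discuss(q)) for q in questions)
    return {
        "DomainGame": play_score / len(positions),
        "LanguageGame": talk_score / len(questions),
    }
```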

This approach treats explainability not as a feature to be added post-hoc to domain expertise, but as a parallel capability that must be present from the start. It's the difference between teaching a chess prodigy to speak and teaching a fluent speaker to play chess - we choose the latter.

As the domain system progresses from InfantAI toward OracleAI, it maintains this crucial ability to communicate its growing understanding through its adult-level language partner, validated by ongoing DomainGame_TuringTests. The language system acts as a skilled interpreter, translating between the domain system's internal representations and human understanding.

This fundamentally reframes the Explainable AI challenge. Instead of forcing domain systems to explain themselves, we provide them with sophisticated interpreters from the start and ensure a common vocabulary as their skills mature. The result may be AI systems that can grow in expertise while maintaining clear communication with humans throughout their development.

It's a return to Turing's original vision, but with a crucial twist - our child machine may need adult language skills to fulfill its potential. In doing so, we might finally bridge the gap between AI capability and human understanding.

One might even consider this method to learn specific DomainLanguages ("Talking about Chess") as the DomainGame to be mastered, developing a reduced LanguageSubGame tailored to the specific final DomainGame ("Chess"). Self-reference and incompleteness are features, not bugs!

Continuing the Discussion

This hypothesis reframes the challenge of AI explainability. It is not merely a technical problem of making algorithms more transparent. It is a communication problem, requiring AI systems to be fluent in both the language of their domain and the language of human/social discourse.

The Chess_TuringTest, as we proposed earlier, exemplifies this challenge. Applied to a maturing InfantAI, it requires the AI not only to play chess at a high level but also to engage in a meaningful conversation about chess with a human expert, explaining its strategies and justifying its moves in a vocabulary familiar to chess players.

This bimodal approach has profound implications for AI development:

  • AI systems need to be trained on domain-specific data and language data. They need to learn the rules of the game and the rules of human/social communication.
  • AI models need to incorporate mechanisms for both domain-specific reasoning and language understanding and generation. They need to be able to think, talk, and act chess.
  • We need to develop new evaluation metrics that assess both DomainGame and LanguageGame competence. We need to measure AI's ability to perform tasks and explain its actions.

The journey towards creating truly intelligent and explainable AI is a journey towards creating AI that can participate in the "meta-game" of language, the game that underlies all human activity. It is a journey towards AI that can collaborate, compete, and comprehend alongside humans in a world increasingly shaped by both human and artificial minds.

And as Wittgenstein astutely observed, "To understand a phrase, we might say, is to understand its use." AI that can understand and use language in all its richness and complexity will be AI that can truly understand and participate in our world.

6: Conclusion: Language as the MetaGame

We've journeyed from the foundations of mathematics to the frontiers of artificial intelligence, guided by the insights of Wittgenstein and Turing. Their ideas, debated and refined in the 1930s, have found new relevance in the 21st century, as AI systems like AlphaZero and the Transformer challenge our understanding of intelligence and communication.

The Wittgenstein-Turing Hypothesis, that intelligence is a bimodal problem requiring mastery of both domain-specific skills and human communication, offers a framework for developing AI that is not only powerful but also explainable and trustworthy.

Language, as Wittgenstein recognized, is the "meta-game" that underlies all human activity. It is the key to collaboration, competition, and comprehension. AI systems that can master this meta-game, that can understand and use language in all its richness and complexity, will be AI systems that can truly understand and participate in our world.

The journey towards creating such AI is a journey towards a deeper understanding of our intelligence, our ways of thinking, communicating, and making meaning. It is a journey that requires collaboration between philosophers, mathematicians, computer scientists, and linguists, a journey that takes us back to the fundamental questions of what it means to be human in a world increasingly shaped by both human and artificial minds.

As we stand at this crossroads, the words of Turing ring truer than ever: "We can only see a short distance ahead, but we can see plenty there that needs to be done." The challenge is not merely to create AI that can win the game, but to create AI that can understand the game, explain the game, and ultimately, play the game with us. For this, Language is a prerequisite for Domain-specific AI.

Endnotes

1. Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., ... & Hassabis, D. (2017). Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815.

2. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems (pp. 5998-6008).

3. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

4. Wittgenstein, L. (1939). Wittgenstein's lectures on the foundations of mathematics, Cambridge, 1939. (C. Diamond, Ed.). University of Chicago Press.

