Deeper down the rabbithole: AGI - from clickbait and politics to Aristotle and brain-inspired algorithms
Image generated with an AI image generator. Prompt: "Alice Wonderland White Rabbit Singularity"


Hello again Metapilots,

Today you're getting another much-needed Philosophy of Mind seminar. Not only will this make you more interesting at networking events, it will also make you less vulnerable to clickbait AI-doomsday arguments, and it may even protect your mental health, especially if you like to scroll social media and your prefrontal cortex regularly falls victim to involuntary infotainment targeting and visual pollution (we've all been there).

As we traverse the ever-evolving landscape of 'artificial intelligence', our latest strategic insights from the US National AI R&D Strategic Plan beckon us to a future where human cognition and machine intelligence converge. This plan is not just a policy document; it's a blueprint for an era where AI partners with humanity to unlock the mysteries of the mind and the universe.

In a thought-provoking dialogue at a UK summit, tech maverick Elon Musk engaged with the notion of AI as an unprecedented catalyst for societal transformation. Envision a society where labor is obsolete, where AI-driven abundance is the status quo. Yet, Musk cautions us, AI's boundless potential comes with risks akin to those of unleashing a "magic genie" — a reminder that with great power comes great responsibility.

As we stand at the crossroads of this cognitive revolution, we invite you to join us in a discourse that spans from the Socratic forums to the AI labs — a dialogue that's essential for steering the future of AI and human destiny. Now let's put (some of) those neurons of yours to work.

In the field of philosophy of mind, which intersects with the development of Artificial General Intelligence (AGI), several schools of thought are particularly relevant. These perspectives offer different insights into how AGI could be conceptualized and whether it could truly replicate or even surpass human cognition. However, it is broadly accepted that, given realistic space and time resource constraints, human beings do not have indefinite generality of intelligence; and for similar reasons, no real-world system is going to have indefinite generality. Human intelligence combines a certain generality of scope, with various highly specialized aspects aimed at providing efficient processing of pragmatically important problem types; and real-world AGI systems are going to mix generality and specificity in their own ways.

  1. Physical Symbol System Hypothesis: This hypothesis posits that a physical symbol system has the necessary and sufficient means for general intelligent action. It's a foundational concept in AI that aligns with functionalism in philosophy, the view that mental states are constituted solely by their functional role, that is, by their causal relations to other mental states, sensory inputs, and behavioral outputs. From this perspective, AGI is reached as soon as we can build systems that perform those functional roles as well as, or better than, humans by some agreed standard. We suspect this loose sense is how "AGI" gets used across social networking sites, though without references no one really knows. On this view, once an AI makes humans obsolete at most practical tasks, it should be understood to possess general, human-level intelligence. The implicit assumption is that humans are the generally intelligent system we care about, so the best practical way to characterize general intelligence is by comparison with human capabilities.
  2. Strong AI Hypothesis: Formulated by John Searle (who coined the term precisely in order to argue against it), this hypothesis asserts that a suitably programmed computer with the right inputs and outputs would have a mind in the same sense that human beings have minds. This view aligns with some interpretations of dualism, where mental phenomena are non-physical, and with some forms of emergentism, where the mind emerges from the brain but is not reducible to it. As an exercise in debating a question from more than one angle and with nuance, try reading the various replies to Searle's Chinese room argument.
  3. Weak AI and Strong AI: The distinction between these two forms of AI is crucial. Weak AI, or artificial narrow intelligence (ANI), refers to AI systems designed for specific tasks. In contrast, AGI would have the generality of human intelligence, without necessarily possessing a mind in Searle's strong sense. Critics argue, however, that AGI remains an unrealized goal, partly because of the inherently tacit nature of human knowledge, as argued by philosophers such as Hubert Dreyfus.
  4. Argument from Tacit Knowledge: This argument stems from Dreyfus' critique of AI, drawing on the work of philosophers like Martin Heidegger and Ludwig Wittgenstein. It suggests that human knowledge is not entirely explicit and cannot be fully translated into a computer program because it relies on embodied experience and cultural practice, elements that are absent in computers.
  5. Non-Algorithmic Nature of Human Reasoning: Joseph Weizenbaum and Roger Penrose have argued that human reason, including qualities like prudence and wisdom as described by Aristotle, is not algorithmic and thus cannot be replicated by computational means. This perspective challenges the notion that AGI could ever fully emulate human intelligence.
  6. Brain-powered AGI or Whole Brain Emulation: I am a proponent of this approach and have published on it extensively over the past 20 years. The article that first excited me about it, by Paul and Patricia Churchland, argues that while computers are not like the brain, computers designed to mimic the brain might be. You see how that actually makes sense?
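To make the contrast in the list above concrete, here is a minimal sketch, assuming nothing beyond the Python standard library, of the two paradigms: a physical symbol system that manipulates explicit, human-readable rules (item 1), versus a brain-inspired unit whose "knowledge" lives in numeric weights (item 6). All names and values are illustrative, not taken from any published AGI system.

```python
# 1. Physical symbol system: intelligence as explicit rule manipulation.
#    The agent's knowledge is a human-readable table of symbolic rules.
RULES = {("light", "red"): "stop", ("light", "green"): "go"}

def symbolic_agent(percept):
    """Look up an explicit symbolic rule for the percept."""
    return RULES.get(percept, "wait")

# 2. Brain-inspired: a single artificial neuron. Its "knowledge" is
#    encoded in weights and a bias, not in human-readable symbols.
def neuron(inputs, weights, bias):
    """Fire (return 1) when the weighted input sum crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

print(symbolic_agent(("light", "red")))   # stop
print(neuron([1, 0], [0.6, -0.4], -0.5))  # 1
```

The point of the Churchlands' argument shows up even at this toy scale: you can read the symbolic agent's competence directly off its rule table, whereas the neuron's behavior only emerges from how its weights interact, which is closer to how the brain stores what it knows.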

These schools of thought are informed by the works of ancient philosophers like Plato, who pondered the nature of reality and knowledge, and German philosophers such as Immanuel Kant, who explored the limits of human understanding and the structure of the mind. The question of whether AGI can be realized ties back to these fundamental philosophical inquiries about the nature of intelligence, consciousness, and the mind-body relationship.

The discussion surrounding AGI is not merely technical but deeply philosophical, engaging with long-standing debates in the philosophy of mind. As AI continues to progress, these conversations will become increasingly significant, influencing how we develop and integrate AI systems into society.


Faizan Patankar

CEO at Amygda | AI for Maintenance in transport

10 months ago

I love getting the knowledge on the philosophy of AI from you through these posts!
