The concept of self-reproducing automata envisioned by John von Neumann in 1948 heralded a paradigm in which machines could mimic biological self-reproduction. In 1950, Alan Turing asked whether machines can think. In 1951, Claude Shannon contemplated these two ideas and drafted a list of questions that remains strikingly relevant:
- Can we design significant machines where the connections are locally random?
- Can we organize machines into a hierarchy of levels, as the brain appears to be organized, with the learning of the machine gradually progressing up through the hierarchy?
- Can we program a digital computer so that (eventually) 99 percent of the orders it follows are written by the computer itself, rather than the few percent in current programs?
- Can a self-repairing machine be built that will locate and repair faults in its own components (including the maintenance part)?
- What does a random element add in generality to a Turing machine?
- Can either of von Neumann’s self-reproducing models be translated into hardware?
- Can a machine be constructed that will design other machines, given only their broad functional characteristics?
Von Neumann never saw his self-reproducing machine come to life, but 75 years later the notion has resurfaced with contemporary advances in machine learning (ML), illuminating a pathway toward realizing von Neumann’s ambitious vision and touching on several of Shannon’s questions. A few of last week’s research papers suggest a future where machines could attain a level of autonomy and self-organization akin to biological systems.
- The idea of self-assembly in “Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs” underscores the potential for artificial networks to evolve autonomously. This process, inspired by biological neural development, alludes to a future where artificial networks might organically grow and adapt to tasks, possibly lessening the extensive engineering currently needed for effective neural network design.
- The exploration of Theory-of-Mind (ToM) in Large Language Models (LLMs), as discussed in “How Far Are Large Language Models From Agents with Theory-of-Mind?”, evaluates LLMs’ potential to pragmatically act upon inferred mental states, a crucial aspect of human intelligence. While unveiling a gap in translating inference into action, it also presents a new evaluative paradigm, potentially directing future research to bridge this divide.
- The self-improvement narrative discussed in “Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation” adds a significant layer to this discourse. The idea of self-improving code generation could serve as a scaffold for self-reproducing automata.
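To make the recursion in STOP concrete, here is a minimal, hypothetical sketch (my own simplification, not the paper’s code): an “improver” takes a utility function and some candidate solutions and returns a better solution; the self-improving step is that the same improver can also be applied to a utility over improvers, i.e. to itself. The names `utility`, `improver`, and `meta_utility`, and the hand-written candidates standing in for LM-generated revisions, are all illustrative assumptions.

```python
def utility(solution):
    """Toy downstream task: score a candidate function on known test cases."""
    try:
        return sum(solution(n) == n * n for n in range(5))
    except Exception:
        return 0

def improver(utility, candidates):
    """Stand-in for an LM-driven improver: return the best-scoring candidate."""
    return max(candidates, key=utility)

# In STOP, `candidates` would be LM-generated code revisions; here they are stubs.
candidates = [lambda n: n, lambda n: n + n, lambda n: n * n]
best = improver(utility, candidates)  # picks the n*n candidate

# The recursive step: rank candidate *improvers* by how good a solution they find,
# so the improver is applied to (a utility over) itself.
def meta_utility(imp):
    return utility(imp(utility, candidates))

better_improver = improver(meta_utility, [improver])
```

The point of the sketch is only the shape of the loop: because the improver is itself a program judged by a utility, nothing in principle stops it from proposing revisions of its own source, which is the scaffold STOP explores.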
- The paper “Language Models Represent Space and Time” by Wes Gurnee and Max Tegmark sparked discussions across the web over its terminology and debatable conclusions. Gary Marcus digs into why correlations aren’t “causal, semantic models.” However, temporal and spatial capabilities are fundamental for intelligent agents to interact meaningfully with their environment and should be explored as a step toward more sophisticated AI systems.
Reading all these papers, I found myself thinking once again about how exploring history can reveal that old ideas still have seeds waiting to sprout.
The frequency of ‘self-’ in ML research is likely to rise, illuminating a pathway filled with both promise and challenges, demanding a balanced, multidisciplinary approach to navigate the technical, ethical, and philosophical intricacies of this quest. As we edge closer to the vision of self-reproducing automata (von Neumann was indeed a genius!), the journey calls for a thorough examination of intelligence’s nature, autonomy’s ethics, and the essence of human-machine co-evolution.
From my recent reading list, I highly recommend for inspiration: “Theory of Self-Reproducing Automata” (1948) by John von Neumann, “Computing Machinery and Intelligence” (1950) by Alan Turing, and “Computers and Automata” (1951) by Claude E. Shannon.