The Legacy of Algorithms: The Rise of Computer Science and AI
Photo by Museums Victoria on Unsplash

The field of artificial intelligence has captured the human imagination for decades, conjuring up visions of intelligent machines that can think, learn, and reason like humans. But what are the origins of this fascinating field, and how has it evolved over time? In the first post of my The Legacy of Algorithms series, I explored "Ancient Roots and Recursive Thinking," and in the second, I examined "The Industrial Revolution and the Birth of Modern Computing." In this third installment, I dig deeper into the history of AI, exploring its roots in the early days of computer science and tracing its development through the pioneering work of researchers and institutions like the Defense Advanced Research Projects Agency (DARPA).

The Rise of Computer Science and AI

The mid-20th century saw the emergence of computer science as a distinct discipline, with algorithms at its core. Donald Knuth's seminal work, The Art of Computer Programming, argued for algorithms as the central concept unifying the field (Finn, 2017; Knuth, 1968). This period also marked the birth of artificial intelligence (AI) as a research agenda. In 1962, J.C.R. Licklider established the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA), heralding an ambitious push to develop AI technologies (Fouse, Cross & Lapin, 2020). Early AI researchers drew inspiration from theories of human cognition and learning (McCarthy et al., 2006).

Marvin Minsky, Claude Shannon, Ray Solomonoff, and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (Photo: Margaret Minsky).

Parisi (2017) provides a thoughtful perspective on the historical development of machine intelligence, identifying three key periods that have shaped our current algorithmic landscape:

  • The 1940s to 1960s: This era laid the groundwork for modern algorithmic thinking, with cybernetics attempting to understand control and communication in machines and living organisms. This period saw the rise of cybernetic infrastructure in communication and the introduction of computational logic into decision-making procedures.
  • The 1970s and 1980s: During this time, there was a shift towards interactive algorithms and the development of expert systems and knowledge-based systems. This period marked a move from purely mathematical approaches to more context-aware and domain-specific applications of machine intelligence (National Research Council, 1999).
  • The period after the 1980s, extending beyond the 2000s: This era has been characterized by a focus on intelligent agents, machine-learning algorithms, and big-data logic. This shift represents a move away from rule-based systems towards more adaptive and data-driven approaches to artificial intelligence (National Research Council, 1999).

DARPA's Critical Role in the Development of AI

DARPA, originally known as ARPA, has played a pivotal role in the development of AI over the past 60 years through significant investments that established AI as a field (National Research Council, 1999). DARPA's AI research can be visualized through Three Waves: (a) handcrafted knowledge, (b) statistical learning, and (c) contextual adaptation (Fouse, Cross & Lapin, 2020).

DARPA's Contributions to AI. A graphical representation of DARPA's contribution to AI beginning in 1962. DARPA's Three Waves are shown in blue, purple, and yellow, with the height of each wave representing DARPA's investment.

Within these three waves, DARPA's AI investments fall into six major phases: (a) AI Beginnings (1960s), (b) Strategic Computing (1980s), (c) Knowledge Representation & Planning (1990s), (d) Cognitive Systems (2000s), (e) Data Analytics (2010s), and (f) AI Next (2020 onward).

DARPA established major centers of excellence in AI at institutions like MIT, Carnegie Mellon University, and Stanford starting in the 1960s (National Research Council, 1999). Work at these centers led to pioneering expert systems like DENDRAL and MYCIN (developed at Stanford) that demonstrated the power of rule-based reasoning.

The Strategic Computing Initiative in the 1980s was a major $1 billion DARPA program aiming to advance machine intelligence, although it struggled to meet its ambitious goals (National Research Council, 1999). More recent programs like the $2 billion AI Next Campaign launched in 2018 continue to push the boundaries of the field.

DARPA's unique model of hiring top-notch technical program managers and giving them the resources to pursue high-risk, high-reward research has been critical to its success in AI and other areas. DARPA focuses on challenge problems rather than requirements, allowing for innovative approaches (Fouse, Cross & Lapin, 2020).

Symbolic AI vs. Neural Network-based Machine Learning

As AI developed, it branched into two main approaches: symbolic AI and neural network-based machine learning. Symbolic AI (Wave One), inspired by theories of cognitive development like those of Jean Piaget, focuses on explicit rules and logic. Just as Piaget emphasized structured stages in a child’s development, symbolic AI relies on predefined steps and rules to solve problems. For example, if you want a symbolic AI system to recognize a cat, you'd program specific rules: “a cat has fur, whiskers, and pointed ears.” It applies these rules to make sense of the world but doesn’t learn or adapt on its own.
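
To make the contrast concrete, here is a minimal sketch of the symbolic, rule-based approach in Python. The feature names and rules are hypothetical illustrations rather than any particular historical system; the point is that all of the "knowledge" is handcrafted by a programmer and nothing is learned from data.

```python
# A hypothetical rule-based "cat recognizer" in the spirit of Wave One (handcrafted knowledge).
# Every piece of knowledge is an explicit rule written by a human; nothing is learned from data.

def is_cat(animal: dict) -> bool:
    """Apply handcrafted rules: a cat has fur, whiskers, and pointed ears."""
    return all([
        animal.get("has_fur", False),
        animal.get("has_whiskers", False),
        animal.get("has_pointed_ears", False),
    ])

# The system handles only the cases its rules anticipate.
print(is_cat({"has_fur": True, "has_whiskers": True, "has_pointed_ears": True}))   # True
print(is_cat({"has_fur": False, "has_whiskers": True, "has_pointed_ears": True}))  # False: a hairless cat breaks the rules
```

The brittleness is the point: any case the rule author did not anticipate, such as a hairless cat, is misclassified until a human edits the rules.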

In contrast, neural network-based machine learning (Wave Two), which draws inspiration from neuroscience and perceptual studies, works more like how the brain processes information. Instead of relying on predefined rules, it learns from data, similar to how humans learn by experiencing and observing (National Research Council, 1999). For instance, if you feed a neural network thousands of images of cats, it will gradually learn the patterns that define what a cat looks like, without needing any specific rules. This approach mimics how the brain processes sensory information, adapting and improving with more data.
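
By contrast, here is an equally minimal learning-based sketch: a single perceptron trained on tiny made-up feature vectors (stand-ins for the image data a real system would use). No cat rule is written anywhere; the weights are adjusted from labelled examples, which is the core idea of Wave Two scaled down drastically.

```python
# A toy learned "cat recognizer" in the spirit of Wave Two (statistical learning).
# Hypothetical features per example: [furriness, whiskeriness, ear_pointiness], each in [0, 1].
import random

training_data = [
    ([0.9, 0.8, 0.9], 1),  # cat
    ([0.8, 0.9, 0.7], 1),  # cat
    ([0.2, 0.0, 0.1], 0),  # goldfish
    ([0.9, 0.1, 0.2], 0),  # floppy-eared dog
]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Return 1 ("cat") if the weighted sum of the features crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Perceptron learning rule: nudge the weights toward any example the model gets wrong.
for _ in range(20):
    for features, label in training_data:
        error = label - predict(features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print(predict([0.85, 0.9, 0.8]))  # expected 1: resembles the cat examples it was trained on
print(predict([0.1, 0.0, 0.2]))   # expected 0: resembles the non-cat examples
```

Modern neural networks replace this single linear unit with millions of units stacked in layers, but the principle is the same: the mapping from inputs to outputs is learned from examples rather than written down as rules.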

Debates continue about how closely these approaches mimic human learning. In practice, symbolic AI excels at tasks with clear rules, while neural networks thrive in more complex, pattern-driven tasks. Together, they have produced powerful tools and important insights into the nature of intelligence and problem-solving (Markoff, 2016; National Research Council, 1999).

The Emergence of a New Form of Reasoning

As these forms of automated intelligence have entered the social culture of communication, they have also become central to critical theories of technology. These theories have consistently warned against the automation of decision-making, in which information processing, computational logic, and cybernetic feedback loops are seen as potentially replacing human thinking and decision-making capacities.

However, as Parisi (2017) argues, this view may be limited. Instead of seeing machine intelligence as merely replacing human thought, she suggests that we are witnessing the emergence of a new form of reasoning. This machine epistemology goes beyond simply executing predefined tasks or mimicking human cognition. Instead, it introduces new ways of processing information and making decisions that may be fundamentally different from human reasoning.

Final Thoughts

The historical context of AI development reveals that it is not a linear progression towards replicating human thought, but rather an evolution of distinct forms of algorithmic reasoning, significantly shaped by sustained investments from agencies like DARPA over decades (National Research Council, 1999). As we integrate these technologies into various settings, including education, it is crucial to consider how these different modes of thinking might complement or challenge traditional approaches to learning and decision-making. By understanding the strengths and limitations of different algorithmic paradigms, we can make more informed decisions about their practical applications and work towards a future of complementary human-machine collaboration. These historical developments in computer science and AI have laid the foundation for the pervasive role of algorithms in the modern world.

References

Finn, E. (2017). What algorithms want: Imagination in the age of computing. The MIT Press. https://doi.org/10.7551/mitpress/9780262035927.001.0001

Fouse, S., Cross, S., & Lapin, Z. J. (2020). DARPA's impact on artificial intelligence. AI Magazine, 41(2), 3–8. https://doi.org/10.1609/aimag.v41i2.5294

Knuth, D. E. (1968). The art of computer programming. Addison-Wesley.

Markoff, J. (2016). Machines of loving grace: The quest for common ground between humans and robots. Ecco.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence: August 31, 1955. AI Magazine, 27(4), 12–14. https://doi.org/10.1609/aimag.v27i4.1904

National Research Council. (1999). Funding a revolution: Government support for computing research. National Academies Press. https://doi.org/10.17226/6323

Parisi, L. (2017). Reprogramming decisionism. E-Flux Journal, 85. https://www.e-flux.com/journal/85/155472/reprogramming-decisionism/
