The Legacy of Algorithms: The Rise of Computer Science and AI
The field of artificial intelligence has captured the human imagination for decades, conjuring visions of intelligent machines that can think, learn, and reason like humans. But what are the origins of this fascinating field, and how has it evolved over time? In the first post of my Legacy of Algorithms series, I explored "Ancient Roots and Recursive Thinking," and in the second, I examined "The Industrial Revolution and the Birth of Modern Computing." In this third installment, I dig deeper into the history of AI, exploring its roots in the early days of computer science and tracing its development through the pioneering work of researchers and institutions like the Defense Advanced Research Projects Agency (DARPA).
The Rise of Computer Science and AI
The mid-20th century saw the emergence of computer science as a distinct discipline, with algorithms at its core. Donald Knuth's seminal work, The Art of Computer Programming, argued for algorithms as the central concept unifying the field (Finn, 2017; Knuth, 1968). This period also saw the birth of artificial intelligence (AI) as a research agenda. In 1962, J.C.R. Licklider became the founding director of the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA), heralding an ambitious push to develop AI technologies (Fouse et al., 2020). Early AI researchers drew inspiration from theories of human cognition and learning (McCarthy et al., 2006).
Parisi (2017) provides a thoughtful perspective on the historical development of machine intelligence, identifying three key periods that have shaped our current algorithmic landscape.
DARPA's Critical Role in the Development of AI
DARPA, originally known as ARPA, has played a pivotal role in the development of AI over the past 60 years through significant investments that established AI as a field (National Research Council, 1999). DARPA's AI research can be understood in terms of three waves: (a) handcrafted knowledge, (b) statistical learning, and (c) contextual adaptation (Fouse et al., 2020).
Within these three waves, DARPA's AI investments fall into six major phases: (a) AI Beginnings (1960s), (b) Strategic Computing (1980s), (c) Knowledge Representation & Planning (1990s), (d) Cognitive Systems (2000s), (e) Data Analytics (2010s), and (f) AI Next (2020 onward).
DARPA established major centers of excellence in AI at institutions like MIT, Carnegie Mellon University, and Stanford starting in the 1960s (National Research Council, 1999). Work at these centers led to pioneering expert systems like DENDRAL and MYCIN (developed at Stanford) that demonstrated the power of rule-based reasoning.
The Strategic Computing Initiative in the 1980s was a major $1 billion DARPA program aiming to advance machine intelligence, although it struggled to meet its ambitious goals (National Research Council, 1999). More recent programs like the $2 billion AI Next Campaign launched in 2018 continue to push the boundaries of the field.
DARPA's unique model of hiring top-notch technical program managers and giving them the resources to pursue high-risk, high-reward research has been critical to its success in AI and other areas. DARPA focuses on challenge problems rather than requirements, allowing for innovative approaches (Fouse et al., 2020).
Symbolic AI vs. Neural Network-based Machine Learning
As AI developed, it branched into two main approaches: symbolic AI and neural network-based machine learning. Symbolic AI (Wave One), inspired by theories of cognitive development like those of Jean Piaget, focuses on explicit rules and logic. Just as Piaget emphasized structured stages in a child’s development, symbolic AI relies on predefined steps and rules to solve problems. For example, if you want a symbolic AI system to recognize a cat, you'd program specific rules: “a cat has fur, whiskers, and pointed ears.” It applies these rules to make sense of the world but doesn’t learn or adapt on its own.
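The rule-based approach can be made concrete with a minimal Python sketch. The feature names and rules below are illustrative assumptions, not taken from any historical system; the point is that the program only "knows" what its handcrafted rules encode.

```python
def is_cat(animal):
    """Symbolic (rule-based) classification: an animal counts as a cat
    only if it satisfies every handcrafted rule."""
    rules = [
        animal.get("has_fur", False),
        animal.get("has_whiskers", False),
        animal.get("has_pointed_ears", False),
    ]
    return all(rules)

# The rules are fixed: anything they fail to cover is misclassified,
# and the system cannot revise them from experience.
print(is_cat({"has_fur": True, "has_whiskers": True, "has_pointed_ears": True}))   # True
print(is_cat({"has_fur": True, "has_whiskers": False, "has_pointed_ears": True}))  # False
```

A whiskerless cat is rejected outright, illustrating the brittleness of purely rule-based systems: the rules must anticipate every case in advance.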
In contrast, neural network-based machine learning (Wave Two), which draws inspiration from neuroscience and perceptual studies, works more like how the brain processes information. Instead of relying on predefined rules, it learns from data, similar to how humans learn by experiencing and observing (National Research Council, 1999). For instance, if you feed a neural network thousands of images of cats, it will gradually learn the patterns that define what a cat looks like, without needing any specific rules. This approach mimics how the brain processes sensory information, adapting and improving with more data.
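The learning-from-data idea can be sketched with a classic perceptron, one of the earliest neural-network models. The toy "cat" feature vectors below are illustrative assumptions; what matters is that the decision rule is learned from labeled examples rather than written by hand.

```python
def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear decision rule from labeled examples."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred  # weights are adjusted only on mistakes
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy feature vectors (fur, whiskers, pointed ears); label 1 = cat.
data = [(1, 1, 1), (1, 1, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 0, 0)]
labels = [1, 1, 0, 0, 0, 0]
w, b = train_perceptron(data, labels)
print([predict(w, b, x) for x in data])  # matches labels: [1, 1, 0, 0, 0, 0]
```

No cat-defining rules appear anywhere in the code: the learned weights encode whatever pattern separates the positive from the negative examples, and more data would refine them further.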
While debates continue about how closely these approaches mimic human learning, symbolic AI excels in tasks with clear rules, while neural networks thrive in more complex, pattern-driven tasks. Together, they have produced powerful tools and important insights into the nature of intelligence and problem-solving (Markoff, 2016; National Research Council, 1999).
The Emergence of a New Form of Reasoning
As these forms of automated intelligence have entered the social culture of communication, they have also become central to critical theories of technology. These theories have consistently warned against the automation of decision-making, in which information processing, computational logic, and cybernetic feedback loops are seen as potentially replacing human thinking and decision-making capacities.
However, as Parisi (2017) argues, this view may be limited. Instead of seeing machine intelligence as merely replacing human thought, she suggests that we are witnessing the emergence of a new form of reasoning. This machine epistemology goes beyond simply executing predefined tasks or mimicking human cognition. Instead, it introduces new ways of processing information and making decisions that may be fundamentally different from human reasoning.
Final Thoughts
The historical context of AI development reveals that it is not a linear progression towards replicating human thought, but rather an evolution of distinct forms of algorithmic reasoning, significantly shaped by sustained investments from agencies like DARPA over decades (National Research Council, 1999). As we integrate these technologies into various settings, including education, it is crucial to consider how these different modes of thinking might complement or challenge traditional approaches to learning and decision-making. By understanding the strengths and limitations of different algorithmic paradigms, we can make more informed decisions about their practical applications and work towards a future of complementary human-machine collaboration. These historical developments in computer science and AI have laid the foundation for the pervasive role of algorithms in the modern world.
References
Finn, E. (2017). What algorithms want: Imagination in the age of computing. The MIT Press. https://doi.org/10.7551/mitpress/9780262035927.001.0001
Fouse, S., Cross, S., & Lapin, Z. J. (2020). DARPA's impact on artificial intelligence. AI Magazine, 41(2), 3–8. https://doi.org/10.1609/aimag.v41i2.5294
Knuth, D. E. (1968). The art of computer programming. Addison-Wesley.
Markoff, J. (2016). Machines of loving grace: The quest for common ground between humans and robots. Ecco.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence: August 31, 1955. AI Magazine, 27(4), 12–14. https://doi.org/10.1609/aimag.v27i4.1904
National Research Council. (1999). Funding a revolution: Government support for computing research. National Academies Press. https://doi.org/10.17226/6323
Parisi, L. (2017). Reprogramming decisionism. E-Flux Journal, 85. https://www.e-flux.com/journal/85/155472/reprogramming-decisionism/