Where are all the computers?

Long before electronic circuits, computation was a human endeavor. The term “computer” originated in the 17th century to describe people who performed mathematical calculations by hand. By the 1930s, institutions like the United States National Advisory Committee for Aeronautics (NACA) began hiring women as “human computers” to process aerodynamic data. At Langley Research Center, a pioneering group of five white women formed the first computing pool in 1935, analyzing wind tunnel and flight test results. Their work required exceptional mathematical aptitude, precision, and endurance, as they solved differential equations and plotted trajectories using slide rules, graph paper, and mechanical calculators.

The demand for human computers surged during World War II and the Cold War space race. In the 1940s, NACA began recruiting African-American women, though segregation laws confined them to a separate unit known as the “West Area Computers.” Despite systemic barriers, these mathematicians became instrumental to projects such as the supersonic X-1 aircraft and the Mercury program. Katherine Johnson, whose calculations verified the trajectory for John Glenn’s 1962 orbital flight, epitomized their critical role: her work was so trusted that Glenn reportedly insisted she check the IBM 7090’s results by hand before his mission.

The human computers’ legacy extended beyond raw computation. They developed novel methodologies, such as iterative approximation techniques for solving nonlinear equations, which later informed algorithmic design. Their work laid the groundwork for modern numerical analysis, demonstrating that human intuition and creativity could outperform early mechanical systems — a reality mirrored in today’s debates over AI’s limitations.
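For a sense of what “iterative approximation” means in practice, here is a minimal Newton’s method sketch in Python. It is an illustration only: the function and starting guess are invented for the example, not drawn from the Langley records.

def newton(f, df, x0, tol=1e-10, max_iter=50):
    # Newton's method: repeatedly refine a guess x for a root of f(x) = 0
    # by replacing x with x - f(x)/df(x). The same refine-and-check cycle
    # was once worked through by hand with slide rules and desk calculators.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step
        if abs(step) < tol:   # stop when successive guesses agree
            return x
    return x

# Illustrative use only: find the positive root of x**2 - 2 = 0.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))   # ~1.4142135623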


IBM’s Mainframes and the Disruption of Human Labor

In the late 1950s, NASA’s investment in IBM 704 and 7090 mainframes marked a turning point. These room-sized machines could perform calculations in minutes that took human computers weeks. Initially met with skepticism, the IBM systems soon proved their worth during high-stakes missions like Mercury and Apollo. However, their introduction threatened the livelihoods of human computers, who faced a stark choice: adapt or become obsolete.

The shift was not seamless. Early programmers struggled to translate human intuition into machine-readable code. Fortran (Formula Translation), developed under John Backus at IBM and released in 1957, revolutionized this process by enabling scientists to write code in something close to algebraic notation. For example, an early Fortran fragment that scans a table of values and flags any that grow too large looks like this:

      DO 10 J = 1, 11
      I = 11 - J
      Y = F(A(I + 1))
      IF (400 - Y) 4, 8, 8
    4 PRINT 5, I
    5 FORMAT (I10, 10H TOO LARGE)

This bridge between mathematical logic and machine execution democratized programming but demanded new skills from the workforce.
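For readers who do not read fixed-format Fortran, here is a rough Python sketch of the same logic. The names F and A are placeholders for the function and eleven-element table the fragment assumes, and the excerpt does not show statements 8 and 10, so the non-printing branch is left as a stub.

def scan_table(F, A, limit=400):
    # Rough modern translation of the Fortran fragment above. F and A stand
    # in for the function and 11-element array the excerpt assumes.
    for j in range(1, 12):               # DO 10 J = 1, 11
        i = 11 - j                       # I = 11 - J, counting 10 down to 0
        y = F(A[i])                      # Y = F(A(I + 1)); Python lists are 0-based
        if limit - y < 0:                # IF (400 - Y) 4, 8, 8  (negative -> label 4)
            print(f"{i:10d} TOO LARGE")  # PRINT 5, I with FORMAT (I10, 10H TOO LARGE)
        # zero or positive -> label 8, which the excerpt does not show

The arithmetic IF on the fourth Fortran line, which branches three ways on the sign of its argument, is typical of the constructs engineers and former human computers had to learn as programming replaced hand calculation.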

The replacement of human computers with IBM systems yielded unprecedented efficiencies. Complex simulations, such as reentry heating profiles for the Apollo command module, became feasible, accelerating the Moon landing. However, automation also concentrated technical power. Engineers who once collaborated with human computers now relied on a smaller cohort of programmers, altering workplace dynamics and marginalizing those unable to transition.

Similar patterns emerged in other sectors. The rise of spreadsheet software like VisiCalc (1979) and Excel (1985) displaced clerical workers but created demand for financial analysts. Word processors eliminated typist roles but expanded technical writing and editing fields. Each wave of automation reshaped labor markets, privileging adaptability over tradition.


The Human-Machine Symbiosis

The Fortran era also underscored the enduring value of human ingenuity. While IBM machines excelled at repetitive calculations, they lacked the contextual reasoning of their human predecessors. Programmers like Dorothy Vaughan synthesized domain knowledge (e.g., aerospace physics) with coding skills, ensuring machines solved the right problems.


AI’s Transformative Potential

Today’s AI systems, powered by neural networks and massive datasets, mirror the disruptive potential of 1960s mainframes. Large language models (LLMs) like GPT-4 can draft code, write reports, and solve complex problems that were once the exclusive domain of educated professionals. A 2023 McKinsey study estimates that AI could automate 30% of hours worked across industries by 2030, impacting roles in software engineering, law, and creative arts.

Yet, as with Fortran, AI’s rise is creating new niches. Prompt engineering, AI ethics auditing, and model fine-tuning are emerging disciplines requiring hybrid skills. For instance, biomedical researchers now use AI to predict protein structures but must validate outputs against experimental data — a blend of domain expertise and algorithmic literacy.


The arc from human computers to AI reveals a persistent truth: technological progress is cyclical, not linear. Each revolution displaces certain roles while creating others, demanding perpetual adaptation.


The human computers of Langley answered their era’s challenges with ingenuity and resilience. Their legacy urges us to approach AI not with fear but with the same determination to harness technology for collective advancement. The question remains: as we stand on the brink of another computational frontier, are we prepared to learn from history, or doomed to repeat its mistakes?


This article was originally published on my personal blog
