When the giant IBM loses to CDC.

The Cold War arms race gave rise to supercomputers in the 1960s, driven particularly by the high demand for computing power from two nuclear weapons laboratories:

Lawrence Livermore National Laboratory and, of course, Los Alamos National Laboratory.

More precisely, the emergence of supercomputers was driven by demands formulated by one of the key figures of the Manhattan Project and the father of the H-bomb: Edward Teller.

In early 1955, Edward Teller sought a new scientific computing system capable of performing three-dimensional hydrodynamic calculations.

Edward Teller

Nearly unlimited budgets, the opportunity to explore bold R&D initiatives, and the prestige of building the world’s most powerful computer—these were attractive prospects for an industry on the rise.

What follows is a new rendition of the battle between David and Goliath, with IBM as Goliath and CDC as David.

  • International Business Machines (IBM) was already an industry giant, with a long history of business success since its founding in 1911 (as the Computing-Tabulating-Recording Company, renamed IBM in 1924) through the merger of companies born of the late 19th-century electromechanical revolution. By 1960, IBM had $1.8 billion in sales, 104,000 employees, and a global presence.
  • Control Data Corporation (CDC): A company founded in 1957 by William Norris, with just a few dozen employees but big ambitions.

IBM was the first to make a move at the end of 1955, promising to create a machine capable of reaching a performance level of 4 MIPS, with the primary goal of undermining the adoption potential of a competing project: the UNIVAC LARC.

This led to the creation of the ambitious IBM 7030 Stretch project, which was completed in 1961.

CDC joined the race much later with the CDC 6600 project, which was brought to market in 1964.


Before going into the innovations of these machines, their legacy, and the people who created them, let’s quickly look at the outcome of this competition.

  • At its launch, the IBM 7030 Stretch was indeed the most powerful computer in the world, but it achieved only a third of its promised performance level (1.2 MIPS). This shortfall led IBM to cut its sale price by nearly half, resulting in only nine units sold. In short: a failure.

  • Upon its release, the CDC 6600 reached 2 MIPS, snatching the "blue ribbon" from IBM. This performance level was achieved by a machine costing only a third of the reduced price of the IBM 7030 Stretch. It quickly became a must-have for laboratories worldwide, selling over 100 units.

This situation was felt as a slap in the face by IBM's chief executive, Thomas J. Watson Jr., who expressed his frustration in a scathing internal memo, famously noting that the 6600 had been built by a team of just 34 people, "including the janitor."

IBM went so far as to employ a "vaporware" strategy to try to curb CDC's success: it announced an advanced high-end machine, known as the ACS-1, which was intended to surpass the CDC 6600 in speed.

However, the machine was still only theoretical.

After documenting numerous sales lost to IBM's announcements, Norris filed a major antitrust lawsuit against IBM in 1968. IBM never delivered its "6600 killer," and the case was settled in CDC's favor in 1973, with a package valued at roughly $600 million.


Let’s now turn to the people who took part in this “battle.”

On the CDC side:

  • William Norris (1911-2006): Co-founder and CEO of Control Data Corporation (CDC). Norris served in the U.S. Navy during World War II, where he gained experience in electronics and communications technology. After the war, he co-founded Engineering Research Associates (ERA), a precursor to CDC. Norris’s vision extended beyond technology: he was known for his socially responsible approach, advocating corporate involvement in societal issues such as education and economic development. He remained active with CDC until his retirement in the 1980s.

William Norris

  • Seymour Cray (1925–1996): Widely regarded as the "father of supercomputing." Cray had an early passion for electronics, which he pursued through formal education and his service in the U.S. Army during World War II. After the war, he joined Engineering Research Associates (ERA) and later moved to CDC, where he led the development of its flagship machines, including the CDC 6600. Known for his innovative designs, Cray focused on maximizing processing speed through pioneering cooling systems, streamlined architectures, and parallel processing techniques. In 1972, he founded Cray Research to focus solely on supercomputing, where he continued to develop industry-leading machines, including the Cray-1, one of the most successful supercomputers of its time. Cray’s work transformed fields from scientific research to weather forecasting and established the supercomputer as a critical tool for complex computational tasks.

Seymour Cray

  • James Thornton (1925-2005): Thornton played a critical role in designing the CDC 6600 and CDC 7600 supercomputers. As one of CDC's key architects, he contributed significantly to the architecture and logic that made these machines the fastest computers of their time. His expertise was particularly crucial in pipeline and parallel processing, innovations that were fundamental to the high-speed performance of CDC’s supercomputers. His work on instruction set design and efficient data handling helped push the boundaries of computational speed, cementing CDC's reputation as a leader in the industry. Beyond his work at CDC, Thornton authored the influential book Design of a Computer: The Control Data 6600, which provided deep insights into computer architecture and was used as a foundational text in computer science education. Thornton’s contributions have had a lasting impact on high-performance computing and inspired future generations of computer engineers.

James Thornton

On the IBM side:

  • Stephen Dunwell (1913-1994): Dunwell was instrumental in IBM's transition to electronics, notably by creating the first electronic punch-card sorting machine, and later in its transition to computing. Although he was held responsible for the failure of the Stretch project, the innovations it introduced proved essential to the company’s future successes, including the IBM System/360. His work emphasized performance enhancement and error correction, advancing IBM’s computing technologies. After IBM, he focused on computer-assisted education and earned the IEEE Computer Pioneer Award.

Stephen Dunwell

  • Werner Buchholz (1922-2019): A German-born computer scientist and engineer, Buchholz joined IBM in the 1950s and contributed to the Stretch project. He is best known for coining the term “byte” and for his role in developing efficient data structures and memory management within the 7030 architecture. His work influenced IBM’s data processing standards for years.


Werner Buchholz

  • Gene Amdahl (1922-2015): A computer architect and theoretical pioneer, Amdahl was deeply involved in Stretch’s architecture. He developed ideas on parallel processing efficiency, later formalizing these into Amdahl’s Law, which became a key principle in parallel computing (a one-line formulation is sketched below). Amdahl left IBM to found his own company, Amdahl Corporation, where he continued to innovate in mainframe computing.

Gene Amdahl
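As a quick illustration of the principle that later bore Amdahl's name (a modern formulation, not his original notation): if a fraction p of a program's work can be spread across n processing units, the overall speedup is 1 / ((1 - p) + p / n), so the serial fraction sets a hard ceiling on the gain.

```python
# Amdahl's Law: the serial fraction (1 - p) bounds the achievable speedup.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup when a fraction p of the work runs on n units in parallel."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # With 95% of the work parallelizable, even a million units
    # cannot push the speedup past 20x.
    for n in (2, 8, 64, 1_000_000):
        print(f"n = {n:>7}: speedup = {amdahl_speedup(0.95, n):.2f}")
```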

Let's now move on to the technical innovations.

To achieve record performance goals, both machines shared certain technical choices:

  • An innovative 100% transistorized approach
  • An implementation of parallelism
  • A pipelined architecture (a cycle-count sketch follows this list)
  • The foundations of the superscalar approach
  • Memory interleaving (a bank-mapping sketch also follows)
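To make two of these ideas concrete, here are minimal Python sketches; these are modern illustrations under simplified assumptions, not models of either machine. The first compares total cycle counts with and without pipelining, assuming one cycle per stage and no hazards:

```python
# Why pipelining helps: k stages, 1 cycle per stage, no hazards
# (a deliberately simplified model, not a simulation of either machine).

def sequential_cycles(n_instructions: int, n_stages: int) -> int:
    # Unpipelined: each instruction occupies the machine for all
    # n_stages cycles before the next one may start.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions: int, n_stages: int) -> int:
    # Pipelined: n_stages cycles to fill the pipe, after which one
    # instruction completes every cycle.
    return n_stages + (n_instructions - 1)

if __name__ == "__main__":
    n, k = 1000, 4
    print("sequential:", sequential_cycles(n, k))  # 4000 cycles
    print("pipelined: ", pipelined_cycles(n, k))   # 1003 cycles
    # The speedup approaches k as n grows.
```

The second shows low-order memory interleaving: consecutive addresses map to different banks, so sequential fetches can overlap instead of queuing on a single busy bank (the bank count here is illustrative):

```python
# Low-order interleaving: the low address bits select the bank,
# the remaining bits select the word within the bank.

N_BANKS = 4  # illustrative choice; real machines varied

def bank_of(address: int) -> int:
    return address % N_BANKS

def offset_in_bank(address: int) -> int:
    return address // N_BANKS

if __name__ == "__main__":
    for addr in range(8):
        print(f"addr {addr} -> bank {bank_of(addr)}, word {offset_in_bank(addr)}")
    # A streaming pattern (0, 1, 2, 3, ...) touches banks round-robin,
    # so a new access can start while earlier banks are still busy.
```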

Many of the innovations introduced by these two machines paved the way for the future and are now commonplace.

For the IBM 7030:

  • Interrupts: An advanced interrupt system that allowed the processor to respond to urgent events, such as hardware malfunctions or external signals, by temporarily suspending current tasks and addressing the higher-priority issue. This helped improve the machine's responsiveness and reliability, essential for the scientific applications it was designed to handle.
  • Memory Error Detection and Correction: The first computer to incorporate error-correcting code (ECC) memory, a critical innovation for improving reliability (a minimal Hamming-style sketch follows this list).
  • Memory Interleaving: Memory interleaving divided memory into separate banks that could be accessed in a staggered manner. By overlapping memory accesses across these banks, the machine reduced latency and improved data throughput, allowing the CPU to fetch instructions and operands more rapidly.
  • Memory Protection: Memory protection mechanisms that restricted access to certain memory areas based on program permissions. This feature helped prevent accidental or malicious access to critical memory sections, enhancing system stability and security, especially important in a multiprogramming environment.
  • Multiprogramming: Multiprogramming allowed the IBM 7030 to run multiple programs simultaneously by efficiently managing system resources among them.
  • Pipelining: The IBM 7030 incorporated instruction pipelining, where instructions were divided into stages and processed sequentially, allowing the next instruction to begin before the previous one finished.
  • Immediate Operands: The Stretch allowed immediate operands, or constants embedded directly within instructions. This saved time by eliminating the need for a separate memory fetch for these values, speeding up calculations and improving overall processing efficiency.
  • Instruction Prefetch: Fetching instructions before they were needed, based on predicted execution flow. This approach minimized delays due to instruction fetching, allowing the processor to continue executing without waiting for the next instruction.
  • Operand Prefetch: Extended the prefetch concept to data operands. By fetching data operands early, the 7030 reduced potential bottlenecks when the CPU needed those values, thus speeding up arithmetic and logical operations.
  • Speculative Execution: The Stretch anticipated potential branches in program flow and could execute the predicted path in advance. When the prediction was correct, this reduced the delays caused by branch instructions, helping maintain a smooth and efficient processing flow.
  • Write Buffer: Temporarily held data waiting to be written to memory, allowing the CPU to continue executing instructions without pausing for each memory write. This helped to reduce write-related delays, especially when consecutive write operations were required.
  • Result Forwarding: Also called data forwarding, allowed the output of one operation to be used immediately as an input for another operation without waiting for it to be written and read from memory. This was especially useful in pipelined operations, as it minimized stalls and improved execution efficiency.
  • Variable-Length Instructions and Bytes: Stretch pioneered the use of a flexible 8-bit byte, which has since become the standard, as well as variable-length instructions that allowed for efficient data handling.
  • Advanced Multiprocessing and Parallel Processing: Although not fully realized, the architecture of Stretch supported multiple processing units that could work in parallel, laying the groundwork for later developments in supercomputing.
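To give a feel for what ECC memory does, here is a minimal Python sketch of a Hamming(7,4) single-error-correcting code. This illustrates the principle only; Stretch's actual code protected full memory words, and the tiny parameters here are chosen purely for brevity:

```python
# Minimal sketch of the idea behind ECC memory: Hamming(7,4),
# which corrects any single flipped bit in a 7-bit codeword.

def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits into the codeword layout p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    """Recompute parity; a nonzero syndrome is the 1-based index of the bad bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                      # single-bit error detected...
        c[syndrome - 1] ^= 1          # ...and corrected in place
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

if __name__ == "__main__":
    word = hamming74_encode([1, 0, 1, 1])
    word[4] ^= 1                      # simulate a memory fault
    assert hamming74_correct(word) == [1, 0, 1, 1]
```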

While the Stretch project was ultimately deemed a commercial failure due to not meeting its ambitious performance goals, its innovations had a lasting impact on computer architecture and were foundational to later IBM systems, especially the IBM System/360.

One of the industrial benefits of the project for IBM was the development of the IBM Standard Modular System (SMS), a modular approach that streamlined the design and maintenance of IBM’s computers through standardized, pre-assembled circuit modules. SMS allowed for more efficient production, easier maintenance, and scalability across various computing models.

The concept of modular design in SMS was influenced by IBM’s experience with the IBM 7030 Stretch, which pushed the boundaries of transistor-based technology but highlighted challenges in manufacturing and reliability for complex systems. SMS addressed these issues by making components more interchangeable and standardized, building on lessons learned from the ambitious but challenging development of the 7030.

Removing a processing card from a frame of the IBM Stretch mainframe computer

For the CDC 6600:

  • Parallel Processing with Peripheral Processors: The architecture included 10 small peripheral processors dedicated to handling input/output (I/O) tasks, allowing the central processor (CPU) to focus solely on computation. This separation of tasks greatly improved efficiency by reducing the CPU's I/O workload, enabling it to perform more scientific calculations.
  • Scoreboarding for Instruction Dependency Management: A method for tracking instruction dependencies to manage resources and allow out-of-order execution. By using a scoreboard, the computer could handle multiple instructions independently and avoid delays caused by instruction conflicts, significantly enhancing parallel processing (a simplified sketch follows this list).
  • Functional Units and Pipelining: The CPU was equipped with 10 specialized functional units that could operate in a pipelined manner. Each functional unit could work on a specific part of an instruction, allowing several instructions to be processed at once and improving overall throughput.
  • Reduced Instruction Set Computing (Pre-RISC) Principles: A simplified instruction set that allowed for fast execution of instructions. This approach, now associated with RISC architecture, minimized the number of complex instructions and optimized processing speed, which later influenced the development of RISC-based processors.
  • High-Speed Circuitry and Compact Layout: Seymour Cray’s design employed high-speed transistors and a compact circuit layout to reduce distances between components and limit signal delay. This compact arrangement allowed the CDC 6600 to operate at higher speeds than previous computers.
  • Advanced Cooling System: To manage the heat generated by its dense circuitry, the CDC 6600 featured a built-in Freon refrigeration system. This was crucial, as the compact layout and high-speed operation generated significant heat that would have compromised performance and reliability without adequate cooling.
  • Simple Control Console Interface: The operator console was "user-friendly" compared to the hundreds of switches and indicator lights that were standard at the time, providing essential controls and status displays for managing the supercomputer. Operators could quickly identify and address issues or manage the workload.
  • Memory Interleaving: This involved dividing memory into multiple banks, allowing the CPU to access data from different banks in a staggered fashion, effectively reducing wait times and optimizing memory access speeds.
  • Vector Processing Capabilities: Although it wasn’t a true vector processor, the CDC 6600’s architecture laid foundational principles for vector processing. By managing multiple operations and data flows simultaneously, it inspired later supercomputers with dedicated vector processing capabilities.
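To illustrate the scoreboarding idea mentioned above, here is a deliberately simplified Python sketch. It captures only the two checks at the heart of the technique: an instruction issues only when its functional unit is free and no earlier instruction still owes a write to its destination register, and it reads its operands only once no pending instruction will write them. The real 6600 scoreboard tracked considerably more state:

```python
# Highly simplified sketch of 6600-style scoreboarding (illustrative,
# not the actual algorithm or data structures).

from dataclasses import dataclass

@dataclass
class Instr:
    unit: str       # functional unit required, e.g. "add" or "mul"
    dest: str       # destination register
    srcs: tuple     # source registers

class Scoreboard:
    def __init__(self, units):
        self.free_units = set(units)   # functional units not in use
        self.pending_writes = set()    # registers awaiting a result

    def can_issue(self, i: Instr) -> bool:
        # Unit must be free, and no older write to the same dest (WAW).
        return i.unit in self.free_units and i.dest not in self.pending_writes

    def issue(self, i: Instr):
        self.free_units.discard(i.unit)
        self.pending_writes.add(i.dest)

    def can_read_operands(self, i: Instr) -> bool:
        # All sources must have been written already (RAW).
        return not any(s in self.pending_writes for s in i.srcs)

    def complete(self, i: Instr):
        self.free_units.add(i.unit)
        self.pending_writes.discard(i.dest)

if __name__ == "__main__":
    sb = Scoreboard(units={"add", "mul"})
    mul = Instr("mul", dest="R1", srcs=("R2", "R3"))
    add = Instr("add", dest="R4", srcs=("R1", "R5"))  # depends on mul
    sb.issue(mul)
    assert sb.can_issue(add)               # different unit, no WAW: issue OK
    sb.issue(add)
    assert not sb.can_read_operands(add)   # RAW on R1: must wait for mul
    sb.complete(mul)
    assert sb.can_read_operands(add)       # R1 now valid: add may proceed
```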

The 6600’s success paved the way for CDC’s future supercomputers, including the CDC 7600 and the Cyber series, and strengthened the company's image as a high-performance computing powerhouse. The machine also propelled Seymour Cray to fame, establishing him as a leading computer designer and later leading to the creation of Cray Research, which continued pushing supercomputing boundaries.


CDC 6600 Part

PS: The role of James Thornton is very often overshadowed by the image of Seymour Cray; however, Cray himself acknowledged the importance of Thornton's work in the preface of Design of a Computer: The Control Data 6600. (Thanks to Jonny Doin for sharing the text of the book with me.)

