Reimagining the Symmetric 375
William & Lynne Jolitz, Symmetric Computer Systems 375

What if we were to redo the Symmetric Computer Systems concept with modern technology, from the ground up? Take a concept from 1980 and transport it 40 years into the future.

Symmetric Computer Systems was intended to realize a grand concept: an arbitrarily extensible computer architecture built from processor/memory/storage nodes connected by high-speed switched networking. A shared-nothing architecture, it was to scale simply by adding nodes as vertices in a graph.

But this was far too much for a systems start-up funded on $300,000, of which $100,000 went to UNIX System V licenses (we had expected a $43,000 Version 7 or 32V license, but AT&T wouldn’t do that).

The market also insisted on bitmap graphics for visualization, and serial/parallel/storage ports for peripherals. The graphics hardware/software cost about as much as the node itself, and the networking wouldn’t quite fit within the budget either. Attempts to get funding for graphics/networking didn’t succeed.

The wire-wrapped prototype and Rev 0 (gold colored) through Rev 2 boards were sold without networking but with peripheral ports. A tiny $40,000 follow-on round got networking as a daughter board that was retrofitted to Rev 2 boards with immense difficulty (Rev 1 boards were for those who couldn’t wait for networking, the five Rev 0 boards were for the initial customers who couldn’t wait for the Rev 1s, and, believe it or not, a slower, more costly "GEM" Multibus system built from other vendors’ components served those who couldn’t even wait for those!).

(Rev 3 was about adding backup via SCSI, but the board run never worked right; small mods for a hasty replacement 3A board resulted in the most stable version. Rev 4 shortly followed with increased memory capacity.)

The original concept was altered just to meet finance and market requirements. The original plan was to go to market with a single board carrying bitmap graphics, Ethernet, IDE storage, and the highest-performance CPU/memory, then use partially populated boards (no graphics, some with storage) to build nodes. Board size was constrained to that of a sheet of paper, later folded into two stacked half-sheet boards.

The storage vendors weren’t thrilled to supply us - the choices were SASI, ST506/MFM, and what would become IDE. Finis Conner insisted on a price premium and wouldn’t budge, which was ironic because IDE/ATA would become the most economic/performant within ten years. The SASI (soon to be SCSI) drives were already headed into enterprise space to take out the SMD storage module drives, whose cost/size were beyond our reach. (I tried to work a long-term deal with Conner to get him his near-term premium but gradually follow the obvious MFM pricing curve; he insisted that IDE would never, ever drop below $500/drive - it did so, unsurprisingly, within a handful of years.) Part of the reason storage was chaotic in the 375 was vendor thrash.

The half sheet meant 2MB of 256K RAM, later 8MB (it was to be 4MB/16MB - Rev 5 would have been a 32MB 32532). Ports were sized for cubicle / single-office use. Networking was tough given the market requirements, because most customers had not experienced it and proprietary networks dominated - it was something that came onto the board and was then postponed, due to this market thrash.

Even the processor was a thrash, for market and vendor reasons. The 16000 (later renamed 32000) chipset came from my prior involvement with National Semiconductor, which at the time had the best (but not perfect) match for the 32-bit architecture software market requirement, determined by BSD usage rather than System V (or other) usage. National’s theory was to dominate on price/cost in 32-bit processors as a follow-on to the 16/32-bit processors that Intel and Motorola then dominated. (While I was still at National they continually lied about process/“bug” issues, which prolonged the life of their processor line; the workarounds for these built into Symmetric systems left it unaffected, but others were majorly affected. National responded by not increasing volume discounts, and even premium-priced later versions out of Symmetric’s reach.)

(Kenyon Mei left National to start the 386 just as I arrived to do a portable C-based compiler and a 4.1BSD port; most of the work on an advanced VM/drivers was done by David I. Bell, Laura Neff, and Mike Puckett. After I left, Landon Noll's group reworked it into 4.2BSD, and Mike Karels and I did the 4.3BSD version. Ken Abrams and Mike Karels sweated out the initial load/debugging of the wire-wrap prototype that Donald Billings built from our joint design.)

Memory was also affected - generic 256K DRAM pricing did not continue into the 1MB part. Apple (Debbi Coleman) couldn’t get enough 1MB chips, so it bought up a reserve at a hideous price. This meant few 8MB systems could be sold, no matter the price - the ones that were sold came from spot-market buys. So limited was supply that even bad chips couldn’t be resupplied with new ones, so purchases were risky. Most of the spot buys were from sources with stale pricing information - these dried up fast. The problem persisted until the 1MB DRAM was obsolete.

So to return to the original concept - the node. With 20/20 hindsight (at the end of 2020!) it should have been:

A 386 processor running a BSD rederived from the ground up with *no legacy UNIX code*, DRAM modules, non-Conner IDE drives, and multiple Ethernet ports and power supplies.

Even the name should have been different. The original plan was to call it the 750, to market it directly against DEC’s VAX-11/750 but with graphics (the idea was to head off the workstation and enterprise markets by commingling them, with non-display nodes increasing capability in increments of storage/processor/memory as stacks of boxes and/or racks). The 375 name came from frustration with finance/marketing mis-appreciating graphics - we could only get half of the system financed. We never could sell the “node” as a product in itself. Let’s just call it Node.

The operating system was eventually open sourced, and would have been incrementally rewritten to support a “headless” “dark mode” with a simplistic fault-tolerance model of node dropout, the “split brain” problem addressed by paused reintegration as needed (e.g. not for realtime transaction processing, just a gradually degrading cluster size).
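
As a minimal sketch of that fault model (our illustration, not the actual code): a dropped node simply shrinks the cluster, and a returning node stays paused until its view of membership matches the survivors’, so two halves of a partition can never both act as the cluster.

class Cluster:
    def __init__(self, nodes):
        self.members = set(nodes)   # nodes currently in the cluster
        self.pending = set()        # dropped nodes awaiting reintegration

    def drop(self, node):
        """A node stops responding: the cluster gracefully degrades."""
        self.members.discard(node)
        self.pending.add(node)

    def request_rejoin(self, node, view):
        """Reintegration pauses unless the node's membership view agrees
        with the surviving majority, avoiding a split-brain merge."""
        if node in self.pending and view == self.members:
            self.pending.discard(node)
            self.members.add(node)
            return True
        return False   # stay paused; retry after resyncing state

cluster = Cluster({"n1", "n2", "n3", "n4"})
cluster.drop("n4")                                 # degrades to 3 nodes
cluster.request_rejoin("n4", {"n1", "n2"})         # stale view: stays paused
cluster.request_rejoin("n4", {"n1", "n2", "n3"})   # consistent view: rejoins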

There was no graphics workstation market - Sun and others were wrong. There was no NUMA market - Sequent and others were wrong. Interconnects and busses are all major distractions with no future product. There was a fault-tolerant market, and Tandem was right, but it was too small and too specialized due to realtime transaction processing limits; even Roel Pieper couldn’t make “reliability, no limits” work. (The Guardian operating system did achieve its goal of a practical fault-tolerant system, but could never be mainstream or evolve. Tandem's ServerNet begot InfiniBand, but that too was too narrow for broad application.)

There also are no storage-only architectures. The closest thing to the node today is what Google has always built its cloud out of. But even that missed the software and hardware architecture that was ultimately the goal.

Lynne thought ServerNet missed the internet as a transport layer, so she devised a switched transport layer to serve in its stead. Store and forward is a mistake, one “worked around” with Spectre-provoking bugs; it is better dealt with by ballistic logic that reconciles the time/state "in flight".
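
A back-of-envelope sketch (illustrative numbers, not a measurement) of why store-and-forward is costly: every hop must buffer the whole frame before forwarding it, so the serialization delay is paid at each hop, where a cut-through fabric pays it essentially once.

frame_bits  = 1500 * 8     # a full Ethernet frame
header_bits = 14 * 8       # enough to pick an output port
rate_bps    = 10e9         # 10 Gb/s links
hops        = 5

# Each store-and-forward hop re-pays the full serialization delay;
# cut-through pays only the header time per hop plus one full frame.
store_and_forward = hops * frame_bits / rate_bps
cut_through       = (hops * header_bits + frame_bits) / rate_bps

print(f"store-and-forward: {store_and_forward * 1e6:.2f} us")  # ~6.00 us
print(f"cut-through:       {cut_through * 1e6:.2f} us")        # ~1.26 us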

Many of the passionate distractions of computer systems technology are due to transient technology limits and market aberrations. Even current cloud computing simply addresses the weaknesses of this older technology base, still pockmarked by this acne of the past.

Spectre bugs abound because of the amplification of old technology bugs by VLIW multiplication - in the ruthless grab for performance there was no way to afford the separate MMUs and address faults necessary to retain the semantics of serialized memory references and so avoid them, so they let the cards fall where they would. The Dan O'Dowd / Les Cohn National Semiconductor 16000 avoided memory increment/decrement address modes so that memory faults would not need to be “instruction completed” by software inspecting/patching state, preferring abort/restart instead. And even AMD’s multiple MMUs minimize the Spectre “footprint” of X86-64, unlike Intel’s. (We designed a 64-bit VLIW version using Lynne's ballistic logic, as an alternative that avoids the inherent weaknesses of Intel, ARM, and RISC-V and wouldn't give rise to Spectre bugs.)

The 375 board architecture was all about a tight focus on processor and memory; that was our biggest concern. Ideally, we didn’t want the three-chip set (CPU, MMU, FPU) but one combined part. Instead of Motorola’s 3- or 4-cycle memory access, we were stuck with 5 because the CPU/MMU split added a cycle. The FPU separation hit performance as well. (National’s plan was a full single-chip unit with 32-bit memory and address buses before 1984, but this was postponed to the 532 version that arrived in 1987, years too late for Symmetric, which never received a single unit.) Interim versions of the chip never held enough promise for a new board revision, as their costs were higher than the 386, which was heading into consumer volume. Design-in was viewed as "locked-in". And National's management was sold on the Encore vision of 1,000+ processor supercomputers at the Fairchild Clipper's $1K-per-chipset price, while Pentium horizon pricing suggested volume pricing of $200 in the mid tier!

In truth, all processor vendors eventually paint themselves into a corner, as Intel did. Apple switched from the 6502 to the 68000 to PowerPC to X86-64 and now to ARM, each time painfully. None of this was, or is, necessary: vendors are changeable, and you can take things in-house.

Computer architectures can be “future proofed”. The economics of chips and software are straightforward. One can even evolve them gracefully such that shedding the past doesn’t impact the present.

But there’s no market demand for this seamless flow, and technology vendors see a disadvantage in it: you can’t motivate new product sales if there isn’t pain constantly needing relief.

Yet you can see, from the move to “in house” silicon by Apple, Microsoft, and Google to control products within the firm for internal benefit, that such evolution is of value, and that common architectures are doomed.

So a modern reinvention of the Node would have a custom processor that is easily upgraded and steers away from oddities. National used frequency coding to encode word/address lengths; the same scheme that does 8/16/32 easily can do X/Y/Z at any sizes as needed. Much as SpaceX built its Raptor rocket engine to be arbitrarily scalable, the same is easily done for a computer processor, with significant economic advantages for a broad product lineup.
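
A minimal sketch of the idea, in the spirit of the 32000’s variable-length displacement fields (the exact bit layout here is assumed for illustration, not quoted from the silicon): the leading bits of the first byte announce the field’s width, so frequent short values stay short and the same scheme extends to any width.

def encode(value: int) -> bytes:
    if value < 1 << 7:       # 0xxxxxxx : 7-bit value in 1 byte
        return bytes([value])
    if value < 1 << 14:      # 10xxxxxx xxxxxxxx : 14-bit value in 2 bytes
        return bytes([0x80 | (value >> 8), value & 0xFF])
    if value < 1 << 30:      # 11xxxxxx + 3 bytes : 30-bit value in 4 bytes
        return bytes([0xC0 | (value >> 24)]) + value.to_bytes(3, "big")
    raise ValueError("value too wide for this sketch")

def decode(data: bytes) -> int:
    first = data[0]
    if first & 0x80 == 0:                 # 1-byte form
        return first
    if first & 0xC0 == 0x80:              # 2-byte form
        return ((first & 0x3F) << 8) | data[1]
    return ((first & 0x3F) << 24) | int.from_bytes(data[1:4], "big")

assert decode(encode(42)) == 42
assert decode(encode(10_000)) == 10_000
assert decode(encode(1_000_000)) == 1_000_000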

Likewise, processor and operating system architecture would stop following PDP-11 artifacts, and would reuse silicon test frameworks like JTAG as the virtualization/exception/recovery means, rather than reimplementing the past structures that partially served that role for the kernel. For much of the overhead of technology is simply slight variations on a theme, kept for reverse compatibility.

If there are no common architectures, does reverse compatibility matter? Clearly data formats do (volume), but if software is described by languages/specifications that can be checked/altered/“evolved” in software as well, we don’t need binary compatibility. AI/ML “processors” also have abstractions that are equally circumscribed, so they can be “containerized” as well.

We evolve the library concept to handle what’s done by Docker and its utilities - the OS works only with the presumptions of built “sub operating systems” that overlay/nest for compaction, and the concept of hardware is built *inside containers*. When you boot a processor/memory, it’s composed from these.
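
A minimal sketch of that composition (our reading of the idea, with hypothetical layer names): booting composes a stack of nested overlays, with hardware itself appearing only as content inside the bottom container.

def compose(*layers: dict) -> dict:
    """Overlay layers left-to-right; later layers override earlier ones."""
    image = {}
    for layer in layers:
        image.update(layer)
    return image

# Hardware is just content in the bottom container, not a separate world.
hardware = {"cpu": "16-core", "mmu": "builtin", "phy": 8}
base_os  = {"kernel": "bsd-derived", "net": "switched"}
role     = {"net": "cluster-fabric", "service": "storage-node"}

boot_image = compose(hardware, base_os, role)
# {'cpu': '16-core', 'mmu': 'builtin', 'phy': 8,
#  'kernel': 'bsd-derived', 'net': 'cluster-fabric', 'service': 'storage-node'}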

Bought/sold/repaired is container management in all ways.

Networking is built into the Node as switching, with both routing and a cascade-able capability across a dimension of the Node’s PHYs, dynamically reconfigurable.
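
A minimal sketch of that switching (illustrative, not a design): each Node owns a set of PHYs, a runtime routing table maps destinations onto them, cascading is just pointing a PHY toward a neighbor, and reconfiguration is a table update.

class NodeSwitch:
    def __init__(self, node_id: str, phys: int):
        self.node_id = node_id
        self.phys = phys
        self.routes = {}              # destination node -> PHY index

    def connect(self, dest: str, phy: int):
        """Reconfigure at runtime: route traffic for dest out this PHY."""
        if not 0 <= phy < self.phys:
            raise ValueError("no such PHY")
        self.routes[dest] = phy

    def forward(self, dest: str) -> int:
        """Pick the PHY that leads toward dest."""
        return self.routes[dest]

sw = NodeSwitch("node-a", phys=8)
sw.connect("node-b", phy=0)
sw.connect("node-c", phy=0)   # cascade: node-c reached via node-b
sw.connect("node-c", phy=3)   # topology change: direct link now
assert sw.forward("node-c") == 3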

Other dimensions are cores, memory, and storage, all silicon. Memory isn’t persistent but storage is. The whole unit is specified as a certain process (say 7nm), implying speed/density, and the volume is set, cubesat-like, as built out of 10cm cubes with common power/thermal attachment (which is a limitation of the "blade" concept).

The equivalent of a 375 would then be a single cube that fits on a power/cooling pedestal, with an integral wireless network for immediate access/configuration (as the 375’s console was used). (Incorporated as a cloud server component, the wireless network would be “bonded” to the location’s other components as the beginning of provisioning for that role - one example among hundreds of other roles.)

To get started, the first 10cm cube would be composed of commodity 4TB SSD storage with a densely packed 7nm FPGA (processor, cache, network) and 16 gigabyte DDR3 unbuffered SO-DIMMs, implementing a 16-core 5GHz scalable processor with 8 PHYs. Performance would be like a Dell PowerEdge server.
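
Written down as a configuration sketch (the field names and structure are ours, the values come from the paragraph above; this is not a product spec):

from dataclasses import dataclass

@dataclass
class CubeSpec:
    edge_cm: int        # cubesat-style 10cm module
    process: str        # implies speed/density
    ssd_tb: int         # persistent storage
    dram_gb: int        # non-persistent memory
    cores: int
    clock_ghz: float
    phys: int           # network ports, switchable

starter = CubeSpec(edge_cm=10, process="7nm FPGA", ssd_tb=4,
                   dram_gb=16, cores=16, clock_ghz=5.0, phys=8)
print(starter)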

This “new kid on the block” would simply begin the reinvention of the computing model, as it wouldn’t function like a Dell PowerEdge server in the slightest. Because doing so would serve no purpose.

Like the 375, the point is to be minimalist, because the purpose is to magnify involvement in the data by removing everything else that doesn’t need to be there. This is not to deny that “some” does need to be there.

But that “some” lives only inside the containers it needs to be in. And the invisible cost of this minimalism is that the contents of the containers are massively managed by software and people, because they are the weakest link in the entire concept.

The containers are themselves “simple”. But … how they are arranged and packaged is highly complex. Unlike Docker, they are neither arbitrarily built nor designed with resource compartmentalization in mind.

Security, integrity, disaster/fault recovery, and redundancy are the primary focus of the containers. Less secure usage is allowed within virtualized environments offered by highly secured nested containers, offered as cloud services instances. Management is via deep verification on any alteration.
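
One obvious realization of “deep verification on any alteration” (an assumption on our part, not the original design) is to fold each nested container’s digest into its parent’s, so a change anywhere in the nesting invalidates the outermost digest before the container is trusted.

import hashlib

def digest(contents: bytes, children: list) -> str:
    """Hash a container's contents plus the digests of its nested children."""
    h = hashlib.sha256()
    h.update(contents)
    for child in sorted(children):   # child digests fold into the parent
        h.update(child.encode())
    return h.hexdigest()

inner = digest(b"service code", [])
outer = digest(b"secured wrapper", [inner])

# Any alteration of the inner container changes the outer digest too:
tampered = digest(b"service code (altered)", [])
assert digest(b"secured wrapper", [tampered]) != outer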

Cloud services might be composed of these in part or in full (one could, like Tesla’s “autonomous taxi” service, rent out node services not unlike SETI@home).

No distributed services are in this model (but distributed computing can be built upon the model).

Node isn’t a laptop, isn’t a server, isn’t a database, and isn’t a replacement for any appliance. Like the 375, it is a component for building a fabric of technology from huge numbers of them, to prototype rich future follow-on technologies without being saddled by the past.

So there you have it. A 40 year jump from NMOS 10MHz to modern CMOS 5GHz.

Caleb Eastman

IT/OT Convergence Subject Matter Expert

3y

It's a very interesting concept. I want to do the 2021 version of this exact thing, using RISC-V U74 silicon from SiFive, an FPGA for the industrial Ethernet side, and a reprogrammable DPU like what Mellanox makes for their BlueField, in order to accommodate Time Sensitive Networking compute offload for systems that are latency sensitive, highly distributed, and mission critical... like... I don't know. Autonomous cars at scale.

Michael Free McGlothlin

Code Jesus - I wash away your code sins so you can live in code paradise! · Maker · Software Engineer · Software Architect · Legacy System Modernization Consulting · E-Commerce Consulting · LION #ONO

3y

I built something a bit similar about 1999 that snapped together like lego bricks. That’d be way easier today with everything miniaturized and more power efficient.

Paul Vixie

Restoring Human Security to Pre-Internet Levels

3y

bill, thanks for this. i'd never heard your larger vision at the time. may i pose a follow-up question? in what ways did your s/375 enable and/or influence dave rand's PC532?

Paul Vixie

Restoring Human Security to Pre-Internet Levels

3y

in spite of some frustrations, this was the most exciting computer i'd ever heard of, farther ahead of its time than anything before or since. i wish i still had my s/375. you and lynne deserved accolades for this little machine.

Richard Grant

Tate exhibited artist | Abbey Road mentor | Strategic critical thinker | Creative & Technical innovator | 25k+ Connections

3y

(what does it do)?
