FireBox and Computer Architecture in 2020
I recently read an interesting article summarising the keynote presentation by Krste Asanović (Berkeley CS) at FAST 2014, titled 'FireBox: A Hardware Building Block for 2020 Warehouse-Scale Computers'. It's a very readable presentation, and even if (hypothetically) you aren't interested in the latter portion of the talk (on DIABLO, an FPGA-based simulator for 'warehouse-scale computers' (WSCs)), the first part alone makes some interesting points about the future of computing at scale.
Some of the high-level architectural predictions seem eminently reasonable and likely to come to fruition: encryption of all data both in transit and at rest; massive use of NVRAM, probably driven by next-generation technologies such as STT, PCM or memristors; high-radix switching and greater use of photonic technology. But there were a couple of low-level architectural predictions that seemed especially interesting to me.
One is the claim that the Instruction Set Architecture is 'The most important interface in a computing system'. This seems like the sort of thing only a CPU architect could really believe, as the rest of the world moves towards ubiquitous use of high-level languages, frequently JVM-based, running on cloud infrastructure at ever higher levels of abstraction, culminating in PaaS; in a PaaS you CAN'T care about the OS, let alone the ISA. Of course there will always be people who need to know the ISA details, but the claim that it is 'the most important interface in a computing system' seems hard to accept at face value.
However, the second prediction about CPU architecture, i.e., the importance of 'vertical integration' of functionality into the CPU, is much more interesting. We see this happening already with network interfaces, and in certain domains, e.g., baseband processing for 3G/4G cellular systems, this has been the norm for years: complex functions such as Turbo decoding are frequently provided as specialised hardware functional units, in order to perform tasks that are simply not computationally efficient in software.
Of course, at some level these functional accelerators are just extensions to the CPU instruction set, and so this trend supports the claim that ISAs are important; a good example is Intel's 'AES New Instructions', which have apparently made AES 'practically free of performance cost on our modern processors' (claim possibly to be taken with a pinch of salt). But for the sake of argument I'll stick to my point that the ISA is not 'THE most important interface...' (my emphasis).
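As a concrete illustration of how such an accelerator surfaces through the ISA, here is a minimal sketch in C of encrypting one AES-128 block using Intel's AES-NI intrinsics. This is only a sketch: key expansion is omitted and the eleven round keys are assumed to be pre-expanded; it needs to be compiled with AES-NI support (e.g., -maes) on a processor that provides the instructions.

#include <immintrin.h>  /* AES-NI intrinsics; compile with -maes on AES-NI-capable hardware */

/* Encrypt a single 16-byte block with AES-128.
 * round_keys[0..10] are the 11 pre-expanded round keys (key expansion,
 * normally done with _mm_aeskeygenassist_si128, is omitted here). */
static __m128i aes128_encrypt_block(__m128i block, const __m128i round_keys[11])
{
    block = _mm_xor_si128(block, round_keys[0]);          /* initial AddRoundKey */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, round_keys[i]);   /* one full AES round per instruction */
    return _mm_aesenclast_si128(block, round_keys[10]);   /* final round (no MixColumns) */
}

The point is that an entire AES round, which in software would be a series of table lookups and XORs, collapses into a single aesenc instruction: specialised hardware, but exposed to the programmer purely as an ISA extension.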
A final observation is that, at a meta-level, it's interesting to see some of the higher-level points being spelled out explicitly. For example, the trend towards service-oriented architectures (in an informal sense, i.e., not the formal 'SOA' stuff) seems to me to be already well known and taken as a given by industry folks working in the distributed systems domain. The (perceived) need to spell that out reflects the fact, borne out by my own recent experience, that academia often has a distorted (certainly incomplete) view of what is actually already happening 'in the real world'. It's interesting to consider what could be accomplished if industry leaders got together with the academic community, but that's a subject for another day.
Free Agent
Steve, you know well, it's not that industry is not interfacing with academia; it's just that the need to maintain competitive advantage also stifles the dissemination of the important parts of academic research done in service to a sponsor. Equivalently, I would say that with few exceptions (save for a few ever-present industry-to-academia/vice-versa migrants), there is less than sufficient visibility into industrial developments at or near the time of development. Thus, academia is frequently playing something like a two-year or more game of catch-up.