Two computing systems attack the wall

A few months back, while walking around an airport waiting for my next flight, I wandered into a newsstand near my gate. There, the cover of TIME magazine (Feb 13-20, 2023 issue) caught my eye; the title read "The Quantum Leap". Alongside the title was a picture of a contraption containing a quantum processor, ready to be immersed in a cryogenic container. The caption beneath it read "The Future of Computing is here: This machine can solve problems in seconds that used to take years." I picked up the magazine and eagerly read through the article. Its essence was how rapidly quantum computers operate compared to classical computers. It also explained that possessing technological supremacy in this field has geopolitical as well as strategic defence and security implications. Today, many tech giants and several countries are competing for dominance in this field in order to acquire unprecedented power. The article points out that RSA encryption can be broken by a quantum computer (running Shor's algorithm) within a few hours, whereas a classical computer might take a billion years to decipher the very same code! RSA is the essential algorithm behind cyber security in most of our online applications, such as WhatsApp and various banking apps. The main message of the article was that quantum computing is a powerful tool that could become a lifeline for predictive analytics in fields like artificial intelligence and machine learning, and that it will be complementary to classical computing. Quantum computing technology could potentially solve real-world problems that require complex parallel computation.

The idea of developing a computer to solve world problems reminded me of a short course I attended in 2022, presented by Wilfred Gomes from Intel during the IEDM conference. The presentation was titled "The Path to Zettascale", and according to the speaker, the ultimate goal of such a system is to solve fundamental problems from first principles. He emphasized that we need to move from Exascale, which is currently available, to Zettascale to access greater computing power and thus program more complex simulations. Some examples of real-world problems that could be addressed by Zettascale systems are earthquake modelling, weather modelling and forecasting, room-temperature superconductors, cold fusion, and many more scientific enigmas that remain unsolved. The talk provided an excellent overview of the semiconductor industry and explained the challenges Zettascale systems face.

To understand how to reach Zettascale systems, we must first understand the existing Exaflop systems. An Exaflop system can accomplish 10^18 floating point operations per second. Floating point operations per second (FLOPS) are a measure of computer performance and are defined in single or double precision (32 bits or 64 bits respectively). Currently we have powerful systems like the AMD-powered "Frontier" at Oak Ridge National Laboratory, which exceeds an exaflop at peak performance while consuming 20 to 30 MW per exaflop and occupying a space greater than two football fields! These high-performance computing systems are made of commodity products and thus have multiple GPUs, CPUs, and large banks of memory, all tightly interconnected. Intel, too, is building a similar system called Aurora. It is clear that if we wish to transition from an Exascale to a Zettascale system (10^21 floating point operations per second) in the coming decade, we will need to take a multi-pronged approach; a back-of-envelope sketch of the scaling involved follows the list below. To get to this stage, substantial progress will have to be made in the following areas:

  • Architecture and software
  • Process and packaging
  • Memory and I/O
  • Power and thermals
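
To get a feel for the scale of the challenge, here is a minimal back-of-envelope sketch in Python. The 20 MW-per-exaflop figure is taken from the description of today's systems above; the 50 MW power budget is an assumed target chosen for illustration, not a number from the talk.

```python
# Back-of-envelope scaling from exascale to zettascale. The
# 20 MW-per-exaflop figure comes from the description of today's
# systems above; the 50 MW budget is an assumed illustrative target.

EXAFLOPS = 1e18     # floating point operations per second
ZETTAFLOPS = 1e21

mw_per_exaflop_today = 20.0

# Naive scaling: build a zettascale machine at today's efficiency.
naive_power_mw = mw_per_exaflop_today * (ZETTAFLOPS / EXAFLOPS)
print(f"Zettascale at today's efficiency: {naive_power_mw:,.0f} MW")  # 20,000 MW

# To fit the assumed 50 MW data-center budget, efficiency must improve by:
power_budget_mw = 50.0
print(f"Required efficiency gain: {naive_power_mw / power_budget_mw:,.0f}x")  # 400x
```

Even this crude arithmetic shows that a thousand-fold jump in throughput cannot come from simply replicating today's hardware; it has to come from efficiency gains across all four areas above.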

Future architectures will need to leverage the best of CPUs (single-threaded configurations) and GPUs (vector/matrix configurations). On the process side, the industry will need to adopt nanowires or nanoribbons and eventually move to stacked 2D materials with backside power delivery. It will need to incorporate 3D packaging, especially chiplet configurations. Other changes will need to be introduced as well, such as low-power circuits and a new memory strategy. Adding more layers of cache or stacking more memory in the HBM is not the way forward, because the energy per operation increases drastically (the sketch below illustrates why). To solve this problem of latency and bandwidth, the speaker suggested that memory should be built monolithically on top of logic. For the I/O section, a combination of optical interconnects and physical wiring will need to be optimized. Finally, power delivery to the system must be completely revolutionized, because at present only 23% of data-center power is used for effective computing; the rest is used simply to support the system, for example cooling it down and maintaining regular building operations. Here, a combination of GaN-based circuitry and Si power MOSFETs must be employed. After this comprehensive presentation, Mr. Gomes was asked, "Where do you think the biggest challenge lies?" I was convinced it would be in 2D materials, or perhaps in decreasing the transistor's switching energy by a factor of 1000, or even in finding a way to fully integrate opto-electronics onto the wafer alongside Si devices. He explained that there exists today a pathway for architecture and software, process and packaging, I/O, and power delivery, but that there is no solution in sight for overcoming the Memory Wall! It surprised me to hear this, which clearly indicates how daunting the issue is despite the flurry of innovations in memory technology, such as spin-torque-based memories, resistive memories, and ferroelectric memories, as well as new schemes like compute-in-memory and near-memory computing.
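
To make the energy-per-operation argument concrete, here is a small illustrative sketch. The picojoule figures are rough, commonly cited order-of-magnitude estimates for modern process nodes; they are assumptions for illustration, not numbers from the talk.

```python
# Rough, order-of-magnitude energy costs per operation. These are
# illustrative assumptions based on commonly cited figures for modern
# process nodes, not measurements from the talk.

energy_pj = {
    "64-bit floating point op": 20,      # on-chip arithmetic
    "large on-chip cache access": 100,   # SRAM, still on the die
    "off-chip DRAM access": 2000,        # crossing the package boundary
}

flop_cost = energy_pj["64-bit floating point op"]
for operation, picojoules in energy_pj.items():
    ratio = picojoules / flop_cost
    print(f"{operation:28s}: {picojoules:5d} pJ ({ratio:4.0f}x a FLOP)")
```

On these rough numbers, fetching an operand from off-chip DRAM costs about a hundred times more energy than computing with it, which is exactly why piling on more of the same memory does not help and why building memory monolithically on top of logic is so attractive.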

The memory wall problem has existed for over two decades. It exists because the CPU's processing speed has outpaced that of memory: advances in processor speed are not fully realized, because the processor often idles while waiting for the slower memory to finish its operations before it can move on. For years, memory researchers have been trying to create a device with high bandwidth, low latency, and reduced power consumption, to make memory a better match for the processor. A simple roofline-style calculation, sketched below, shows how severe this mismatch is.
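
Here is a minimal roofline-style sketch. The peak-FLOPS and bandwidth values are assumed, GPU-class ballpark figures chosen for illustration, not the specs of any particular machine.

```python
# A minimal roofline-style check of whether a kernel is compute- or
# memory-bound. Peak FLOPS and bandwidth are assumed ballpark figures,
# not the specs of any particular machine.

peak_flops = 10e12   # 10 TFLOP/s of compute (assumed)
bandwidth = 1e12     # 1 TB/s of memory bandwidth (assumed)

# Machine balance: FLOPs the chip can perform per byte it can fetch.
machine_balance = peak_flops / bandwidth
print(f"Machine balance: {machine_balance:.0f} FLOPs per byte")

# A dot product does 2 FLOPs (one multiply, one add) per 16 bytes
# (two 8-byte doubles), so its arithmetic intensity is:
dot_intensity = 2 / 16
attainable = min(peak_flops, dot_intensity * bandwidth)
print(f"Dot product: {attainable / peak_flops:.1%} of peak compute")
```

A dot product streams 16 bytes for every 2 floating point operations, so on this assumed machine it can use barely 1% of the available compute; the other 99% of cycles are spent waiting on memory. That is the memory wall in one line of arithmetic.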

If memory is the bottleneck for Zettascale systems to be successful, will quantum computers face the same challenge? I started digging into different papers on this subject and found that some researchers are already working on the concept of quantum memory for quantum computing, but such quantum memories are still at a conceptual stage. Clearly, both Zettascale and quantum computing systems are still in their infancy, and both have many challenges ahead, especially related to fabrication, reliability of operation, and surpassing the memory wall. As these systems become more performant, more architecture and process simulations of the memory structure will surely be made. Either the industry will create a new type of memory architecture that is very different from the von Neumann configuration, or it will break the wall by creating a new material-based memory that is very fast, has low latency, and consumes extremely little power; or, even better, a combination of both. I wonder whether such powerful computing technologies will develop in silos, or whether they could be combined to form a unified powerhouse. Interestingly, one of the plenary talks at IEDM, presented by Maud Vinet from CEA-LETI, was about silicon qubits, where FDSOI technology is employed to integrate quantum computing into VLSI technologies. Hence, it is not too far-fetched to imagine that Zettascale and quantum computing systems could co-exist in the same unit, with a renovated memory architecture.

I am reminded of a quotation about the semiconductor industry by Andy D. Bryant (former chairman of Intel's board of directors): "The ingredient we begin with is sand. Everything else is value added by people." Indeed, human collaboration has solved many challenges in this industry and will surely continue to do so. I am hopeful that within this decade the memory wall problem will be solved, unleashing high computing power and providing solutions to the world's existential issues. In this industry, we are always expecting the unexpected.

#Quantumcomputing #Zettascalesystems #Memorywall
