The Death of Moore’s Law is the end of the beginning

The era of CPU dominance began to erode around 2010

Over the last 20 years, the skills of programmers have moved inversely to Moore's Law: the ability of software engineers to write complex programs is steadily declining, and they are becoming more specialized, more reliant on abstractions, and increasingly ignorant of general computational knowledge. In the field of AI, engineers rely mostly on deep learning, which attacks problems with more data and more computing power. As we approach the end of Moore's Law, the law of adding ever smaller transistors, things are coming full circle: we are once again approaching a prevailing need for highly effective optimization strategies and highly skilled software engineers.
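
To make that concrete, here is a minimal sketch of the kind of optimization skill in question, assuming Python with NumPy installed; the array size is illustrative, not a rigorous benchmark. The same dot product is written first as a naive interpreted loop and then pushed into vectorized, compiled code.

```python
import time

import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Naive approach: an interpreted Python loop, one element at a time.
start = time.perf_counter()
total = 0.0
for i in range(n):
    total += a[i] * b[i]
naive_s = time.perf_counter() - start

# Optimized approach: push the loop into compiled, SIMD-friendly code.
start = time.perf_counter()
total_vec = np.dot(a, b)
vec_s = time.perf_counter() - start

print(f"naive: {naive_s:.3f}s  vectorized: {vec_s:.5f}s")
```

On typical hardware the vectorized version is orders of magnitude faster; the gap comes entirely from how the code is written, not from new silicon.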


This is the end of the beginning

Efforts are now focused on reconfigurable computing, a flexible platform that can adapt to new smart devices at both the software and the hardware level: FPGA fabric with distributed memory and hardware-programmable DSP blocks, a multicore SoC, and software-programmable yet hardware-adaptable compute engines, all connected through a network on chip. When you think about nature, the most adaptable species is always the one with the greatest resilience, the one that thrives in the long run. A species can be optimized for a particular environment and set of conditions, but once those conditions change, it is no longer viable. From that point of view, adaptive platforms, onto which you can impose specific architectures and later change them without changing the physical infrastructure, make sense.
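
As a toy illustration of the reconfigurable idea, consider that an FPGA look-up table (LUT) is essentially stored truth-table bits: loading new bits changes the circuit's behavior without changing the silicon. The Python class below is a hypothetical simulation for illustration, not any vendor's API.

```python
class ConfigurableLUT:
    """A simulated 2-input LUT: 4 configuration bits define any 2-input gate."""

    def __init__(self, config_bits):
        # Output values for inputs (0,0), (0,1), (1,0), (1,1).
        assert len(config_bits) == 4
        self.bits = list(config_bits)

    def __call__(self, a, b):
        return self.bits[(a << 1) | b]

# "Program the fabric" as an AND gate...
lut = ConfigurableLUT([0, 0, 0, 1])
print([lut(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]

# ...then reconfigure the same element as XOR, with no hardware change.
lut.bits = [0, 1, 1, 0]
print([lut(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```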

The end of putting more transistors on one chip does not mean the end of innovation in computers or mobile devices.

But it does mean that we are at the end of guaranteed annual growth in computing power, and with it, the end of the type of innovation we have become accustomed to over the last 60 years. Instead of just faster versions of what we are used to seeing, device designers now have to get more creative with the 10 billion transistors they have to work with.

It is worth remembering that the human brain has had the same 86 billion neurons all along, and yet we have learned to do far more with that fixed computing power. The same will apply to semiconductors: we will come up with radically new ways to use those 10 billion transistors.

For example, new chip architectures are coming (multi-core processors, massively parallel designs, special-purpose silicon for AI/machine learning, and graphics processors like Nvidia's), along with new ways to package chips and connect memory, and even new types of memory. Some designs will insist on extremely low power consumption, and others on a very low price.
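
A minimal sketch of the multi-core direction, assuming standard Python: the same CPU-bound task is mapped across worker processes, so throughput scales with core count rather than clock speed. The function and workload sizes are made up for illustration.

```python
import math
from multiprocessing import Pool

def heavy(limit):
    # Stand-in for a CPU-bound kernel.
    return sum(math.sqrt(i) for i in range(limit))

if __name__ == "__main__":
    work = [2_000_000] * 8

    # Serial baseline: one core does everything.
    serial = [heavy(w) for w in work]

    # Parallel version: Pool defaults to one worker per available core.
    with Pool() as pool:
        parallel = pool.map(heavy, work)

    assert serial == parallel
```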


This is a whole new game

So what does this mean for consumers? First, high-performance applications that need very fast computing will continue to move off your device and into the cloud (where data centers are measured in the size of football fields), further enabled by the new 5G networks. Second, while the computing devices we buy won't run today's software much faster, the new features (face recognition, augmented reality, autonomous navigation, and applications we haven't even thought of yet) will come from new software combined with new technologies such as new displays and sensors.

The world of computing is moving into new and unexplored territory. For desktop and mobile devices, the "must have" upgrade will no longer be about speed, but about a new feature or application.

For chip makers, for the first time in half a century, all bets are off. There will be a new set of winners and losers in this transition. It will be exciting to watch and see what emerges from the fog.

"The end of putting more transistors on one chip does not mean the end of innovation in computers or mobile devices." Great formulation. This is how I feel, but I didn't have a good way to express this.


Turns out you can shock people if you say 50K LOC is a small program. Many people are unaware that industry devs often deal with codebases having > 1M LOC.

Eliran Gerbi

Master of Science in Machine Learning & Data Science ★ Firmware Manager at West Pharmaceuticals

4y

Maybe the answer is cognitive computer :) https://en.m.wikipedia.org/wiki/Cognitive_computer

