Exascale: The Future of Supercomputing

Supercomputer is a term that’s been around for a long time – probably for as long as we’ve had computers. It refers to machines that are currently used only for very high-end applications and are a generation ahead of what everyone else is using. These are expensive, generally custom-built machines that crunch numbers for cutting-edge scientific projects or generate mind-blowing images for the most sophisticated Hollywood blockbusters.

Today, supercomputing – a field known as high-performance computing (HPC) – is approaching “exascale."

The term refers to the speed at which a machine can carry out calculations – specifically, the number of floating-point operations per second (FLOPS) it is capable of. Today’s most powerful systems are approaching the point where this will start to be measured in exaFLOPS – a billion billion operations per second, or 10 to the power of 18 (which is why we celebrate Exascale Day on October 18th). To put this into perspective: what an exascale supercomputer can achieve in just one second would take every single person on Earth, calculating 24 hours a day, over four years!
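That comparison is easy to sanity-check with a little back-of-the-envelope arithmetic. Here is a minimal sketch, assuming a world population of roughly 7.8 billion (the approximate figure at the time of writing) and one calculation per person per second:

```python
EXA = 10**18  # one exaFLOP: a billion billion operations per second
world_population = 7.8e9  # assumption: roughly the 2020 world population

# Seconds needed for everyone on Earth, each performing one calculation
# per second around the clock, to match one second of exascale work
seconds = EXA / world_population
years = seconds / (365 * 24 * 3600)

print(round(years, 1))  # roughly 4.1 years
```

So the “over four years” figure holds up: one exascale-second equals about four years of round-the-clock arithmetic by all of humanity.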

Rather than just being a ridiculously big number, this new level of computing power will potentially unlock a huge number of possibilities for what we can do with computers. One expert I spoke to, Addison Snell, CEO of Intersect360 Research, told me that the exascale era could bring developments as significant as the first accurately simulated human brain, down to neurons and synapses.

In fact, it’s possible that many of the most significant breakthroughs we are hoping to accomplish as a species in the near future – from personalized cancer-fighting medicines to visiting Mars – will be tackled with exascale computers, assisted by software technologies such as artificial intelligence (AI).

No single computer has yet achieved exascale performance. However, just this year, the Folding@home project announced that it had done so with its distributed network of thousands of computers in volunteers’ homes, connected together and pooling their power. Less than a month later, it smashed through the two-exaFLOP barrier as hundreds of thousands more volunteers joined the project, hoping to assist research into Covid-19.

But orders have been made for machines capable of exascale performance at US high-performance computer laboratories. From there, history tells us it will only be a matter of time before this level of computing power begins to become more widely available, trickling down to private-sector research institutions, industry, and eventually to devices we use at our desktops or hold in our hands.

Exascale computing offers exciting possibilities but presents formidable challenges. Foremost among them is ensuring that the surrounding infrastructure is adequate to support it. You wouldn’t fire a super-powered rocket train into the London Underground and simply expect everyone’s journeys to become much faster. Managing the emergence of exascale computing into the world needs careful planning if it is to be used to its real potential.

As Snell tells me, "20 years ago we were really early in the age of what you could call tera-scale computing … a million million calculations per second.

“Right at the turn of the century when the Human Genome Project was first mapping the human genome, they used one of those early tera-scale supercomputers at Oak Ridge National Laboratory … as part of the verification of that calculation. That was what tera-scale computing brought to us."

Exascale offers the possibility of equally significant breakthroughs due to the sheer increase in number-crunching power – a million times that of the computers that cracked the genome.

With power at this scale, it isn't surprising that governments treat exascale computing as a matter of national importance. Achieving it has been described as a "space race," with politicians and decision-makers keenly aware of the advantage and prestige that is up for grabs.

Dan Ernst, Distinguished Technologist, High-Performance Computing & Artificial Intelligence at HPE tells me, “There’s very much a demand for your country to have these capabilities … these abilities to make these scientific discoveries first and be in the leading edge.”

However, it's also true that there's international cooperation. Components are made by companies, including ARM and Intel, that trade internationally. "It's also a certain amount of 'co-op-etition,'" Ernst tells me. "In many ways, open science, scientific progress, at the highest level, is really international at this point. These capabilities push the world forward, but they also provide advantages for your country."

And Snell adds, "If you have a major advancement in clean energy or climate change modeling, that doesn't benefit only one country. That's something that becomes a worldwide innovation very quickly."

Some industries and fields clearly have more use for computing power at this scale than others. Once the technology makes its way out of the hands of governments and their national laboratories, it's likely that the energy industry, finance, and pharmaceuticals will be early adopters, as well as automobile manufacturing, where it can create highly realistic simulations for safety modeling. In fact, anything that benefits from "digital twin" engineering technology will be modeled more realistically and usefully.

As far as other clearly defined aspects of exascale supercomputing go, the elephant in the room is obviously AI. Today's machine and deep learning algorithms require huge amounts of computing power, and as AI becomes more sophisticated, that need will only grow.

Working hand in hand – exascale processors providing the grunt work and AI providing the brains – projects previously confined to science fiction (or horror) movies become a possibility, such as the previously mentioned simulated human brain. Crucially, the frameworks and cloud services that have been put in place to deliver AI and other advanced computing paradigms “as-a-service” could speed up the availability of exascale computing among wider society.

As Ernst puts it, "We're going to see high-performance computing availability … it's already going extremely high compared to where it has been in the past. One of the biggest reasons for that is … the technologies of software and the mathematical models that have been built up in frameworks … so that everyday users of computers can make use of AI without working too hard.

“That hasn’t been the case in the past – you had to work hard to get performance. These models are enabling massive amounts of computing to be put to work in all of these AI fields … and really will show up all the way down the stack from high-end computing … down into edge and into mobile computing as well.”

AI, high-speed networks such as 5G, and high-powered computing are all technology trends that will be fundamental to many advances we will make in science and society. Innovators with the ability to put them together to develop new applications and breakthroughs will find themselves at the front of the technology race of the next decade.

If you are interested in learning more about Exascale and the future of supercomputing, then tune in to this podcast episode on the topic: 

https://element-podcast.libsyn.com/the-future-of-supercomputing



Thank you for reading my post. Here at LinkedIn and at Forbes I regularly write about management and technology trends. I have also written a new book about AI, click here for more information. To read my future posts simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare, or YouTube.

About Bernard Marr

Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things.

LinkedIn has ranked Bernard as one of the world’s top 5 business influencers. He is a frequent contributor to the World Economic Forum and writes a regular column for Forbes. Every day Bernard actively engages his 1.5 million social media followers and shares content that reaches millions of readers.
