A New Supercomputer Puts the United States in the Top Spot

Oak Ridge National Laboratory’s Frontier Supercomputer Now Fastest

If you enjoy programming, data science, and WFH topics, you can subscribe to the Datascience Learning Center here. I cannot continue to write without tips, patronage, and community support.

https://datasciencelearningcenter.substack.com/subscribe

AMD-Powered Frontier Supercomputer Breaks the Exascale Barrier

I like following the supercomputer side of the AiSupremacy race, though I don't consider it really about artificial intelligence; it's more like pure-play computing.

The top spot in this race changes hands quickly and often, but it's all in good sport. The progress being made is pretty incredible.

Check out the famous Top500 list here.

Top 500 June 2022

Quintillion calculations a second Barrier Broken

Today, Oak Ridge National Laboratory's Frontier supercomputer was crowned fastest on the planet in the semiannual Top500 list. Frontier more than doubled the speed of the last titleholder, Japan's Fugaku supercomputer, and is the first to officially clock speeds over a quintillion calculations a second, a milestone computing has pursued for 14 years.

True to the AiSupremacy stories I like to cover, this is, of course, America and China racing to the top of cutting-edge supercomputing.

  • How can we visualize how powerful this supercomputer is? There’s an easy way:

Imagine giving all 7.9 billion people on the planet a pencil and a list of simple arithmetic or multiplication problems. Now, ask everyone to solve one problem per second for four and a half years.

Frontier can do the same work in a second, and keep it up indefinitely. A thousand years' worth of arithmetic by everyone on Earth would take Frontier just a little under four minutes.
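That analogy checks out on the back of an envelope. Here's a quick sketch in Python using only the figures quoted above:

```python
# Sanity-check the "everyone on Earth doing arithmetic" analogy.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

population = 7.9e9   # people, each solving 1 problem per second
years_of_work = 4.5  # duration from the analogy

total_problems = population * years_of_work * SECONDS_PER_YEAR
print(f"{total_problems:.2e} problems")  # ~1.12e18, i.e. ~1.1 exaflops for 1 s

# A thousand years of the same global effort, run on Frontier at 1.1 exaflop/s:
frontier_flops = 1.1e18
thousand_years = population * 1000 * SECONDS_PER_YEAR
minutes = thousand_years / frontier_flops / 60
print(f"{minutes:.1f} minutes")  # a bit under 4 minutes
```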

You can read the original press release here. (I always like to read the source.)

The one-minute promo video for this is actually a bit amusing.

  • With 1.1 exaflops of performance, the system is the first to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second.
  • Frontier features a theoretical peak performance of 2 exaflops, or two quintillion calculations per second, making it ten times more powerful than ORNL’s Summit system. The system leverages ORNL’s extensive expertise in accelerated computing and will enable scientists to develop critically needed technologies for the country’s energy, economic and national security, helping researchers address problems of national importance that were impossible to solve just five years ago.
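The "ten times more powerful than Summit" claim is easy to verify; the ~200-petaflop peak figure for Summit is my assumption from public Top500 data, not stated in the release above:

```python
# Ratio of theoretical peak performance, Frontier vs. Summit.
frontier_peak = 2e18   # 2 exaflops, from the press release
summit_peak = 0.2e18   # ~200 petaflops peak (public Top500 figure, assumed)

ratio = frontier_peak / summit_peak
print(ratio)  # 10.0
```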

Here are positions 5-10 in the Top500 list.

The Age of Exascale

The number of floating-point operations, or simple mathematical problems, a computer solves per second is denoted FLOP/s or colloquially “flops.” Progress is tracked in multiples of a thousand: A thousand flops equals a kiloflop, a million flops equals a megaflop, and so on.
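The prefix ladder is just repeated powers of a thousand; a tiny helper makes it concrete (the function name is mine, for illustration):

```python
# Map a raw flop/s figure to its conventional SI-style prefix.
PREFIXES = ["", "kilo", "mega", "giga", "tera", "peta", "exa"]

def flops_name(flops: float) -> str:
    """Return a human-readable label, e.g. 1e12 -> '1.0 teraflops'."""
    magnitude = 0
    while flops >= 1000 and magnitude < len(PREFIXES) - 1:
        flops /= 1000
        magnitude += 1
    return f"{flops:.1f} {PREFIXES[magnitude]}flops"

print(flops_name(1e12))    # 1.0 teraflops  (ASCI Red, 1997)
print(flops_name(1.1e18))  # 1.1 exaflops   (Frontier, 2022)
```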

The ASCI Red supercomputer was the first to record speeds of a trillion flops, or a teraflop, in 1997. Suffice it to say we've come a long way since then.

They Still Fill Huge Rooms

It’s true today’s supercomputers are far faster than older machines, but they still take up whole rooms, with rows of cabinets bristling with wires and chips. Frontier, in particular, is a liquid-cooled system by HPE Cray running 8.73 million AMD processing cores. In addition to being the fastest in the world, it’s also the second most efficient—outdone only by a test system made up of one of its cabinets—with a rating of 52.23 gigaflops/watt.
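Those two figures together imply Frontier's rough power draw; a back-of-the-envelope sketch, assuming the 52.23 gigaflops/watt rating applies at the 1.1-exaflop benchmark speed:

```python
# Estimate power draw from benchmark speed and energy-efficiency rating.
benchmark_flops = 1.1e18  # sustained benchmark speed, flop/s
efficiency = 52.23e9      # Green500 rating, flop/s per watt

power_watts = benchmark_flops / efficiency
print(f"~{power_watts / 1e6:.0f} MW")  # roughly 21 MW
```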

So, What’s the Big Deal?

Most supercomputers are funded, built, and operated by government agencies. They’re used by scientists to model physical systems, like the climate or structure of the universe, but also by the military for nuclear weapons research.

Supercomputers are now tailor-made to run the latest algorithms in artificial intelligence too. Indeed, a few years ago, Top500 added a new lower-precision benchmark to measure supercomputing speed on AI applications. By that mark, Fugaku eclipsed an exaflop way back in 2020. The intersection of supercomputing and AI has yet to be fully explored.
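The intuition behind the lower-precision benchmark is that each number takes fewer bits, so far more of them can be computed and moved per second. NumPy shows the storage side of that trade-off (a minimal illustration, not the actual benchmark):

```python
import numpy as np

# The same 1000x1000 matrix at precisions typical of HPC vs. AI workloads.
a64 = np.ones((1000, 1000), dtype=np.float64)  # classic double precision
a16 = a64.astype(np.float16)                   # common in AI workloads

# fp16 packs 4x more values into the same memory traffic as fp64.
print(a64.nbytes // a16.nbytes)  # 4
```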

The next steps for Frontier include continued testing and validation of the system, which remains on track for final acceptance and early science access later in 2022 and open for full science at the beginning of 2023.

China holds two top-ten spots with its Sunway TaihuLight from the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and Tianhe-2A built by China's National University of Defense Technology (NUDT). Engadget notes, however, that China is rumored to already have no fewer than two exascale systems (by the Linpack benchmark), the new Sunway Oceanlite and Tianhe-3. China doesn't disclose everything.

Frontier's speed equates to 68 million instructions per second for each of the 86 billion neurons in the human brain, highlighting its sheer computational horsepower. It appears the system will compete for the AI leadership position with newly announced AI-focused supercomputers powered by Nvidia's Arm-based Grace CPU Superchips.

A Win for AMD as well

AMD's EPYC is now in 94 of the Top500 supercomputers in the world, marking a steady increase over the 73 systems listed in November 2021, and the 49 listed in June 2021. AMD also appears in more than half of the new systems on the list this year.

  • Frontier's test and development system, a single cabinet of the HPE Cray EX design, also claimed the number one spot on the Green500 list, which rates energy efficiency among commercially available supercomputing systems, at 62.68 gigaflops per watt.
  • In terms of power efficiency overall, AMD reigns supreme in the latest Green500 list: the company powers the four most efficient systems in the world, and also holds eight of the top ten and 17 of the top 20 spots.

Supercomputers Powering AI?

As very large machine learning algorithms have emerged in recent years, private companies have begun to build their own machines alongside governments. Microsoft and OpenAI made headlines in 2020 with a machine they claimed was fifth fastest in the world. In January, Meta said its upcoming RSC supercomputer would be the fastest at AI in the world at 5 exaflops. (It appears they'll now need a few more chips to match Frontier.)

Then there's Tesla's Dojo, a supercomputer designed entirely in-house. Dojo qualifies as a supercomputer by virtue of its complexity and speed, but it differs from other supercomputers in quite a few ways; strictly speaking, it isn't one yet, since it's not fully built out.

With the scalability of models and supercomputers, AI will be able to do new things for sure. Frontier and other private supercomputers will allow machine learning algorithms to push the limits further. Today's most advanced algorithms boast hundreds of billions of parameters, or internal connections, but upcoming algorithms will likely grow into the trillions.

More relevantly, rumors were flying last year that China had as many as two exascale supercomputers operating in secret. Researchers published some details on the machines in papers late last year, but they have yet to be officially benchmarked by Top500. It’s widely suspected China is ahead in this race as of 2021 or 2022.

By solving calculations up to 50 times faster than today's top supercomputers, exceeding a quintillion (10^18) calculations per second, Frontier will enable researchers to deliver breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security.

Exciting times in AI, supercomputing and data science.

Thanks for reading and supporting the channel!


Jagjit Singh Teji

Clinical Assistant Professor Chairman of Pediatrics at Northwestern Hospital at Huntley, Illinois. Master Clinician at Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Med

2y

Just published NeoAI 1.0: Machine learning-based paradigm for prediction of neonatal and infant risk of death.

Joseph Pareti

Board Advisor @ BioPharmaTrend.com | AI and HPC consulting

2y

Quite the opposite: this work (https://arxiv.org/pdf/1810.01993.pdf) is about post-processing weather data on Summit, and it surpassed reduced-precision exaflop performance as far back as 3.5 years ago.

Jason Stefanelli

Director, William Blair Investment Management

2y

No more A.I. winters….

A thousand of these computers and we can simulate the human brain :-) Doubling the power every year, we will build zettascale computers in 10 years. And then miracles will start to happen :-)
