AI Is Eating Software
We’re delivering AI for every computing platform, every framework and every human endeavor.
The remarkable success of our GPU Technology Conference this month demonstrated to anyone still in doubt the extraordinary momentum of the AI revolution.
Throughout the four-day event here in Silicon Valley, attendees from the world’s leading companies in media and entertainment, manufacturing, healthcare and transportation shared stories of their breakthroughs made possible by GPU computing.
The numbers tell a powerful story. With more than 7,000 attendees, 150 exhibitors and 600 technical sessions, our eighth annual GPU Technology Conference was our largest yet. The world’s top 15 tech companies were there, as were the world’s top 10 automakers and more than 100 startups focused on AI and VR.
Behind these numbers is a confluence of powerful trends. AI is being driven forward by leaps in computing power that defy the slowdown in Moore’s law. AI developers are racing to build new frameworks to tackle some of the greatest challenges of our time. They want to run their AI software on everything from powerful cloud services to devices at the edge of the cloud.
The Era of AI Computing – The Era of GPU Computing
Unveiling Volta, the world’s most advanced AI computing architecture.
At GTC, we unveiled Volta, our greatest generational leap since the invention of CUDA. It incorporates 21 billion transistors, is built on a 12nm NVIDIA-optimized TSMC process and includes the fastest HBM memories from Samsung. Volta features a new numeric format and a CUDA instruction that performs 4x4 matrix operations – an elemental deep learning operation – at extremely high speed.
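Concretely, developers reach that matrix operation through CUDA’s warp-level WMMA API (available starting with CUDA 9), which exposes the 4x4 Tensor Core instruction as 16x16x16 matrix tiles. The kernel below is a minimal sketch rather than production code; the kernel name and the fixed single-tile layout are illustrative.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes D = A x B + C for a single 16x16x16 tile, with FP16
// inputs and FP32 accumulation. Launch with one warp: mma_tile<<<1, 32>>>(...).
__global__ void mma_tile(const half* A, const half* B, float* C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);                  // start from a zero accumulator
    wmma::load_matrix_sync(a_frag, A, 16);                // load the 16x16 A tile
    wmma::load_matrix_sync(b_frag, B, 16);                // load the 16x16 B tile
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);   // Tensor Core multiply-accumulate
    wmma::store_matrix_sync(C, acc_frag, 16, wmma::mem_row_major);
}
```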
Each Volta GPU delivers 120 teraflops. And our DGX-1 AI supercomputer interconnects eight Tesla V100 GPUs to generate nearly one petaflop of deep learning performance.
Google’s TPU
Also last week, at its I/O conference, Google announced its TPU2 chip, which delivers 45 teraflops of performance.
It’s great to see the two leading teams in AI computing racing ahead even as we collaborate deeply across the board – tuning TensorFlow performance and accelerating the Google cloud with NVIDIA CUDA GPUs. AI is the greatest technology force in human history. Efforts to democratize AI and enable its rapid adoption are encouraging to see.
Powering Through the End of Moore’s Law
As Moore’s law slows down, GPU computing performance, powered by improvements in everything from silicon to software, surges.
The AI revolution has arrived despite the fact that Moore’s law – the combined effect of Dennard scaling and advances in CPU architecture – began slowing nearly a decade ago. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics.
CPU architects can now harvest only modest gains in ILP – instruction-level parallelism – and only at the cost of large increases in circuitry and energy. So, in the post-Moore’s law era, a large increase in CPU transistors and energy yields only a small increase in application performance. Performance has recently been increasing by only 10 percent a year, versus 50 percent a year in the past.
The accelerated computing approach we pioneered targets specific domains of algorithms; adds a specialized processor to offload the CPU; and engages developers in each industry to accelerate their application by optimizing for our architecture. We work across the entire stack of algorithms, solvers and applications to eliminate all bottlenecks and achieve the speed of light.
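Here is what that offload model looks like in its simplest form – a minimal CUDA sketch with illustrative names, not production code. The CPU remains the general-purpose host while a data-parallel loop is handed to the GPU.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread handles one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // offload the loop to the GPU
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                 // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```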
That’s why Volta unleashes incredible speedups for AI workloads. It provides a 5x improvement in peak teraflops over Pascal, the current-generation NVIDIA GPU architecture, and a 15x improvement over the Maxwell architecture launched just two years ago – well beyond what Moore’s law would have predicted.
Accelerate Every Approach to AI
A sprawling ecosystem has grown up around the AI revolution.
Such leaps in performance have drawn innovators from every industry: the number of startups building GPU-driven AI services has grown more than 4x over the past year, to 1,300.
No one wants to miss the next breakthrough. Software is eating the world, as Marc Andreessen said, but AI is eating software.
The number of software developers following the leading AI frameworks on the GitHub open-source software repository has grown to more than 75,000, from fewer than 5,000, over the past two years.
The latest frameworks can harness the performance of Volta to deliver dramatically faster training times and higher multi-node training performance.
Deep learning is a strategic imperative for every major tech company. It increasingly permeates every aspect of work, from infrastructure to tools to how products are made. We partner with every framework maker to wring out the last drop of performance. By optimizing each framework for our GPUs, we can improve engineer productivity by hours and days for each of the hundreds of iterations needed to train a model. Every framework – Caffe2, Chainer, Microsoft Cognitive Toolkit, MXNet, PyTorch and TensorFlow – will be meticulously optimized for Volta.
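To make that tuning concrete, here is one common pattern, sketched under the assumption of CUDA 9 or later: asking cuBLAS to route a mixed-precision matrix multiply through Volta’s Tensor Cores. The function name and dimensions are illustrative; each framework wires calls like this into its own GPU backend.

```cuda
#include <cublas_v2.h>
#include <cuda_fp16.h>

// Illustrative helper: C (FP32) = A (FP16) x B (FP16), accumulated in FP32,
// with cuBLAS permitted to use Tensor Core kernels where available.
void gemm_fp16_tensor_op(cublasHandle_t handle,
                         int m, int n, int k,
                         const half* A, const half* B, float* C) {
    const float alpha = 1.0f, beta = 0.0f;

    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);   // opt in to Tensor Core math

    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                 m, n, k,
                 &alpha,
                 A, CUDA_R_16F, m,
                 B, CUDA_R_16F, k,
                 &beta,
                 C, CUDA_R_32F, m,
                 CUDA_R_32F,                    // compute type: FP32 accumulation
                 CUBLAS_GEMM_DEFAULT_TENSOR_OP);
}
```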
The NVIDIA GPU Cloud platform gives AI developers access to our comprehensive deep learning software stack wherever they want it — on PCs, in the data center or via the cloud.
We want to create an environment that lets developers do their work anywhere, and with any framework. For companies that want to keep their data in-house, we introduced powerful new workstations and servers at GTC.
Perhaps the most vibrant environment is the $247 billion market for public cloud services. Over the past six months, Alibaba, Amazon, Baidu, Facebook, Google, IBM, Microsoft and Tencent have added NVIDIA GPUs to their data centers.
To help innovators move seamlessly to cloud services such as these, at GTC we launched the NVIDIA GPU Cloud platform, which contains a registry of pre-configured and optimized stacks of every framework. Each layer of software and all of the combinations have been tuned, tested and packaged up into an NVDocker container. We will continuously enhance and maintain it. We fix every bug that comes up. It all just works.
A Cambrian Explosion of Autonomous Machines
Deep learning’s ability to detect features from raw data has created the conditions for a Cambrian explosion of autonomous machines – IoT with AI. There will be billions, perhaps trillions, of devices powered by AI.
At GTC, we announced that one of the 10 largest companies in the world, and one of the most admired, Toyota, has selected NVIDIA for its autonomous cars.
We also announced Isaac, a virtual robot that helps make robots. Today’s robots are hand programmed and do exactly, and only, what they were programmed to do. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.
Isaac, introduced at GTC, brings reinforcement learning and imitation learning to robotics.
Once trained, the brain of the robot would be downloaded into Jetson, our AI supercomputer in a module. The robot would stand up and adapt to any differences between the virtual and the real world. A new robot is born. For GTC, Isaac learned how to play hockey and golf.
Finally, we’re open-sourcing the DLA, our Deep Learning Accelerator – our version of a dedicated inferencing TPU – designed into our Xavier superchip for AI cars. We want to see the fastest possible adoption of AI everywhere. No one else needs to invest in building an inferencing TPU. We’re offering one for free – designed by some of the best chip designers in the world.
Enabling the Einsteins and Da Vincis of Our Era
These are just the latest examples of how NVIDIA GPU computing has become the essential tool of the da Vincis and Einsteins of our time. For them, we’ve built the equivalent of a time machine. Building on the insatiable technology demand of 3D graphics and market scale of gaming, NVIDIA has evolved the GPU into the computer brain that has opened a floodgate of innovation at the exciting intersection of computer graphics, computer vision and artificial intelligence.