10 Years of NVIDIA
Some people collect stamps, coins and trading cards...
As a PC enthusiast, I have been holding on to high-end hardware. This spread represents 9 NVIDIA enthusiast-class graphics cards and over 10 years of progress since #NVIDIA created CUDA. Starting with an 8800 GTX, launched in 2006 rocking 768MB of VRAM and 128 cores (left), and ending with a GTX 1080, launched in 2016 with 8GB of VRAM and 2560 cores (right), this table holds 10 years of NVIDIA history.
The impact this hardware development has had on the IT industry has been remarkable. This hardware, and CUDA in particular, has brought machine learning and AI into mainstream business. Many of the concepts and algorithms behind modern machine learning were actually developed in the 1960s; it took until the 2000s for the hardware and software to catch up and make them practical, thanks to the acceleration these GPUs enabled.
As this hardware gets more powerful, I expect software and hardware improvements to let users share the power of a monster GPU like the NVIDIA RTX 3090 (with its 10,496 cores and 24GB of RAM) among multiple users and workloads on a single system.
These shared computing concepts are not new. CPUs have been able to do this since the mid 2000s, and it enabled transformational technologies in business and data centers (virtual machines and containers). What's stopping this from happening with GPUs? Licensing, software and hardware.
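For anyone curious what workload sharing looks like at the CUDA level today, here is a minimal sketch (assuming any CUDA-capable GPU and the CUDA toolkit installed; the kernel name busyKernel is just for illustration). Two CUDA streams stand in for two independent "tenants" of the same card. It is a long way from handing a whole display output to a virtual machine, but it shows the kind of concurrency the hardware already supports.

// Minimal sketch, not a full multi-user solution: two independent workloads
// overlapping on one GPU via CUDA streams. Assumes a CUDA-capable device.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void busyKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = data[i];
        for (int k = 0; k < 1000; ++k) v = v * 1.000001f + 0.000001f;
        data[i] = v;
    }
}

int main() {
    // Report the specs the post talks about: SM count and VRAM.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("GPU: %s, %d SMs, %.1f GB VRAM\n",
           prop.name, prop.multiProcessorCount,
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));

    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Two streams standing in for two independent workloads sharing the card.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    busyKernel<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    busyKernel<<<(n + 255) / 256, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}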
NVIDIA, are you listening? I and many other enthusiasts would love to have the ability to assign a virtual machine to a specific display output on these powerful consumer cards, so we could effectively divide up a GPU like a 3090 to run all of our GPU acceleration needs.