Deep Learning, Broad Solutions

Does ChatGPT have your business talking big ideas? Here’s why it’s faster (and less expensive) than you think to implement generative AI.


The world of AI spins fast.

Every day brings another attention-grabbing news headline, technological breakthrough, or brilliant idea for application. Things aren’t slowing down: In fact, the global AI market is expected to see a compound annual growth rate of 37.3% through 2030.

Every industry, meanwhile, is rushing to take advantage of what generative AI can offer, including the revolutionary developments of large language models (LLMs) like ChatGPT. The big challenge facing businesses right now, though, is how to implement generative artificial intelligence (GAI) and LLMs smartly, efficiently, cost-effectively, and of course, quickly. It’s hard to keep up.

At Intel’s Habana Labs, we’re keeping pace with the rapid speed of AI advances. The industry benchmark data is in, and when it comes to training LLMs, the Habana Gaudi2 processor, which was purpose-built for the computing demands of GAI, outperforms the Nvidia A100, the GPU with the largest installed base for AI workloads today. We’re delivering the powerful solutions customers need to embrace this exciting, watershed moment in the evolution of AI, and to help them navigate the opportunities, challenges, and demands of the future, too.

Here’s how we’re breaking down three of the most daunting obstacles facing businesses in the AI arena.

Time-to-Train

GAI models offer businesses unprecedented brainpower, but it takes time (and expertise) to fill those unmapped digital minds with the troves of relevant data required for, say, an LLM to hold an intelligent conversation with a customer. What’s more, models may need to be retrained regularly (daily, even) to keep their information accurate.

Taking on this challenge can be intimidating and expensive because of the training time required. However, recent advances in throughput are making training faster and more cost-efficient: According to the latest industry-standard MLPerf training metrics, for instance, the Gaudi2 consistently demonstrated faster performance than the A100 across popular computer vision and language models. On top of that, the Gaudi2 is known to provide an extraordinary price-performance advantage relative to the Nvidia A100. All this forward momentum will continue with the development of the Gaudi3, slated for availability in 2024, as well as Intel’s upcoming GPU, codenamed Falcon Shores, which will bring Gaudi performance and efficiency to the GPU form factor beginning in 2025. Already, customers can buy Gaudi-based training time by the hour on the AWS cloud, which provides superior price performance relative to comparable EC2 instances. Customers can also access Gaudi2 instances on the Intel Developer Cloud, or build their own deep learning systems on-premises with Gaudi2 servers from Supermicro and IEI.
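As a rough illustration of the AWS path, here is a minimal sketch of launching a Gaudi-based EC2 DL1 instance with the boto3 SDK. The AMI ID and key pair name are placeholders for this example, not official values; region, image, and networking would depend on your own account setup:

```python
import boto3

# Minimal sketch: launch a Gaudi-accelerated EC2 DL1 instance with boto3.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute a Habana deep learning AMI
    InstanceType="dl1.24xlarge",      # the Gaudi-based EC2 instance type
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched Gaudi training instance: {instance_id}")
```

Because the instance is billed by the hour, teams can spin one up for a training run and terminate it when the job finishes, paying only for the training time they actually use.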

“Faster time-to-train is essential,” says Susan Lansing, product marketing for Gaudi at Intel. “Faster Gaudi training of mission-critical customer models can mean more training, more frequently, resulting in greater model accuracy and increased end-application reliability.”

Power

Time spent training AI models goes hand in hand with something else: power consumption. Simply put, LLM training requires a lot of it. Preparing GPT-3 for launch, for example, required over 1,200 megawatt-hours of energy for training, resulting in more than 500 tons of CO2. The tech industry is at an inflection point when it comes to managing carbon emissions and has an obligation to consume power responsibly. For these reasons, Intel and the Gaudi team are focused on shrinking large language models to a less compute-intensive, more manageable scale using fine-tuning software advancements like LoRA. In addition, the Gaudi2 brings competitive throughput-per-watt versus GPUs for models like ResNet and BLOOMZ. Faster training and higher throughput-per-watt can lower overall operational costs in the data center.
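To make the LoRA approach concrete, here is a minimal sketch using the Hugging Face peft library; the model name, target modules, and hyperparameters are illustrative assumptions rather than Habana-published settings:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Minimal sketch: wrap a causal LM with LoRA adapters so only a small
# fraction of parameters is updated during fine-tuning.
# Model name and hyperparameters are illustrative assumptions.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

lora_config = LoraConfig(
    r=8,                                 # low-rank adapter dimension
    lora_alpha=16,                       # scaling factor for adapter updates
    target_modules=["query_key_value"],  # attention projections in BLOOM-family models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable share of weights
```

Because only the small adapter matrices are updated while the base model stays frozen, fine-tuning of this kind consumes a fraction of the compute, memory, and energy of full retraining.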

The technology may be complex, but the formula is simple: Less time plus less power equals significantly less spend.

Software

Hardware sets the stage, but software runs the show, and businesses invest massive amounts of time and money building code for specific software models. As a result, changing toolkits can seem overwhelming. But Intel’s SynapseAI™ software suite, optimized for Gaudi hardware, makes it exceptionally easy to migrate from Nvidia’s CUDA: SynapseAI integrates frameworks like PyTorch, the most widely used framework for AI and machine learning (ML). All it takes is two lines of new code to get started building new models or migrating existing code on the Gaudi2 accelerator.
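As a rough sketch of what that migration looks like in practice, based on Habana’s public PyTorch integration, the heart of it is importing the Habana bridge and targeting the hpu device; the toy model and training step below are illustrative, and exact calls may vary by SynapseAI release:

```python
import torch
import habana_frameworks.torch.core as htcore  # new line 1: the Habana PyTorch bridge

# Minimal sketch of pointing an existing PyTorch training step at Gaudi.
model = torch.nn.Linear(512, 10).to("hpu")     # new line 2: target the Gaudi (HPU) device
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Toy batch for illustration; real code would use its existing data pipeline.
inputs = torch.randn(32, 512).to("hpu")
targets = torch.randint(0, 10, (32,)).to("hpu")

loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()
htcore.mark_step()  # flushes the accumulated graph in Habana's lazy execution mode
```

The rest of the training loop, loss functions, and optimizers stay standard PyTorch, which is what keeps the switching cost low.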

From supportive software to faster training speeds to improved power efficiency, we’re building the dynamic hardware and software solutions that make it easier for businesses to harness the vast possibilities of GAI and LLMs.

Head here to learn even more.


Ready to Accelerate Your AI Goals? Join Us.

Connect, network, develop, and discover alongside the brightest minds in AI technologies and services at Intel Innovation, happening Sept. 19–20 at the San Jose McEnery Convention Center. Join us and register today.

We are committed to overcoming key challenges in AI implementation and offering efficient solutions!

Gerhard Lesch

Director - Business Development, Healthcare & Life Science at Intel Deutschland GmbH

1y

Intel Innovation is an excellent place for consuming the latest technology innovations. AI is certainly a big topic there, too.
