Is there a way to accelerate deployments and increase user productivity for AI?

Deep Learning (DL) and Machine Learning (ML) are powerful technologies for creating new business value. DL and ML solutions are based on models that Data Scientists develop in a variety of open-source frameworks such as TensorFlow, Caffe, Torch, and others.

Beyond the models and the frameworks, Data Scientists need powerful platforms to train models quickly and to a very high accuracy. Training requires a lot of computing power, and the hardware architecture has a direct impact on model performance. To help Data Scientists set up an optimized, validated, and supported DL/ML environment on Power platforms, IBM offers the PowerAI framework. It takes advantage of the Power hardware architecture without requiring Data Scientists to dive into hardware details.

DL & ML are not new concepts but now the technologies are available to make it happen, whatever is for the training phase or for the deployment phase. DL & ML models use specific accelerators such as the Graphic Processor Unit (GPU) from NVidia or the Field-Programmable Gate Array (FPGA) from Xilinx.

IBM PowerAI Vision can be seen as "Point-and-Click" AI for images and video. You no longer need to be a Data Scientist to develop or train a model that detects specific objects (such as bikes, animals, etc.). IBM PowerAI Vision is a fully integrated solution that provides capabilities such as:

  • Classification of images
  • Object detection
  • Auto-labeling of images and videos
  • Prebuilt models for classification
  • RESTful interface to integrate into solutions (see the sketch after this list)
  • Import of custom models to train and host for inference
  • Data augmentation
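
As an illustration of the RESTful interface mentioned in the list above, here is a minimal Python sketch of how an application could submit an image to a deployed model for inference. The host name, endpoint path, and form-field name are assumptions used purely for illustration; the exact API of a given PowerAI Vision installation may differ.

```python
import requests

# Hypothetical endpoint of a deployed PowerAI Vision model.
# Host, path, and model ID are placeholders, not the product's actual URL scheme.
API_URL = "https://powerai-vision.example.com/api/dlapis/<deployed-model-id>"

def classify_image(image_path: str) -> dict:
    """POST an image file to the inference endpoint and return the parsed JSON result."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            files={"files": (image_path, f)},  # form-field name is an assumption
            verify=False,   # lab deployments often use self-signed certificates
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The response typically describes predicted classes or detected objects,
    # depending on the type of model that was deployed.
    print(classify_image("bike.jpg"))
```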

For inference applications, IBM PowerAI Vision lets you train and deploy models such as YOLO (You Only Look Once) on a Xilinx Alveo FPGA card. Real-time decoding and performance are key for inference, and an FPGA can deliver this within a very efficient energy budget. More details are available in this short video: https://www.youtube.com/watch?v=DpjKlIeluw0
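
To give an idea of what consuming such an inference service can look like on the application side, below is a small sketch that filters object-detection results by confidence before acting on them. The JSON layout (the "classified" key and the field names) is an assumption for illustration only; the actual payload depends on the PowerAI Vision release in use.

```python
from typing import Dict, List

def filter_detections(result: Dict, min_confidence: float = 0.8) -> List[Dict]:
    """Keep only detections whose confidence meets the threshold."""
    detections = result.get("classified", [])  # key name is an assumption
    return [d for d in detections if d.get("confidence", 0.0) >= min_confidence]

# Example with a mocked response shaped like a typical detection payload:
sample = {
    "classified": [
        {"label": "bike", "confidence": 0.93, "xmin": 12, "ymin": 40, "xmax": 180, "ymax": 220},
        {"label": "dog",  "confidence": 0.41, "xmin": 200, "ymin": 80, "xmax": 260, "ymax": 150},
    ]
}
print(filter_detections(sample))  # only the "bike" detection remains
```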

According to Steve Sibley, Vice President of IBM Cognitive Systems, “IBM sees inference as a key component of a complete, end-to-end AI platform, and POWER9's leadership I/O bandwidth for data movement makes it an ideal pairing with Xilinx’s new Alveo U200 accelerator card to bring inference to the enterprise.”


