Edge AI and Vision Insights Newsletter

A NEWSLETTER FROM THE EDGE AI AND VISION ALLIANCE Early July 2024 | VOL. 14, NO. 14

LETTER FROM THE EDITOR

Dear Colleague,

On Thursday, September 26, 2024 at 9:00 am PT, SKY ENGINE AI will deliver the free webinar “Leveraging Synthetic Data for Real-time Visual Human Behavior Analysis Using the SKY ENGINE AI Platform” in partnership with the Edge AI and Vision Alliance. Human-related applications of computer vision and AI are growing rapidly in both number and diversity. From medicine to retail to manufacturing and security, AI-powered solutions will soon be ubiquitous in our lives. In the midst of this flux, one constant remains: the need for high-quality deep learning model training data. Face and body analysis-related use cases such as face recognition and segmentation, gaze estimation and facial expression analysis make this requirement even more critical.

Training with manually-labeled real-world images has historically been the most common approach. However, this approach has multiple drawbacks, including:

  • Legal and ethical concerns
  • Labeling accuracy issues
  • Low-diversity datasets
  • Context bias
  • A lack of 3D information in ground truth

These obstacles are often difficult or impossible to eliminate. SKY ENGINE AI's approach leverages synthetic data to train computer vision models, providing an alternative methodology that bypasses these issues, and that offers support for different modalities including visible light, near infrared, radar and lidar.

In this webinar, Jakub Pietrzak, Chief Technology Officer for SKY ENGINE AI, will explain how 3D generative AI combines with physically-based rendering to train deep learning models for the analysis of human faces, bodies and behavior. He will also demonstrate example datasets created in the SKY ENGINE AI Synthetic Data Cloud, along with inference results from models trained purely on synthetic data. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.

Brian Dipert

Editor-In-Chief, Edge AI and Vision Alliance


PROCESSOR ADVANCEMENTS

Addressing Tomorrow’s Sensor Fusion and Processing Needs with New Processors

From ADAS to autonomous vehicles to smartphones, the number and variety of sensors used in edge devices is increasing: radar, LiDAR, time-of-flight sensors and multiple cameras are more and more common. And, as sensors have improved, the data rates associated with them have increased. Traditionally, a dedicated processor has been utilized to process data from each sensor independently. Today, however, there is a growing need for a single, unified processor capable of processing multimodal sensor data utilizing both classical and AI algorithms and implementing sensor fusion for robust perception. In this talk, Amol Borkar, Product Marketing Director at Cadence, introduces the new Vision 341 DSP and Vision 331 DSP. These cores provide a versatile single-DSP solution for various workloads, including image sensing, radar, LiDAR and AI tasks. Borkar explores the architectures of these new processors, highlights their performance and efficiency and outlines the associated developer tools and software building blocks.

Efficiency Unleashed: A Next-gen Applications Processor for Embedded Vision

Machine vision is the most obvious way to help humans live better, enabling hundreds of applications spanning security, monitoring, inspection and more. Modern edge processors need private on-device and scalable hybrid machine learning capabilities to offer enough longevity to stay relevant in industrial and commercial IoT markets. In this presentation, James Prior, Senior Product Manager at NXP Semiconductors, presents the upcoming i.MX 95 family of applications processors. The i.MX 95 features a new neural processing unit from NXP—the eIQ Neutron NPU. Designed to scale from today’s conventional neural networks to tomorrow’s transformer-based models, the eIQ Neutron NPU scalable architecture delivers edge AI capabilities at high efficiency with award-winning tools, combined with chip-level security and privacy features. The i.MX 95 applications processor family features powerful processing and vision capabilities combined with safety, security and expandable high-speed interfaces.


MEMORY OPTIMIZATIONS

A Cutting-edge Memory Optimization Method for Embedded AI Accelerators

AI hardware accelerators are playing a growing role in enabling AI in embedded systems such as smart devices. In most cases NPUs need a dedicated, tightly coupled high-speed memory to run efficiently. This memory has a major impact on performance, power consumption and cost. In this presentation, Arnaud Collard, Technical Leader for Embedded AI at 7 Sensing Software, dives deep into his company’s state-of-the-art memory optimization method that significantly decreases the size of the required NPU memory. This method utilizes processing by stripes and processing by channels to obtain the best compromise between memory footprint reduction and additional processing cost. Through this method, the original neural network is split into several pieces that are scheduled on the NPU. Collard shares results that show this technique yields large memory footprint reductions with moderate increases in processing time. He also presents his company’s proprietary ONNX-based tool that automatically finds the optimal network configuration and schedules the subnetworks for execution.
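To make the stripe-based trade-off concrete, here is a minimal sketch of how processing a layer in horizontal stripes shrinks the peak activation memory an NPU must hold. All tensor sizes, the int8 precision and the halo width are illustrative assumptions for this example; this is not 7 Sensing Software's actual implementation.

```python
# Hedged sketch: compare activation memory for whole-frame vs. striped
# execution of one conv layer. Numbers below are illustrative only.

def activation_bytes(h, w, channels, bytes_per_elem=1):
    """Memory to hold one full activation tensor (int8 assumed)."""
    return h * w * channels * bytes_per_elem

def striped_peak_bytes(h, w, channels, n_stripes, halo=1, bytes_per_elem=1):
    """Peak memory when the frame is cut into n_stripes horizontal
    stripes. Each stripe carries a 'halo' of overlapping rows so the
    convolution window stays valid at stripe borders -- that overlap
    is the extra processing cost the talk mentions."""
    stripe_h = -(-h // n_stripes) + 2 * halo  # ceil division + overlap
    return stripe_h * w * channels * bytes_per_elem

full = activation_bytes(224, 224, 64)
striped = striped_peak_bytes(224, 224, 64, n_stripes=8)
print(f"full frame: {full // 1024} KiB, 8 stripes: {striped // 1024} KiB")
```

With these assumed dimensions, eight stripes cut the peak activation footprint by roughly 7x at the cost of recomputing the one-row halos, illustrating the memory-versus-compute compromise the method optimizes.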

The Importance of Memory for Breaking the Edge AI Performance Bottleneck

In recent years there’s been tremendous focus on designing next-generation AI chipsets to improve neural network inference performance. As higher-performance processors are called upon to execute ever-larger models—from vision transformers to LLMs—memory bandwidth is frequently the key performance bottleneck. With the demands for memory bandwidth and storage capacity varying across applications, it is critical to identify the right memory technologies that match the complexity and performance needs of your application. In this talk, Wil Florentino, Senior Marketing Manager for Industrial/IIoT at Micron Technology, explores how to choose the right memory to break the performance bottleneck in edge AI systems. He also highlights recent memory technology developments that are enabling higher memory performance and capacity at the edge.
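The memory-bound-versus-compute-bound distinction behind this talk can be sketched with a simple roofline estimate: attainable throughput is the smaller of the chip's peak compute and the memory bandwidth multiplied by the workload's arithmetic intensity. The hardware figures below are assumed for illustration and are not Micron's numbers.

```python
# Hedged roofline sketch: is a workload compute-bound or memory-bound?
# Hardware figures are assumptions chosen for illustration.

def attainable_tops(peak_tops, bandwidth_gbs, ops_per_byte):
    """Achievable throughput is capped by either peak compute or by
    bandwidth (GB/s) times arithmetic intensity (ops/byte)."""
    memory_bound_tops = bandwidth_gbs * ops_per_byte / 1000.0
    return min(peak_tops, memory_bound_tops)

# Example: a hypothetical 40 TOPS NPU fed by 25.6 GB/s of DRAM bandwidth.
# LLM decode steps often sit near 1 op/byte; conv nets can reach
# hundreds or thousands of ops/byte.
for intensity in (1, 100, 2000):
    t = attainable_tops(peak_tops=40, bandwidth_gbs=25.6, ops_per_byte=intensity)
    print(f"{intensity:5d} ops/byte -> {t:.4f} TOPS attainable")
```

Under these assumptions, a low-intensity LLM decode workload realizes a tiny fraction of peak TOPS while a high-intensity vision model saturates the compute units, which is why matching memory technology to the workload matters as much as the NPU rating.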


UPCOMING INDUSTRY EVENTS

Silicon Slip-Ups: The Ten Most Common Errors Processor Suppliers Make (Number Four Will Amaze You!) – BDTI Webinar: August 22, 2024, 9:00 am PT

Leveraging Synthetic Data for Real-time Visual Human Behavior Analysis Using the SKY ENGINE AI Platform – SKY ENGINE AI Webinar: September 26, 2024, 9:00 am PT

Embedded Vision Summit: May 20-22, 2025, Santa Clara, California

More Events


FEATURED NEWS

Quadric’s 3rd Generation Chimera GPNPU Product Family Expands to 864 TOPS, Adds Automotive-grade Safety Enhanced Versions

Intel AI Platforms Support Microsoft Phi-3 GenAI Models

AMD Accelerates the Pace of AI Innovation and Leadership with an Expanded AMD Instinct GPU Roadmap

AiM Future Brings GenAI Applications to Mainstream Consumer Devices

Vision Components' MIPI Cameras with GMSL2 Support Cable Lengths of Up to 10 Meters

More News


EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Qualcomm Snapdragon X Elite Platform (Best Edge AI Processor)

Qualcomm’s Snapdragon X Elite Platform is the 2024 Edge AI and Vision Product of the Year Award Winner in the Edge AI Processors category. The Snapdragon X Elite is the first Snapdragon based on the new Qualcomm Oryon CPU architecture, which outperforms every other laptop CPU in its class. The Snapdragon X Elite’s heterogeneous AI Engine has a combined performance of greater than 70 TOPS across the NPU, CPU and GPU.

The Snapdragon X Elite includes a powerful integrated NPU capable of delivering up to 45 TOPS. In addition to raw performance, on-device AI benefits from a model’s accuracy and response time, as well as the speed for large language models, measured in tokens per second. The Snapdragon X Elite can run a 7 billion parameter Llama 2 model on-device at 30 tokens per second. The Oryon CPU subsystem outperforms the competitor’s high-end 14-core laptop chip in peak performance by 60%, and can match the competitor’s performance while using 65% less power. When compared to the leading performing x86 integrated GPU, Snapdragon X Elite delivers up to 80% faster performance, and can match the competitor’s highest performance with 80% less power consumption. Developers will have access to the latest AI SDKs too. Snapdragon X Elite features support for all of the leading AI frameworks, including TensorFlow, PyTorch, ONNX, Keras and more.
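The link between the tokens-per-second figure and memory is worth a back-of-the-envelope check: during decode, each generated token typically streams the full weight set from memory, so sustained token rate is bounded by bandwidth divided by model size. The 4-bit weight assumption below is hypothetical; Qualcomm has not stated the quantization used for the 30 tokens-per-second figure.

```python
# Hedged estimate: minimum memory bandwidth to sustain an LLM decode
# rate, assuming one full weight pass per token. Quantization width
# is an assumption, not a published Qualcomm figure.

def min_bandwidth_gbs(params_billion, bytes_per_param, tokens_per_sec):
    """Lower bound on weight traffic (GB/s) to hit a decode rate."""
    model_gb = params_billion * bytes_per_param
    return model_gb * tokens_per_sec

# 7-billion-parameter Llama 2 at 30 tokens/s, assuming 4-bit (0.5 B) weights:
bw = min_bandwidth_gbs(params_billion=7, bytes_per_param=0.5, tokens_per_sec=30)
print(f"~{bw:.0f} GB/s of weight traffic per second")
```

Under this assumption the claimed rate implies on the order of 100 GB/s of weight traffic, which is why tokens per second is a memory-system metric as much as an NPU one.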

Please see here for more information on Qualcomm’s Snapdragon X Elite Platform. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.
