Developing FPGA-based edge-AI Solutions

In my article published in March 2023, I explored the role of video analytics in enhancing CCTV infrastructures. I emphasized the growing need for edge computing and highlighted FPGAs as an ideal choice for edge-AI video analytics applications. Since then, many companies and developers have expressed interest in developing FPGA-based edge-AI solutions and are keen to start their journey.

As part of my ongoing series of technology-business-focused articles, this piece provides insights into the how-to of adopting FPGAs for edge-AI video analytics solution development. Join me to explore the options and the steps involved.

FPGA advantages for your edge-AI

FPGAs are widely used in fields like military-aerospace, communications, medical devices, and more, a testament to their reliability and high performance. FPGAs offer advantages in power efficiency (critical at the edge), low latency (required for real-time processing), and hardware customization, making them well suited to edge applications. They also provide inherent security benefits with on-chip encryption algorithms. Moreover, modern FPGA SoCs integrate FPGA fabric, processor cores, memory, and peripherals into a compact chip. This integration optimizes space utilization in edge devices and embedded systems.

Developer skill sets

Developing an FPGA-based embedded-vision edge-AI solution encompasses several stages: hardware design, machine-learning algorithm development, integration of AI models, and software integration for system applications. The ultimate goal is to act on inference results locally or to transmit analytics to the cloud for further processing. Hardware design includes selecting the FPGA and configuring the FPGA fabric for efficient data processing. Software development covers creating drivers and communication protocols for smooth interaction with the FPGA. Integrating AI models requires adapting and optimizing them for the FPGA architecture.

To handle the hardware, you need FPGA engineers with knowledge of operating systems and middleware. For software and application integration, you need engineers familiar with Python or other high-level languages, with a data-science background for model development, tuning, optimization, and performance testing and validation.

Development Options & Approach

Companies currently follow one of two primary paths: developing custom FPGA-based edge-AI solutions from the ground up, or adopting pre-built hardware such as AMD's KRIA starter kits or PlanetSpark's X7 edge-AI box (both profiled briefly later in this article) and porting their ML algorithms and applications onto it. The decision hinges on factors like the company's core competencies, expertise, available resources, and specific business needs.

For companies primarily focused on analytics software development, pre-built hardware is often an attractive option as it enables them to accelerate their time to market.

Hardware development & selecting the FPGA:

If you decide to design your own hardware, I recommend starting with the XCZU7EV MPSoC. This device family is widely recognized and commonly used for embedded-vision applications. It belongs to the Zynq UltraScale+ MPSoC (Multiprocessor System-on-Chip) family and combines programmable FPGA fabric with processors: ARM Cortex-A53 application processors (for image and video processing) and Cortex-R5 real-time processors (for real-time tasks and deterministic performance). The device also includes various other integrated peripherals.

This integration allows efficient space utilization and flexible hardware customization. The platform also offers IP cores and libraries that accelerate and optimize image-processing tasks.

For hardware development, the Vivado Design Suite is used for implementation and verification. It supports IP blocks, analysis tools, synthesis, place-and-route, and debugging, and offers various design-entry methods, including schematic entry, HDLs (such as Verilog and VHDL), and high-level synthesis from C, C++, or SystemC, depending on your preference.

Software Development in the Vitis Environment:

The Vitis software platform is aimed at software developers. It offers pre-built, optimized libraries for image and video processing, signal processing, and data analytics, and integrates tools and runtime support.

Within the platform, Vitis AI focuses on deep-learning and AI applications, providing a complete development flow for deploying AI models on AMD devices. It supports popular deep-learning frameworks like TensorFlow, PyTorch, Caffe, and ONNX Runtime, enabling acceleration of a wide range of applications, including custom models.
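
To make this concrete, here is a minimal sketch of post-training quantization with the Vitis AI PyTorch quantizer. It assumes the Vitis AI Docker environment (where the pytorch_nndct package ships) and uses an off-the-shelf ResNet-18 purely as a stand-in for your own model; exact API details can vary between Vitis AI releases.

```python
# Hedged sketch: post-training quantization with the Vitis AI PyTorch flow.
# Paths, and ResNet-18 as the example model, are placeholders.
import torch
from torchvision.models import resnet18
from pytorch_nndct.apis import torch_quantizer  # ships in the Vitis AI Docker image

model = resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Pass 1: 'calib' mode collects activation statistics from representative data.
quantizer = torch_quantizer("calib", model, (dummy_input,), output_dir="quantize_result")
quant_model = quantizer.quant_model
quant_model(dummy_input)            # in practice, loop over a real calibration dataset
quantizer.export_quant_config()

# Pass 2: 'test' mode evaluates the quantized model and exports the .xmodel
# that the Vitis AI compiler consumes.
quantizer = torch_quantizer("test", model, (dummy_input,), output_dir="quantize_result")
quantizer.quant_model(dummy_input)
quantizer.export_xmodel(output_dir="quantize_result")
```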

Vitis AI supports a variety of pre-trained models that can be used for AI inference on FPGA devices, covering domains such as image classification, object detection, semantic segmentation, and more. These pre-trained models serve as a starting point, allowing developers to leverage existing models and fine-tune or deploy them on their FPGA devices using Vitis AI. Popular supported models include ResNet, MobileNet, YOLO, and more.

Within the Vitis AI ecosystem, the Vitis Video Analytics SDK provides essential plugins for video handling, pre-processing, deep learning, and post-processing. Developers are free to use C/C++ or Python code.
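
As an illustration, here is a minimal Python sketch of driving a GStreamer analytics pipeline of the kind the SDK plugs into. The vvas_xinfer element and its JSON config properties are assumptions based on VVAS plugin naming; check the SDK documentation for your release before relying on them.

```python
# Hedged sketch: launching a GStreamer video-analytics pipeline from Python.
# The vvas_xinfer element and its config file names are assumptions.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=input.h264 ! h264parse ! omxh264dec "   # decode on the VCU
    "! vvas_xinfer preprocess-config=pre.json infer-config=infer.json "
    "! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until end-of-stream or an error, then shut the pipeline down.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```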

Development Process:

Having explored the hardware and software development environments, let us examine the development options. As mentioned earlier, there are two choices:

1. Build the complete embedded-vision edge-AI solution from the ground up (i.e., develop both hardware and software).

Some suggestions (for terms marked with *, refer to the Glossary):

  • Requires knowledge of the MPSoC FPGA architecture, Vivado*, PetaLinux*, and Vitis AI*.
  • Choose the right device based on the frames-per-second (FPS) and number-of-channels (Ch) requirements.
  • An understanding of the DPU* integration flow is required. The DPU's instruction set, tooling, and memory optimization differ, so the developer will need to modify the model prior to porting.
  • Create a Linux boot image for the custom hardware.
  • Run an AMD (Xilinx) model and application; you can download them from GitHub to jump-start.
  • Use Vitis AI to compile your custom model and application (a minimal compile sketch follows this list).
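
For the compile step above, invoking the Vitis AI compiler on a quantized model might look like the following; the arch.json path identifies the target DPU configuration and is board-specific (the one shown is a placeholder), and the tool is available inside the Vitis AI Docker image.

```python
# Hedged sketch: compiling a quantized .xmodel for the target DPU.
# The model name and arch.json path are placeholders for your own target.
import subprocess

subprocess.run(
    [
        "vai_c_xir",
        "--xmodel", "quantize_result/ResNet_int.xmodel",   # output of the quantizer
        "--arch", "/opt/vitis_ai/compiler/arch/DPUCZDX8G/ZCU104/arch.json",
        "--output_dir", "compiled",
        "--net_name", "my_custom_net",
    ],
    check=True,
)
```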

The Flow:

[Flow diagram omitted. Fig ref: from AMD]

2. Or: develop your algorithm and application software and port it to readily available hardware such as AMD's KRIA board or the PlanetSpark X7 edge-AI box.

Some suggestions:

  • The developer is more software-focused and uses the Vitis AI tool chain.
  • No need to create a Vivado DPU design in this case; a ready BSP/PetaLinux image for the hardware is available from the AMD website.
  • Use the SD-card PetaLinux image, downloadable from the AMD website, to boot the board.
  • Run an AMD model and application, downloadable from GitHub.
  • Use Vitis AI to compile your custom model and application and check performance (see the runtime sketch after this list).
  • A PoC (Proof of Concept) can be done using either the KV260 or the PlanetSpark X7 box. For deployment, the PlanetSpark X7 box is a good choice; a Linux image and porting support are available along with the AI box.
  • As with KRIA, on the X7 box the developer uses the Vitis AI tool chain to compile the model and develop the application.
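
The runtime sketch referenced above: a minimal example of loading a compiled .xmodel and running one inference with the VART Python API on the board. The model path, tensor dtype, and dummy input are placeholders for your own network and preprocessing.

```python
# Hedged sketch: one inference on the DPU via the VART Python API.
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("compiled/my_custom_net.xmodel")
# The DPU-executable part of the graph is a child subgraph tagged "DPU".
dpu_subgraphs = [
    s for s in graph.get_root_subgraph().toposort_child_subgraph()
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
]
runner = vart.Runner.create_runner(dpu_subgraphs[0], "run")

in_t = runner.get_input_tensors()[0]
out_t = runner.get_output_tensors()[0]
# int8 is typical for DPU tensors; replace the zeros with a preprocessed frame.
input_data = np.zeros(tuple(in_t.dims), dtype=np.int8)
output_data = np.zeros(tuple(out_t.dims), dtype=np.int8)

job_id = runner.execute_async([input_data], [output_data])
runner.wait(job_id)
print("top-1 class index:", int(output_data.reshape(-1).argmax()))
```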

[Figure: Vitis Development Platform. Fig ref adapted from AMD]

Once the vision application meets your desired performance and accuracy goals, deploy it on the target edge device(s). Monitor the application's performance over time and make updates or improvements as needed.
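
As a sketch of what such monitoring might look like in practice, the loop below measures sustained throughput so that regressions show up after model or image updates; process_frame is a hypothetical stand-in for your capture-plus-inference step.

```python
# Hedged sketch: a lightweight FPS monitor for a deployed vision pipeline.
import time

def monitor_fps(process_frame, report_every=100):
    """Run the pipeline step indefinitely, printing sustained FPS periodically."""
    count, start = 0, time.monotonic()
    while True:
        process_frame()                      # capture + inference, supplied by you
        count += 1
        if count % report_every == 0:
            elapsed = time.monotonic() - start
            print(f"sustained FPS: {count / elapsed:.1f}")
```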

KRIA KV260 Vision AI Starter Kit:

The KV260 (KRIA) Vision AI Starter Kit is out-of-the-box ready for application development by AI, embedded-software, and hardware developers, and is designed for vision-AI applications.

The KV260, with its K26 SOM, offers a simple way to develop solutions for initial evaluation and early development. It combines FPGA acceleration, AI engines, and a comprehensive software framework.

Its salient features:

  • Multi-camera support, up to 8 interfaces
  • 3 MIPI sensor interfaces, plus USB cameras
  • Built-in ISP component
  • 1Gb Ethernet
  • USB 3.0/2.0 ports
  • Extendable to any sensor or interface
  • Very low cost

PlanetSpark AI Box: from PoC to Volume Project Deployment:

PlanetSpark, a Singapore edge-AI solutions company, has developed the X7, a state-of-the-art edge-AI box based on AMD's Zynq UltraScale+ MPSoC ZU7EV FPGA. You can use the X7 to carry your models and analytics software from the PoC stage through to actual project deployment.

The PlanetSpark X7 box is DPU-enabled, which offers advantages over a general-purpose GPU for HPC (high-performance computing) applications and for data-intensive tasks like ML, AI, and data analytics.

Its key features:

  • A compact box that can support 8 RTSP video streams (a quick stream sanity check is sketched after this list)
  • 4x USB 3.0 ports
  • 1x DisplayPort/HDMI port
  • 1Gb Ethernet
  • 2x microSD card slots for your applications
  • Compact form factor, 151 x 111 x 53 mm
  • Operating temperature -30°C to +70°C
  • CE certified
  • The 8-channel AI box is stackable and can be expanded to cater for more channels
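
The stream sanity check referenced in the list above: a minimal OpenCV sketch that confirms the box can pull a frame from one RTSP channel. The camera URL is a placeholder.

```python
# Hedged sketch: grab a single frame from an RTSP camera with OpenCV.
import cv2

cap = cv2.VideoCapture("rtsp://192.168.1.10:554/stream1")  # placeholder URL
ok, frame = cap.read()
if ok:
    print("got frame:", frame.shape)   # e.g. (1080, 1920, 3)
else:
    print("failed to read from RTSP source")
cap.release()
```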

PlanetSpark-Singapore also offers an ecosystem-partners program that enables companies to join its ecosystem, fostering joint development of edge-AI-based solutions for the market. This collaborative approach can open up opportunities for knowledge sharing, accelerated development, and enhanced market reach, vital in today's shared economy.

Conclusion

I hope the above gives you both a head start and an overview of how to go about building your edge-AI solution on an FPGA platform.

In a world that is continuously moving towards edge computing, the future of FPGA-based edge-AI solutions is promising. And as more developers embrace FPGAs for their edge-AI solutions, we can expect to see even more transformative applications of this technology.

FPGA-based edge-AI solutions are transforming industries like surveillance, logistics, healthcare, agriculture, retail, and more. They offer advantages over CPU+GPU architectures, including power efficiency, low latency, hardware customization, and on-chip encryption.

Stay tuned for the next write-up in my technology-business-focused article series, where I will discuss real-world examples of utilizing AI-box technology in various popular use cases within the field of video analytics.


Note: Vivado, Vitis, Vitis AI, KRIA, and Zynq referred to in this article are trademarks of AMD.


My special thanks to Yingyu Xia, Aditya S., Rahul Soni, David Wang Qing Sheng, Apratim Basu, Vikram Vummidi, Rick Law, my colleagues at Excelpoint, colleagues of Excelpoint Academy, and members of PlanetSpark R&D for their insights and guidance, which contributed to the making of this article.


Writer: RD Pai, VP Business Development @ Excelpoint ([email protected]) / https://www.dhirubhai.net/in/paird

An engineer in electronics & telecommunications, a certified postgraduate in data science and business analysis, and a business & sales leader across diverse roles and domains, from semiconductor components to AI & IoT edge solutions, spanning Asia-Pacific. A proponent of digital transformation, an Exco member of SGTech, and a member of SCS and AEIS.


Glossary:

Vivado: AMD's design suite for FPGA hardware implementation and verification.

PetaLinux: AMD's tool flow for building embedded Linux images for its devices.

Vitis AI: AMD's development stack for quantizing, compiling, and deploying AI models on AMD devices.

DPU: Deep-learning Processor Unit, AMD's configurable engine for accelerating neural-network inference.

References:

A) If you want a hands-on workshop or training, you may contact Excelpoint Academy at

B) If you are a C, C++, or Python embedded engineer, or a video-analytics software/algorithm development company, and wish to use a ready KRIA kit or PlanetSpark AI box to port your software (which may currently run on a GPU):

a. To understand the FPGA, Vivado, Vitis, and PetaLinux:

b. To use an EVB and run your ready-to-use applications:

c. To understand the Vitis acceleration flow and port your custom models to the edge-AI box:

d. To understand Vitis DPU integration:

e. To understand GPU vis-à-vis FPGA architecture:

C) If you are an embedded-solutions company and want to enter AI, using an edge-AI box or KRIA to design and build an AI solution:

a. To understand capability and run your AI/ML applications:

b. To run a Xilinx pre-trained model and check the FPS benchmark to decide on the hardware:

D) If you are an SI looking for ready edge-AI solutions for your use case:

For partners: https://www.xilinx.com/products/app-store/kria.html

Contact PlanetSpark: www.planetspark.io

E) If you are an AI-product maker and want to provide an AI hardware solution with your AI software:

a. Start with a development board.

b. Understand how the flow works, run an AI application, and check performance.

c. Go through the Vitis acceleration flow to develop your AI software.

F) If you are an AI edge-box maker who wants to build a box on an AMD FPGA:

a. Go through the MPSoC architecture:

b. Use the reference schematic to design your hardware.

c. Understand the EVB/EVK.

d. Which IDE and libraries to use:

IDE: Vivado and Vitis for DPU integration and acceleration libraries.

IDE: PetaLinux for Linux image development.

IDE: Vitis AI for ML model compilation.

e. To create the hardware schematic, refer to the AMD (Xilinx) EVB schematic (you will need to register, or log in if you already have an account):

G) Additional insights:

a. A typical development setup:

A server with a high-end Intel processor, 64GB of memory, a 2TB hard drive, and an Nvidia RTX GPU card for the training environment.

b. How do you compile and run your code in the Vitis environment?

c. How do you test your code? You can use an AMD EVB to test it.

d. How do you train your dataset?

e. How do you quantize?

f. How do you test accuracy?

g. About the frameworks:

h. About the libraries:

i. About the Linux, PetaLinux, or Ubuntu environment:
