Object Detection and Tracking Solutions for Robotics and Agriculture

Welcome to the September edition of the Mixtile IoT Newsletter. This month, we share two innovative applications of Mixtile single-board computer products: robotic sensing using the YDLIDAR OS30A in conjunction with the Mixtile Blade 3, and AI-based bee detection and tracking with the Mixtile Edge 2 Kit. Discover the powerful object detection and tracking capabilities of these solutions.

Robotic Sensing with YDLIDAR OS30A and Mixtile Blade 3

1. Project Context

In the world of robotics, effective navigation is crucial for the successful deployment of autonomous systems. However, relying solely on onboard sensors can limit a robot’s ability to perceive its environment accurately, especially in complex or dynamic settings. To address this, we can enhance a robot’s navigational capabilities by integrating external sensors that provide a more comprehensive understanding of its surroundings.

This series of articles will explore how to achieve this by leveraging a YDLIDAR 3D depth camera, external to the robot, combined with a Mixtile Blade 3 single-board computer running ROS1. The objective is to gather 3D point cloud data and use YOLO (You Only Look Once) for object detection. This setup will allow us to build a more robust sensing system that enhances the robot’s ability to navigate and interact with its environment effectively.

2. Related Hardware and Software

Mixtile Blade 3 Single Board Computer

The Mixtile Blade 3 is a high-performance single-board computer designed to meet the demanding needs of edge computing applications, including robotics. Powered by the Octa-Core Rockchip RK3588, the Blade 3 delivers robust processing capabilities in a compact Pico-ITX 2.5-inch form factor.

Key features include:

  • Octa-Core Rockchip RK3588: Ensures powerful performance for complex computations and real-time processing.
  • Stackable via Low-latency 4x PCIe Gen3: Offers the flexibility to expand and scale your hardware setup easily.
  • Rich Interface: Provides a wide range of connectivity options, making it versatile for various peripheral integrations.
  • Versatile Edge Computing Unit: Ideal for tasks requiring intensive data processing and quick response times, making it a perfect fit for advanced robotics projects.

Benchmarks and the Choice of YOLOv8 NCNN for Object Detection

The primary reason for choosing YOLOv8 with NCNN over the Torch implementation or previous versions like YOLOv5 is the drastic improvement in inference speed. On edge devices like the Mixtile Blade 3, which rely on efficient use of computational resources, NCNN provides a much faster alternative for real-time object detection. This is critical for applications where quick decision-making is essential, such as in autonomous navigation and obstacle avoidance.

Moreover, NCNN’s lightweight nature allows it to run efficiently on ARM-based processors, making it an ideal fit for the Mixtile Blade 3’s architecture. The benchmarks clearly show that YOLOv8 with NCNN outperforms other configurations in both speed and efficiency, which directly translates into better performance for real-time robotic applications.
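The benchmark comparison above comes down to measuring average per-frame latency and throughput. As a minimal sketch of how such numbers can be collected, the following pure-Python timing harness uses a stand-in workload in place of a real YOLOv8 NCNN forward pass (the `dummy_infer` function is hypothetical; in practice you would time the actual inference call):

```python
import time

def benchmark(infer, n_frames=100):
    """Measure average latency (ms) and throughput (FPS) of an inference callable."""
    # Warm-up run so one-time setup costs don't skew the numbers.
    infer()
    start = time.perf_counter()
    for _ in range(n_frames):
        infer()
    elapsed = time.perf_counter() - start
    avg_ms = elapsed / n_frames * 1000
    fps = n_frames / elapsed
    return avg_ms, fps

# Stand-in for a real YOLOv8 NCNN forward pass (hypothetical workload).
def dummy_infer():
    sum(i * i for i in range(10_000))

avg_ms, fps = benchmark(dummy_infer)
print(f"avg latency: {avg_ms:.2f} ms, throughput: {fps:.1f} FPS")
```

Comparing these two numbers across configurations (YOLOv5 vs. YOLOv8, Torch vs. NCNN) is what makes the speed advantage of the NCNN build measurable rather than anecdotal.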

3. Project Results and Adjustments

Benchmarking YOLO using the Mixtile Blade 3 and YDLIDAR OS30A depth camera:

https://youtu.be/DnEgSWMh5uw


Impact of IR Intensity on Object Detection and Depth Sensing

In this project, we observed that adjusting the IR intensity setting of the YDLIDAR OS30A 3D Depth Camera significantly affects both object detection and depth sensing capabilities. Here’s a summary of our findings and how to optimize the settings for your specific use case.

Observations:

IR Intensity Set to 0:

  • Impact on Object Detection: With the IR intensity set to 0, object detection using YOLOv8 performed significantly better. This setting minimizes interference from the camera’s IR sensors, leading to clearer images and more accurate object detection.
  • Impact on Depth Sensing: Disabling IR intensity also disables the camera’s 3D depth sensing capabilities. This means that while object detection accuracy improves, the camera will not provide depth data, which might be critical depending on the application.

IR Intensity Set to 3 (Default):

  • Impact on Object Detection: At the default IR intensity of 3, the 3D depth sensing works well, but it introduces noise that negatively impacts the performance of object detection. The camera’s IR emissions create reflections and artifacts in the captured images, leading to less accurate detections.
  • Impact on Depth Sensing: Depth sensing is fully operational, providing 3D point clouds that can be useful for tasks like obstacle avoidance and environment mapping.

Potential Solutions

To overcome these limitations, a few strategies can be considered:

  • Alternate Between Modes: One approach could be to alternate between modes—switching between high IR intensity for depth sensing and low or zero IR intensity for object detection. By running these modes in sequence, the robot could gather depth data and then switch to a more optimized setting for object detection.
  • Fine-Tune YOLO Weights: Another solution is to fine-tune the YOLO model weights specifically for the environment and the specific characteristics of the YDLIDAR OS30A camera. This could improve the model’s ability to detect objects accurately, even with the IR intensity set at levels that enable depth sensing.
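The alternating-mode strategy above can be sketched as a simple schedule that toggles the camera's `ir_intensity` parameter between the two settings. This is a hypothetical illustration: `set_ir_intensity` stands in for the real parameter update (e.g. a dynamic_reconfigure call in ROS1), and the fixed two-step cycle is one possible schedule, not the project's implementation.

```python
from itertools import cycle

DEPTH_MODE = 3       # default IR intensity: depth sensing enabled
DETECTION_MODE = 0   # IR off: cleaner frames for YOLO, no depth data

def mode_schedule():
    """Yield the IR intensity to apply on each processing cycle."""
    return cycle([DEPTH_MODE, DETECTION_MODE])

applied = []
def set_ir_intensity(value):
    # Placeholder for the actual camera parameter update.
    applied.append(value)

schedule = mode_schedule()
for _ in range(4):  # four cycles: depth, detect, depth, detect
    set_ir_intensity(next(schedule))

print(applied)  # [3, 0, 3, 0]
```

In a real system the switch period would be tuned to how quickly the scene changes, since each mode is blind to what the other measures.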

These solutions will be explored in more detail in the next article, where I will focus on refining the object detection capabilities to accurately detect the robot and its surroundings under varying conditions. By fine-tuning the YOLO weights and possibly integrating a mode-switching strategy, we aim to optimize both object detection and depth sensing simultaneously.

Adjusting the IR Intensity

You can adjust the IR intensity on the fly using the RQT Reconfigure tool:

rosrun rqt_reconfigure rqt_reconfigure

In the rqt_reconfigure interface, navigate to the /camera_BMVM0530A1_node settings and modify the ir_intensity parameter. Set it to 0 for better object detection or leave it at 3 for depth sensing.
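If you prefer to script the change rather than use the GUI, the ROS1 dynamic_reconfigure package also provides a command-line tool. Assuming the node exposes the parameter under the same name as in the GUI, something like the following should work:

```shell
# Set IR intensity to 0: better object detection, depth sensing disabled
rosrun dynamic_reconfigure dynparam set /camera_BMVM0530A1_node ir_intensity 0

# Restore the default of 3 to re-enable depth sensing
rosrun dynamic_reconfigure dynparam set /camera_BMVM0530A1_node ir_intensity 3
```

Scripting the parameter this way is also the natural building block for the mode-switching strategy discussed earlier.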

https://youtu.be/7IF-Xx6-QsQ

Conclusion

The ability to dynamically adjust the IR intensity provides flexibility in balancing the trade-offs between object detection and depth sensing. By exploring further strategies such as alternating modes or fine-tuning YOLO weights, the camera’s performance can be optimized to suit a wide range of robotic applications. Stay tuned for the next article, where we will delve into these enhancements to achieve more accurate robot detection and sensing.

Explore the whole project and its detailed tutorial:

https://www.mixtile.com/capture-3d-point-clouds-and-detect-objects/


AI-based Bee Detection and Tracking with Mixtile Edge 2 Kit

1. Project Brief

The Mixtile Edge 2 Kit is a high-performance ARM single-board computer. It comes in two variants: 2GB of LPDDR4 DRAM with 16GB of eMMC flash storage, or 4GB of LPDDR4 DRAM with 32GB of eMMC flash storage. The board ships with Android 11 preinstalled and runs an Ubuntu Linux operating system in an Android container. It supports a wide range of connectivity options (Bluetooth, 4G/5G cellular, GPS, LoRa, Zigbee, and Z-Wave); these require add-on modules, but Wi-Fi, a Gigabit Ethernet port (RJ45), and a serial port (RS-485) are available onboard by default. Because it provides an industry-standard RS-485 port and comes in a sturdy metal case, it appears well suited to industrial projects. We used an official Raspberry Pi 5 power supply to power the Mixtile Edge 2 Kit.

We used it in agriculture for bee detection, which can be essential for monitoring the health and survival of bees. This project covers setting up the Mixtile Edge 2 Kit and building a custom photo dataset from video to train a custom YOLOv5 bee detection model. YOLOv5 models must be trained on labeled data to learn the object classes in that data. We gathered the data from video and trained the model on a PC.
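A common way to turn raw video into a labelable photo dataset is to sample frames at a fixed time interval rather than keeping every frame, which avoids near-duplicate images from consecutive frames. The following is a minimal sketch of the index-selection step (the function name and parameters are illustrative; the actual frame extraction would then be done with a tool such as OpenCV or ffmpeg):

```python
def sample_frame_indices(total_frames, fps, every_s=1.0):
    """Pick frame indices to extract from a video for a training dataset.

    Sampling one frame every `every_s` seconds keeps the dataset small
    and reduces near-duplicate images from consecutive frames.
    """
    step = max(1, round(fps * every_s))
    return list(range(0, total_frames, step))

# Example: a 10-second clip at 30 FPS, sampled once per second.
indices = sample_frame_indices(total_frames=300, fps=30, every_s=1.0)
print(indices)  # [0, 30, 60, ..., 270]
```

The sampled frames are then annotated with bounding boxes (e.g. in YOLO label format) before training.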

2. Project Results

Training results are summarized in the following table:

Demonstration videos are available at the URLs below, with detection running entirely on the Mixtile Edge 2 Kit.
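On-device inference with a custom YOLOv5 model is typically run through the YOLOv5 repository's detect.py script. The filenames below are hypothetical placeholders for the trained weights and input video, and this is a sketch of the usual invocation rather than the project's exact command:

```shell
# Run the custom bee-detection model on a video, entirely on the device.
# best.pt and bees.mp4 are placeholder names for the trained weights
# and the input clip.
python detect.py --weights best.pt --source bees.mp4 --conf 0.25
```

Annotated output frames are written to the runs/detect directory by default.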

https://youtu.be/wZbo47IuIVM

3. Conclusion

After testing, we found that the Mixtile Edge 2 Kit is designed for a wide range of applications, from industrial use, IoT devices, and smart home automation to AI and edge detection, all of which it handles capably. It is a low-power device with many built-in connectivity options.

Explore the whole project and its detailed tutorial:

https://www.mixtile.com/ai-based-bee-detection-and-tracking/

