Autoware on Yonohub (Vision pipeline) — Part 3

This article is part of the Autoware series. Check out the full series: Part 1 and Part 2.

We are pleased to announce the release of Autoware Vision blocks on Yonohub. With these blocks, you can use Autoware.AI directly on Yonohub without downloading it or setting up any particular environment. Seamlessly integrate your algorithms and blocks, and connect them to any dataset or ROSBag. Share and publish your blocks directly on Yonohub to showcase your solution while protecting your IP. Sign up now and start using the Autoware.AI blocks.

The previous article gave a brief introduction to Autoware and showcased the Autoware.AI Localization and Perception blocks, which can be used directly as plug and play. In this article, we'll show how to build a pipeline using vision algorithms from Autoware.AI. We've also packaged the blocks so that they can be used easily as drag and drop: you won't waste time installing libraries and building packages. Just drag in the blocks you need, adjust the parameters, and click the launch button. The vision blocks are also compatible with the datasets available on Yonohub: nuScenes, KITTI, BDD, Comma.ai, and ApolloScape.

Learning Objectives

In this article, we’ll discuss:

  • Autoware Segmentation using ENet.
  • Autoware 2D Object Detection using YOLO.
  • Autoware Object Tracking using Beyond Pixels Tracker.

ENet Segmentation

Semantic segmentation is one of the tools that help autonomous vehicles perceive their surrounding environment, and it plays an essential role in finding target objects. ENet stands for "efficient neural network," and it is designed for low-latency operation. According to its authors, it runs up to 18 times faster, requires 75 times fewer FLOPs, and has 79 times fewer parameters than previous segmentation models, while delivering comparable or better accuracy.
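To give a feel for what a segmentation block outputs, here is a minimal sketch of the downstream visualization step: turning a per-pixel class-ID map into an RGB image. The class IDs and color palette below are illustrative assumptions, not Autoware's actual ones.

```python
# Sketch: colorize a per-pixel class-ID map (as an ENet-style segmentation
# block would publish) into RGB rows for display.
# The class list and palette are illustrative, not Autoware's real mapping.

PALETTE = {
    0: (0, 0, 0),        # background
    1: (128, 64, 128),   # road
    2: (220, 20, 60),    # pedestrian
    3: (0, 0, 142),      # vehicle
}

def colorize(class_map):
    """class_map: rows of integer class IDs -> rows of RGB tuples."""
    return [[PALETTE.get(c, (255, 255, 255)) for c in row] for row in class_map]

seg = [[1, 1, 3],
       [1, 3, 3]]
rgb = colorize(seg)   # road pixels become purple, vehicle pixels dark blue
```

In the real pipeline this mapping happens inside the visualizer block; the point is that the network's output is just a dense class-ID image.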


YOLO Object Detection

YOLO stands for "You Only Look Once," and it's a powerful, highly efficient, real-time object detection algorithm. It is regularly updated, with each version improving on the last in speed and accuracy.


YOLO consists of a single CNN that predicts multiple bounding boxes and class probabilities for those boxes. It’s trained on full images and directly optimizes detection performance.
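Because the network predicts many overlapping boxes per object, a post-processing step filters them by confidence and applies non-maximum suppression (NMS). Here is a hedged sketch of that step in plain Python; the box format and thresholds are illustrative, not Autoware's exact implementation.

```python
# Sketch of YOLO-style post-processing: drop low-confidence boxes, then
# greedily suppress boxes that overlap a higher-scoring kept box.
# Boxes are (x1, y1, x2, y2, score); thresholds are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, conf_thresh=0.5, iou_thresh=0.45):
    boxes = [b for b in boxes if b[4] >= conf_thresh]
    boxes.sort(key=lambda b: b[4], reverse=True)   # highest score first
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(b)
    return kept

dets = [(10, 10, 50, 50, 0.9),    # two overlapping detections of one car...
        (12, 12, 48, 52, 0.8),
        (100, 100, 140, 140, 0.7)]  # ...and one distant detection
final = nms(dets)   # the overlapping pair collapses to the 0.9 box
```

The YOLO block on Yonohub handles this internally; the sketch just shows why a single CNN pass can still yield one clean box per object.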


Beyond Pixels Based Tracker

Tracking detected objects is an essential component of scene understanding, so a tracker is a natural companion to any camera-based object detector.

Beyond Pixels Tracker is a novel approach for multi-object tracking in road scenes. It takes the detections produced by YOLO (or any other object detector) and associates them across frames, assigning each object a consistent track.
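The actual Beyond Pixels method goes beyond 2D pixels by exploiting 3D shape and pose cues; as a much-simplified stand-in to illustrate the association idea, here is a minimal greedy tracker that matches boxes across frames by 2D IoU. All names and thresholds here are my own, not part of the Autoware block.

```python
# Simplified multi-object tracker: associate this frame's boxes with the
# previous frame's tracks by greedy IoU matching. NOT the Beyond Pixels
# algorithm (which uses 3D cues); just an illustration of data association.

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if inter else 0.0

class IoUTracker:
    def __init__(self, iou_thresh=0.3):
        self.iou_thresh = iou_thresh
        self.tracks = {}      # track ID -> last seen box
        self.next_id = 0

    def update(self, detections):
        """Assign each detection a stable track ID; returns [(id, box), ...]."""
        assigned, out = set(), []
        for det in detections:
            best_id, best_iou = None, self.iou_thresh
            for tid, box in self.tracks.items():
                if tid in assigned:
                    continue
                score = iou(det, box)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:          # no match: start a new track
                best_id = self.next_id
                self.next_id += 1
            assigned.add(best_id)
            self.tracks[best_id] = det   # update the track with the new box
            out.append((best_id, det))
        return out
```

Feeding each frame's detections (e.g. the YOLO block's output) into `update()` keeps the same ID on an object as it moves slightly between frames, which is exactly what the tracker block provides to the visualizer.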


Vision Pipeline on Yonohub

Let's build an Autoware-based vision pipeline that applies segmentation, YOLO object detection, and tracking. To do that, we'll need to:

Step 1: Setup the Project Files

For this pipeline to work properly, we'll need some files for each module. For the image data, we'll use a KITTI ROSBag, which provides camera images along with LiDAR, GPS, and IMU data; it can be obtained by following this tutorial.

For the segmentation module to work, we’ll need to download sample pre-trained files from here.

For the YOLO object detection block to work, we’ll need to download the weights and config files from here.


Step 2: Getting Autoware.AI Blocks from YonoStore

Autoware.AI blocks are available for free on YonoStore. Once you've purchased them, using them is a drag-and-drop process.

Blocks to purchase:

(KITTI Raw ROSBag — ENet Segmentation — YOLO — Beyond Track — Camera Objects Visualizer — Rviz)


Step 3: Building the Pipeline

The following video shows how to build and run the pipeline and visualize outputs:

Also, instead of creating the pipeline from scratch, you can clone this GitHub repository into the "/MyDrive/" folder to get the pipeline and its required project files. Make sure to follow the instructions on how to get the KITTI ROSBag and download the YOLO weights to be able to use the blocks properly.

Yonohub

Yonohub is the first cloud-based system for designing, sharing, and evaluating autonomous vehicle algorithms using just blocks. Yonohub features a drag-and-drop tool to build complex systems consisting of many blocks, a marketplace to share and monetize blocks, a builder for custom environments, and much more.


Get $25 free credits when you sign up now. For researchers and labs, contact us to learn more about Yonohub sponsorship options. Yonohub: A Cloud Collaboration Platform for Autonomous Vehicles, Robotics, and AI Development. www.yonohub.com

If you liked this article, please consider following us on Twitter at @yonohub, emailing us directly, or finding us on LinkedIn. I'd love to hear from you if I can help you or your team use YonoHub.


References

https://www.autoware.ai/

https://gitlab.com/autowarefoundation/autoware.ai

https://github.com/tomas789/kitti2bag

https://arxiv.org/abs/1606.02147

https://arxiv.org/abs/1802.09298

https://pjreddie.com/darknet/yolo/



