Waymo Open Dataset Player on Yonohub



Introduction

Waymo has been well known for its work on autonomous vehicles since 2009. In 2017 the company started a limited trial of a self-driving taxi service in Arizona, and in 2018 it launched a commercial self-driving car service called “Waymo One”. The reliability of its systems is remarkable: the vehicles are reported to have never been at fault in an accident while operating in autonomous mode.

In October 2018, Waymo celebrated 10 million miles of self-driving across its fleet.

In August 2019, Waymo released the Waymo Open Dataset, at the time the largest multimodal sensor dataset for autonomous driving.


Sensor Layout and Coordinate Systems

The dataset provides a variety of sensor outputs. The sensor configuration on Waymo’s autonomous vehicle is shown in the following figure:

[Image: sensor configuration on Waymo’s autonomous vehicle]

The dataset’s camera images are 1920x1280 pixels, with a horizontal field of view (HFOV) of ±25.2 degrees.

The LiDAR sensors in the Waymo Open Dataset output a huge number of points and cover the vehicle’s entire surroundings with almost no blind spots. The top LiDAR covers a vertical field of view (VFOV) from -17.6 to 2.4 degrees, has a range of 75 meters, and sweeps 360 degrees horizontally. The front, side-left, side-right, and rear LiDARs each cover a smaller area than the top LiDAR: a vertical field of view (VFOV) from -90 to 30 degrees and a range of 20 meters.
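
To make the data layout more concrete, below is a minimal sketch (not the player block’s code) of how one might open a Waymo Open Dataset record file and list the cameras and LiDARs in a single frame using the official Python API. It assumes TensorFlow 2.x and the waymo-open-dataset package are installed; the file name is a placeholder.

```python
# Minimal sketch: read one frame from a Waymo Open Dataset TFRecord file and
# list its sensors. FILENAME below is a hypothetical placeholder path.
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

FILENAME = 'segment-XXXX_with_camera_labels.tfrecord'  # placeholder

dataset = tf.data.TFRecordDataset(FILENAME, compression_type='')
for data in dataset.take(1):
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(data.numpy()))

    # Five camera images per frame: FRONT, FRONT_LEFT, FRONT_RIGHT, SIDE_LEFT, SIDE_RIGHT.
    for image in frame.images:
        decoded = tf.image.decode_jpeg(image.image)
        print(open_dataset.CameraName.Name.Name(image.name), decoded.shape)

    # Five LiDARs: TOP, FRONT, SIDE_LEFT, SIDE_RIGHT, REAR.
    for calibration in frame.context.laser_calibrations:
        print(open_dataset.LaserName.Name.Name(calibration.name))
```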


Dataset Analysis

It is not just raw sensor data, either. The dataset provides around 12.1 million 2D camera object bounding boxes with labels and tracking IDs, and around 9.8 million 3D laser object bounding boxes, also with labels and tracking IDs.
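
For readers curious what those labels look like in the raw data, here is a hedged sketch, assuming a frame parsed as in the previous snippet, of iterating over the 3D laser labels and the per-camera 2D labels.

```python
# Hedged sketch: iterate over the labels of a parsed frame (see previous snippet).
from waymo_open_dataset import label_pb2

# 3D laser labels: a 7-DOF box (center, size, heading), a class, a tracking ID,
# plus metadata such as speed and acceleration in the x and y directions.
for label in frame.laser_labels:
    box = label.box
    print(label.id,
          label_pb2.Label.Type.Name(label.type),
          (box.center_x, box.center_y, box.center_z),
          (box.length, box.width, box.height, box.heading),
          (label.metadata.speed_x, label.metadata.speed_y))

# 2D camera labels: grouped per camera; each box is a center plus a size in pixels.
for camera_labels in frame.camera_labels:
    camera = camera_labels.name  # which of the five cameras these labels belong to
    for label in camera_labels.labels:
        print(camera, label.id, label_pb2.Label.Type.Name(label.type),
              (label.box.center_x, label.box.center_y),
              (label.box.length, label.box.width))
```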


In addition, the dataset was recorded in different weather conditions, at different times of day, and at locations across San Francisco, Mountain View, and Phoenix. Vehicle poses and the transformation to each sensor are also provided with the dataset.
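
The pose and calibration data are stored as flattened, row-major 4x4 homogeneous transformation matrices. Here is a small sketch, again assuming a frame parsed as in the earlier snippets:

```python
# Small sketch: vehicle pose and per-sensor extrinsics of a parsed frame.
import numpy as np

# Vehicle pose for this frame: vehicle frame -> global frame.
vehicle_pose = np.array(frame.pose.transform).reshape(4, 4)

# LiDAR extrinsics: sensor frame -> vehicle frame.
for calibration in frame.context.laser_calibrations:
    extrinsic = np.array(calibration.extrinsic.transform).reshape(4, 4)
    print(calibration.name, extrinsic[:3, 3])  # sensor position on the vehicle

# Camera calibrations carry extrinsics plus intrinsics (focal lengths,
# principal point, and distortion coefficients).
for calibration in frame.context.camera_calibrations:
    extrinsic = np.array(calibration.extrinsic.transform).reshape(4, 4)
    intrinsic = list(calibration.intrinsic)  # [f_u, f_v, c_u, c_v, k1, k2, p1, p2, k3]
```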


It is a great resource for anyone working on autonomous vehicle development, especially if you don’t have the resources to collect this kind of data yourself.


Waymo on Yonohub

Yonohub is the first cloud-based system for designing, sharing, and evaluating autonomous vehicle algorithms using just blocks. Yonohub features a drag-and-drop tool to build complex systems consisting of many blocks, a marketplace to share and monetize blocks, a builder for custom environments, and much more.


I created a Waymo Dataset Player block on Yonohub to ease the development of autonomous vehicle technology, whether you want to benchmark your detection algorithm or work on 3D mapping, localization, image segmentation, etc. The Dataset Player block gives you access to:

  • Camera Images: Outputs five images, one from each of the five cameras around the vehicle.
  • LiDAR Data: Outputs five point cloud messages, one for each of the five LiDAR sensors.
  • 2D Object Bounding Boxes: For each of the five cameras, bounding boxes with labels are published for the detected objects.
  • 3D Object Bounding Boxes: For the LiDARs, bounding boxes with labels and metadata, such as speed and acceleration in the x and y directions, are published.
  • Vehicle Poses: For each frame, the vehicle pose is broadcast as a transformation from the odom frame to the base_link frame (a rough sketch of this follows after the list).
  • Sensor Transformations: For each frame, the sensor transformations are broadcast from base_link to each sensor frame.
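
As a rough illustration of the last two points, here is a hedged ROS1 sketch of broadcasting a vehicle pose as an odom -> base_link transform with tf2. The node name, rate, and the identity pose are illustrative assumptions; this is not the block’s actual implementation.

```python
# Hedged sketch: broadcast a 4x4 vehicle pose matrix as an odom -> base_link
# transform in ROS1. Node name and rate are illustrative, not the block's code.
import numpy as np
import rospy
import tf2_ros
import tf.transformations as tft
from geometry_msgs.msg import TransformStamped

def broadcast_vehicle_pose(broadcaster, pose_4x4, stamp):
    """Publish a 4x4 homogeneous pose matrix as an odom -> base_link transform."""
    msg = TransformStamped()
    msg.header.stamp = stamp
    msg.header.frame_id = 'odom'
    msg.child_frame_id = 'base_link'
    msg.transform.translation.x = pose_4x4[0, 3]
    msg.transform.translation.y = pose_4x4[1, 3]
    msg.transform.translation.z = pose_4x4[2, 3]
    qx, qy, qz, qw = tft.quaternion_from_matrix(pose_4x4)
    msg.transform.rotation.x = qx
    msg.transform.rotation.y = qy
    msg.transform.rotation.z = qz
    msg.transform.rotation.w = qw
    broadcaster.sendTransform(msg)

if __name__ == '__main__':
    rospy.init_node('waymo_pose_broadcaster')  # hypothetical node name
    broadcaster = tf2_ros.TransformBroadcaster()
    pose = np.eye(4)  # placeholder: would come from the dataset's frame.pose
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        broadcast_vehicle_pose(broadcaster, pose, rospy.Time.now())
        rate.sleep()
```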

Waymo Dataset Player pipeline on Yonohub:

Waymo Dataset Player pipeline running on Yonohub

Camera images with drawn bounding boxes in the dashboard:


LiDAR point cloud data as it appears in RViz:


A quick video tutorial walks through creating the same pipeline presented above and visualizing the Waymo data in less than 5 minutes.


References

Waymo Open Dataset paper: https://arxiv.org/abs/1912.04838

Waymo Open Dataset repository: https://github.com/waymo-research/waymo-open-dataset






