Deep Learning May Disrupt LiDAR in the Robo-Taxi Revolution of the 2030s
Michael Spencer
A.I. Writer, researcher and curator - full-time Newsletter publication manager.
Prediction: Deep Learning Will Disrupt LiDAR for the Robo-Taxi Revolution
My main place for writing articles on artificial intelligence is AiSupremacy, which you can subscribe to here.
https://aisupremacy.substack.com/subscribe
If you value my articles, subscribing is also a way to show community support. I never thought I'd say it, but those LiDAR sensors might not even be required in the future, say by around 2035. All the while, I'm actually bearish on Tesla's self-driving capabilities.
While LiDAR startups are all the rage today in 2022, LiDAR sensors might go out of style as robo-taxi technology and computer vision evolve further.
I follow trends in the autonomous vehicle (AV) industry very closely. LiDAR has been considered the leading sensor for self-driving vehicles in the 2015 to 2022 period. I think that by 2025 or 2027 there will be a new paradigm.
As the bottlenecks to fully autonomous vehicles (AVs) become apparent, I think deep learning will beat LiDAR systems.
With this increasing reliance on software comes an interesting shift away from highly specialized sensors like LiDAR, long a staple for robots operating in semi-structured and unstructured environments. Dozens of LiDAR and AV sensor companies are fueling the coming Robo-taxi revolution, but they could be made nearly obsolete by other deep learning startups.
The Waymo vs. Tesla debate will become more interesting. Most cars use a variety of cameras, ultrasonic sensors and radar to enable features like adaptive cruise control, parking assistance, automatic emergency braking, and blind spot monitoring; these allow the cars to “see” but are limited in terms of range and depth.
LiDAR stands for “Light Detection and Ranging” and describes a sensor technology that can create a map of the environment around it. On early self-driving vehicles, you could spot these sensors as spinning cylindrical units mounted on the car. However, even LiDAR augmentation may not be the final chapter in the Robo-taxi story.
We know that by 2035, most cars will be electric and have some degree of autonomy. Car ownership will have been partially disrupted by Robo-taxi fleets. While the advent of AVs has been slower than some expected, the realization is now that deep learning is the key. This is contrary to what people said even a few years ago.
LiDAR is Unlikely to Scale Very Well
Here I’m starting to become a contrarian: I think deep learning will simply scale better in that last and most difficult 1% of building further autonomy into cars. Projects like Google’s Waymo stumbled on that last one percent.
So how could deep learning disrupt the widespread use of LiDAR sensors? Modern deep learning machine vision models seem like magic compared to the technology of a decade ago. Any teenager with a GPU can now download and run object recognition libraries that would have blown the top research labs of that era out of the water.
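To make that concrete, here is a minimal sketch of the kind of off-the-shelf object detection anyone can run today. It assumes a Python environment with torchvision installed; the image path street_scene.jpg is a hypothetical placeholder, and the pretrained weights download automatically on first use.

```python
# Minimal sketch: off-the-shelf object detection with a pretrained model.
# Assumes torchvision is installed; "street_scene.jpg" is a placeholder path.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Load a pretrained detector and switch it to inference mode
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Read an image and scale it to the float [0, 1] range the model expects
img = convert_image_dtype(read_image("street_scene.jpg"), torch.float)

with torch.no_grad():
    prediction = model([img])[0]  # dict with "boxes", "labels", "scores"

# Keep only confident detections
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(label.item(), round(score.item(), 2), box.tolist())
```

A decade ago this level of detection accuracy took a research lab; now it is a download and a few lines of code, which is the scaling dynamic the argument here rests on.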
LiDAR Was Once Seen as the Holy Grail of Full Autonomy
LiDAR started gaining significant popularity in the early 2000s due to some groundbreaking academic research from Sebastian Thrun, Daphne Koller, Michael Montemerlo, Ben Wegbreit, and others that made processing data from these sensors feasible.
That research and experience led to the dominance of the LiDAR-based Stanley autonomous vehicle in the DARPA Grand Challenge (led by Thrun), as well as to the founding of Velodyne (by David Hall, another Grand Challenge participant), which produces what many now consider to be the de-facto autonomous car sensor.
In many cases, LiDAR has proven to be very much the right tool for the job. A dense 3D point cloud has long been the dream of roboticists and can make obstacle avoidance and pathfinding significantly easier, particularly in unknown dynamic environments.
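To illustrate why roboticists prize that point cloud, here is a toy sketch, not any production stack, of how a metric 3D cloud reduces “is something in my path?” to a simple filter. It assumes points arrive in the vehicle frame (x forward, y left, z up, in meters); the function name and thresholds are purely illustrative.

```python
# Toy sketch: with a metric 3D point cloud, obstacle checking is a filter.
# Assumes points are LiDAR returns in the vehicle frame:
# x forward, y left, z up, all in meters. Thresholds are illustrative.
import numpy as np

def obstacle_ahead(points, lane_half_width=1.0, max_range=10.0, min_height=0.2):
    """points: (N, 3) array of (x, y, z) returns."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_corridor = (x > 0) & (x < max_range) & (np.abs(y) < lane_half_width)
    above_ground = z > min_height  # drop road-surface returns
    return bool(np.any(in_corridor & above_ground))

# One synthetic return: 5 m ahead, 0.3 m to the right, 0.5 m off the ground
cloud = np.array([[5.0, -0.3, 0.5]])
print(obstacle_ahead(cloud))  # True
```

Getting the same answer from 2D cameras requires a learned depth or segmentation model rather than a geometric filter, which is exactly the trade-off the rest of this piece is about.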
What is the chink in LiDAR's armor that could disrupt it? As light photons bounce off objects within a warehouse, reflections and noisy returns can easily confuse robots operating under the direction of LiDAR. Furthermore, while LiDAR technology has decreased in cost over the years, it's still expensive.
At the end of the day, some researchers believe 2D machine vision in warehouse settings is cheaper, easier, and more reliable than LiDAR.
Machine learning should improve faster than we can make LiDAR less costly. Software beats hardware in the end, and the hardware that machine vision needs is cheaper to begin with.
Tesla’s own general computer vision system is an interesting bet on deep neural networks winning this problem. That is, if you can believe Tesla is also an A.I. company.
The race to scale AV Robo-taxis globally will hinge on cost, so cost reduction is rather important. Cameras are cheaper than LiDAR, and most LiDAR systems need cameras with fiducials anyway.
Machine learning's disruption of logistics and trucking is already here. Think about it: machine vision is already making an impact in logistics and fulfillment centers by automating rote tasks to increase the productivity of labor. It’s highly likely that it will be machine learning, and not LiDAR sensors, that changes the paradigm of transportation at scale.
What has transformed the e-commerce warehouse? Today, in many places in the world, fleets of mobile robots navigate the warehouse, performing key tasks like picking, replenishing, inventory moves, and inventory management. They do this without disruption and with machine-precision accuracy.
Robotics systems driven by computer (machine) vision are also removing barriers to adoption because of their affordability.
This kind of automation will eventually arrive in the AV revolution, likely simultaneously with the EV revolution. EV pioneers like Tesla therefore have a unique opportunity if they can be leaders in the AV revolution as well.
LiDAR is likely not on the way out. Those sensors will still be useful, but for how many more years?
I predict that by 2035 they may have been phased out by the increasing sophistication of deep learning in computer vision. We’ll be using a combination of cameras and other sensors that may realistically disrupt the LiDAR sensors we have today in 2022.