One more reflection from CES 2020
From Mobileye's CES 2020 presentation. (Gridlock is gridlock, regardless of how smart your car is.)


Now that the dust has settled and business cards have been sorted from CES 2020, I'll tell you what stood out to me: Mobileye's presentation.

Far from the most hyped name in AV, Mobileye grabbed my attention by doing three things:

  1. Flexing a bit with an impressive demo. This unedited ~20-minute drive through Jerusalem shows that they're taking L4/L5 work seriously.
  2. Sharing openly and specifically. More than many others, Mobileye is candid about their approach, shows you how the tech works, and lays out aggressive but specific forecasts.
  3. Reminding us that their business model works today, and tomorrow. Who else is shipping millions of ADAS systems to dozens of OEMs, in low-end and high-end vehicles, all while credibly incubating (admittedly still R&D-heavy) billion-dollar businesses like data services for smart cities and robo-taxis?

An impressive demo

If you have 22 minutes and 26 seconds to spare, you should watch this video.

First, the driving scene is hard: narrow streets, lots of pedestrians, heavily trafficked roads, unprotected turns, and roundabouts. YouTube abounds with rad driving clips, but that's often all they are: cherry-picked clips (a common critique in the industry). Mobileye's demo is worth watching (at 2x speed) because it's unedited and shot from many viewpoints, including a drone view of the start-to-finish ride.

Second, they accomplished this feat using only 12 cameras and two processors the size of postage stamps. Commercially, that matters because cameras are really cheap ("hundreds of dollars," as Mobileye CEO Amnon Shashua says). Technically, it's impressive because Mobileye managed to extract enough 3D information to drive without any 3D sensors (think ultrasonics, radar, lidar). You and I perceive depth when we look at a picture, but a picture is really just a 2D grid of red, green, and blue dots. Functionally, Mobileye isn't being parsimonious with sensors just for the sake (or cost) of it. During the presentation, Shashua says he believes, consistent with a broad (if not universal) consensus, that other expensive sensors (e.g. lidar) are required to make any fully autonomous solution robust enough to ship in the real world. Getting the camera-only drive to be good enough (meeting a specific "mean time between failures") is a big part of how Mobileye plans to prove its safety case. Which takes me to the next point.
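A quick aside before that next point, for intuition on how flat pixels can yield depth at all: the textbook stereo-triangulation relation below. This is classic geometry for illustration only, not a description of Mobileye's (more varied and more sophisticated) methods, and the camera parameters are made-up values.

```python
def depth_from_stereo(disparity_px, focal_length_px=1000.0, baseline_m=0.3):
    """Pinhole-stereo relation: depth = focal_length * baseline / disparity.
    A point that shifts by fewer pixels between two camera views (smaller
    disparity) is farther away. Camera parameters here are illustrative."""
    return focal_length_px * baseline_m / disparity_px

# A pedestrian whose image shifts by 20 px between two cameras 0.3 m apart:
print(f"{depth_from_stereo(20):.1f} m away")  # 15.0 m with these made-up numbers
```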

Mobileye shares specifics

Just as he did a year ago at CES 2019, CEO Amnon Shashua shared a comprehensive but digestible hour-long presentation going "under the hood" on why their approach makes sense and how their tech works.


The "why". Mobileye didn't unveil some brand-new framework at CES. About two years ago, Mobileye published Responsibility-Sensitive Safety (RSS), a set of standards that tries to make objective what good driving means. What is a "safe distance" to follow another car? What are "dangerous situations" on the road, and how should we expect vehicles to get out of them? The goal was to put assumptions on paper and develop consensus about good robot driving habits, codifying all of those unspoken rules of the road into algorithms everyone can see.
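To make that concrete, here is a minimal sketch of the flavor of one RSS rule, the minimum safe longitudinal following distance, based on the published RSS paper. The parameter values are my illustrative assumptions, not Mobileye's calibrated numbers.

```python
def rss_safe_longitudinal_distance(
    v_rear,            # speed of the rear (following) car, m/s
    v_front,           # speed of the front (lead) car, m/s
    rho=0.5,           # response time of the rear car, s (illustrative)
    a_accel_max=3.0,   # max acceleration during the response time, m/s^2 (illustrative)
    a_brake_min=4.0,   # minimum braking the rear car commits to, m/s^2 (illustrative)
    a_brake_max=8.0,   # maximum braking assumed for the front car, m/s^2 (illustrative)
):
    """RSS-style minimum safe following distance: assume the front car brakes
    as hard as possible while the rear car accelerates through its response
    time, then brakes only gently. If the current gap covers that worst case,
    the rear car is judged to be at a 'safe distance'."""
    v_rear_worst = v_rear + rho * a_accel_max  # rear car's speed after its response time
    d = (
        v_rear * rho
        + 0.5 * a_accel_max * rho ** 2
        + v_rear_worst ** 2 / (2 * a_brake_min)
        - v_front ** 2 / (2 * a_brake_max)
    )
    return max(d, 0.0)  # a safe distance can never be negative

# Both cars doing 50 km/h (~13.9 m/s): roughly a 25 m gap with these assumptions.
print(f"{rss_safe_longitudinal_distance(13.9, 13.9):.1f} m")
```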

The camera-only approach is part of a broader framework that Mobileye thinks will demonstrate their system is safe enough, which they define as going "2 hours a day for 10 years" of driving without a [non-safety-critical] error, and 10 million hours without a safety-critical error. If you can stitch together two independent-enough approaches that each have their own probability of failing, the argument goes, then the probability that both fail at the same time is the product of the two, a much smaller number. By the way, "2 hours a day for 10 years" is a vivid way of saying roughly 10^4 hours, i.e. a failure rate on the order of 10^-4 per hour, a far cry from the 10^-7 implied by the 10-million-hour target (a back-of-the-envelope sketch follows the list below). All at once, that expression highlights to me:

1- How hard the vision-only performance target is, once "10^-4" is translated into that vivid, layperson framing.

2- Even so, how far short that achievement would fall of the overall 10^-7 goal.

3- How critical the assumptions are (especially the near-independence of the camera-only and 3D-sensor-only approaches) that get you from 10^-4 to 10^-7.
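Here is that back-of-the-envelope arithmetic. The "2 hours a day for 10 years" framing and the 10-million-hour target come from the presentation; the matching ~10^-4 rate for the 3D-sensor subsystem and, crucially, the independence assumption are assumptions layered on top.

```python
# Back-of-the-envelope on the redundancy argument.
hours_between_failures_cameras = 2 * 365 * 10          # "2 hrs/day for 10 yrs" ~ 7,300 hours
p_fail_cameras = 1 / hours_between_failures_cameras    # ~1.4e-4 per hour, i.e. roughly 10^-4

p_fail_3d_sensors = 1e-4   # assumption: the radar/lidar subsystem hits a similar rate

# If (and only if) the two subsystems fail independently, the chance that
# both fail in the same hour is the product of the two rates.
p_fail_both = p_fail_cameras * p_fail_3d_sensors       # ~1.4e-8 per hour

target = 1 / 10_000_000    # 10 million hours between safety-critical failures, i.e. 1e-7

print(f"cameras alone: {p_fail_cameras:.1e}/hr")
print(f"both at once:  {p_fail_both:.1e}/hr  (target: {target:.0e}/hr)")
```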

Does putting common-sense, implicit driving rules into math formulas actually work? Do the stats they show compute? At least the approach is visible enough to invite critique.

The "how". Shashua spends 27 minutes explaining six neat computer vision tricks (running AI on pictures to extract information) for detecting objects on the road, and four approaches for turning 2D images into 3D, all of which enabled that demo ride. He's not handing out source code, but there's enough detail to show why they think the approach is clever. Walking through a few of the independent inputs, such as one CV algorithm just to identify wheels, one just to identify strollers, one to identify open car doors, and one that just extracts the road surface from the broader picture, helps build confidence in the robustness of their redundant approach.
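In spirit, that redundancy looks something like the sketch below: several narrow specialists whose outputs are pooled. The detector names and interfaces here are hypothetical placeholders for illustration, not Mobileye code.

```python
from typing import Callable, List, Tuple

# A Detection is (label, bounding box in pixels, confidence score).
Detection = Tuple[str, Tuple[int, int, int, int], float]

# Placeholder specialists; real detectors would be trained networks.
def detect_wheels(image) -> List[Detection]:
    return []  # e.g. [("wheel", (412, 300, 40, 40), 0.93)]

def detect_strollers(image) -> List[Detection]:
    return []

def detect_open_doors(image) -> List[Detection]:
    return []

def extract_road_surface(image) -> List[Detection]:
    return []

SPECIALISTS: List[Callable] = [
    detect_wheels,
    detect_strollers,
    detect_open_doors,
    extract_road_surface,
]

def detect_everything(image) -> List[Detection]:
    """Run every narrow specialist and pool the results. The redundancy
    argument: a general-purpose detector that misses a half-open car door
    can still be backstopped by the door specialist (and vice versa),
    because the detectors' failure modes differ."""
    detections: List[Detection] = []
    for specialist in SPECIALISTS:
        detections.extend(specialist(image))
    return detections
```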

Is it reasonable to create redundancy by layering on a bunch of object detectors? How does the 3D view created from stereo compare to the 3D view created from other sensors? At least the approach is visible enough to invite critique.


A business model for today and tomorrow

It's a popular, if unspoken, belief that the ADAS/feature market is a different game from the market for true autonomy (L4/L5). Going further, the AV-insider crowd would say, people won't own true AVs at all: the costs are too high, and for a host of individual and city incentives, private vehicle ownership is on its way out.

Mobileye, by contrast, is explicit that the path to autonomy runs through ADAS: building L2+ features facilitates the development of L4+ robo-taxis. A few years of managing capability- and geofence-limited robo-taxi fleets will in turn help spawn privately owned AVs, once Mobileye works out the kinks and gets the cost of the kit down to a publicly stated target of $5k per vehicle. Along the way, Mobileye plans to sell lucrative services enabled by the scale that ADAS gives them.

It's no secret that Mobileye dominates the ADAS market: LTM operating income was $245mm, representing a margin of 28% and YoY growth of 71%. Their expertise is in computer vision algorithms, and their secret sauce to date has been packaging CV solutions ("EyeQ") for OEMs whose core business is selling cars, not making bleeding-edge R&D investments. Mobileye has a broad installed base: 47 vehicle programs across 26 OEMs, including crème de la crème systems such as Cadillac Super Cruise. And the work they have to do anyway for L5, such as making HD maps, can be readily leveraged as an ADAS feature.

Mobileye has flagged 2022 as their marquee year for a robo-taxi launch, representing hundreds of vehicles across four incubating projects with partners such as Nio and VW. The breadcrumbs look credible -- the JV with VW promises to bring MaaS to the streets of Tel Aviv, with testing ongoing today -- but I'll maintain my skepticism toward these kinds of public, date-specific proclamations.

What's particularly interesting is Mobileye's ambition to generate revenue without moving passengers: selling static and eventually dynamic map data. In an (engaging!) Autonocast interview in Nov 2019, Mobileye VP Jack Weast described this as a multi-billion-dollar opportunity. Mobileye supposedly thinks it can eschew the traditional method of lidar-equipped vehicles driving deliberate routes to stitch together a 3D map, in favor of a bunch of picture tiles from forward-facing cameras. Through six "harvesting" agreements with OEMs, Mobileye will leverage its installed base of millions of vehicles to siphon picture tiles, about "1 kilobit per km," over cellular networks to "auto-magically" (yes, auto-magically) create a living HD map.
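For a rough sense of why "1 kilobit per km" makes crowd-sourced mapping feasible over cellular: only the per-km payload below comes from the presentation; the fleet size and daily mileage are illustrative assumptions of mine.

```python
KBIT_PER_KM = 1                  # stated upload payload per km driven
KM_PER_VEHICLE_PER_DAY = 40      # assumption: average daily driving distance
HARVESTING_FLEET = 1_000_000     # assumption: camera-equipped vehicles uploading tiles

bits_per_vehicle_per_day = KBIT_PER_KM * 1_000 * KM_PER_VEHICLE_PER_DAY
kb_per_vehicle_per_day = bits_per_vehicle_per_day / 8 / 1_000             # ~5 KB per car per day
gb_fleet_per_day = HARVESTING_FLEET * bits_per_vehicle_per_day / 8 / 1e9  # ~5 GB across the fleet

print(f"per vehicle: ~{kb_per_vehicle_per_day:.0f} KB/day")
print(f"whole fleet: ~{gb_fleet_per_day:.0f} GB/day")
```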


Accepting that "magic" at face value, that distributed, tile-by-tile surveying infrastructure would be a tremendous asset for a variety of products beyond driving: cities, retailers, and who knows who else.

Cities could use such data for evidence-based planning, including construction traffic monitoring. Per Weast's reasoning, they could answer questions like "Where should we build crosswalks?" by asking "Where are people jaywalking?" Retailers could decide where to invest in new stores or when to invest in promotions by asking "What does the foot traffic look like outside my stores in X and Y?"


Links to references and related reading:

The RSS conceptual explanation is here, but if you don't feel like reading, Mobileye VP Jack Weast explains it in this YouTube video.

This is the Autonocast interview with Jack Weast.

This is the CES 2020 presentation slides and video.

Here are some other reactions to Mobileye's CES presence: Ars Technica, The Verge, VentureBeat, TechCrunch.

And maybe it's my banker roots, but I find dispassionate filings grounding amidst the presentation hype: financial reports for Q4/FY 2019 (ending Dec 28).

Kit Cutler

Supply Chain and Operations Leader

5y

Great article! I also found Jack Weast's Autonocast interview very interesting. Thanks for adding this additional analysis!

Gloria Yi Qiao, JD, MBA

Entrepreneur, deal maker, closet geek, day dreamer, world traveler, proud mom of two. Exploring the next big thing!

5y

Great article Ross! Appreciate you taking the time to reflect on what you saw at CES (vs. just browsing :P)

Jessica Zeng Horvitz

Social Impact & Renewable Energy Investments Principal at Google

5y

Very cool!
