What is needed for autonomous driving vehicles: an analogy

Cars that operate autonomously.

Scary? Dangerous? Sci-fi? Or practical as fuck, and only hindered by (rapidly advancing) technological limitations and human fear?

In Garry Kasparov's book "Deep Thinking", I read that around 1946 we started using self-operating lifts instead of needing a person to operate them for us — even though we'd had the technology to do so for about four-and-a-half (!) decades prior.

The reason? People wouldn't get on a lift if a machine was operating it. Way too dangerous!

I don't know about you, but to most people I speak to, that sounds pretty ridiculous nowadays.

Could the same be the case with autonomous driving vehicles..?

Granted: operating in the complicated dynamics of traffic is different from moving a metal box up and down a dedicated shaft.

And granted: this is supposed to be a short story using autonomous driving vehicles as an analogy.

So let's not get bogged down in the technical details of traffic.

For a moment, let's assume we've got the basic minimum computing and sensing technology needed to make this possible. Let's move back to first principles of what is needed for autonomous driving vehicles to be useful and safe. So, what are the things we still need for the use of autonomous driving vehicles to make any sense?

Where are we going?

Imagine getting in a car and not knowing where it's going to take you. While maybe fun in some niche applications, that's not what I (and most others) mean when talking about autonomous driving cars.

An autonomous vehicle still needs to be told where you want to go.

And you can't really be vague about it either. Tell a car you want to go to "Rotterdam" and you'll probably end up there, but a lot of walking might still be involved to get to where you actually want to end up. (Let alone if you're already in Rotterdam: then it might actually take you further away from your destination.)

It needs a clear instruction.

Now, there is a caveat:

With enough data, computing power and insight (and maybe some networking), you might be able to give details other than specific geographic coordinates.

Imagine instead saying: "Take me to an Italian restaurant, no more than 20 minutes from home. They need to have a table for two available when we get there, and serve gluten-free options. I want it to have at least a 4.5/5 rating on Google. Preferably somewhere with a view, but it's ok if that's not an option."

Now that would be cool. More complicated. But cool.

However, I wouldn't soon risk asking to just be taken to "An Italian".

You've got to be clear about where you want to be going.

To quote Yogi Berra: "If you don't know where you are going, you might wind up someplace else."

Autonomous driving vehicles need a clear goal.

The more descriptive you can be about the details that are relevant to you, the better the vehicle will be able to do its job.
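As a toy illustration: that restaurant request could be expressed as a set of hard constraints (must hold) plus a soft preference (nice to have). A minimal sketch in Python, with entirely made-up restaurant data and field names:

```python
# Hypothetical data — illustrative only, not a real restaurant API.
restaurants = [
    {"name": "Trattoria Roma", "cuisine": "italian", "minutes_from_home": 15,
     "table_for_two": True, "gluten_free": True, "rating": 4.7, "view": False},
    {"name": "Bella Vista", "cuisine": "italian", "minutes_from_home": 25,
     "table_for_two": True, "gluten_free": True, "rating": 4.9, "view": True},
]

def meets_hard_constraints(r):
    # The non-negotiable parts of the request.
    return (r["cuisine"] == "italian"
            and r["minutes_from_home"] <= 20
            and r["table_for_two"]
            and r["gluten_free"]
            and r["rating"] >= 4.5)

candidates = [r for r in restaurants if meets_hard_constraints(r)]
# Soft preference: prefer a view, but don't require one.
candidates.sort(key=lambda r: r["view"], reverse=True)
best = candidates[0] if candidates else None
print(best["name"] if best else "No match")  # prints: Trattoria Roma
```

Note the split: hard constraints filter, soft preferences only rank. That's exactly the difference between "no more than 20 minutes" and "preferably somewhere with a view".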

Destination inputted. Mutual understanding achieved. Great - let's go.

Except: that's of course not all we need for this to be a sensible adventure.

There's rules.

Knowing where to go is essential, but there are limitations and binding prescriptions on what we can do to get there. Enter: rules of traffic.

Being an autonomous driving vehicle doesn't mean being fully autonomous in deciding which actions to perform to get to the goal. In the 'how' of getting there, it needs to stick to some pretty essential agreements.

Some of these are restrictive: things not allowed.

Some of these are prescriptive: things that must be done.

Some might be legal requirements (like stopping at a red light), and some might be user-decided (like 'take the most eco-friendly route possible with a max of 20mins travel time').

But the autonomous driving vehicle needs rules-of-play on top of a goal to get to.

Then you can trust it to figure out how to get there autonomously. Right?

Well, almost.

Goal: clearly provided.

Rules: clearly provided.

Route: trust the vehicle.

But...

You can't predict traffic: Sense & Adapt

As I hope you know: engaging in traffic isn't a 'set & forget' exercise. Unpredictable shit happens. Traffic isn't just complicated (lots of moving parts), it's also complex (hard to pre-calculate all movements). Because, you know, people.

The vehicle needs two more things: it needs sufficient sensors of all types to interpret what's going on around it (e.g. pedestrians) and inside of it (e.g. battery levels). And it needs rules and strategies in place to adapt its actions and route to the real-time developments around it.

The more processable data it has, the better it will be able to execute its route to the goal, while staying within the rules and instructions delivered.

Autonomous Operation

So those are the first principles for autonomous operation:

  • Goal | Clearly tell it where you are trying to get to. The clearer on the relevant details, the better. And the more trust you have in its power of interpretation, the more abstract those details can be (coordinates vs type of restaurant).
  • Rules | Establish all relevant limitations on the 'how' of getting there. Restrictive and prescriptive. Some you might need to provide as the user. Some might (should?) come pre-loaded. Once clearly established, trust it to plan the 'how' of getting there.
  • Sense & Adapt | Make sure it has all the sensors and interpretive power to make sense of developments as they happen. It needs to constantly measure relevant internal and external signals, and adapt based on the rules provided to get to the goal.
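The three principles above can be sketched as a tiny control loop. This is a purely illustrative Python sketch — no real vehicle API, all names and values invented:

```python
# Goal: where we're trying to get to. Rules: limits on the 'how'.
GOAL_POSITION = 100
RULES = {"max_speed": 50}

def sense(state):
    # Read internal and external signals (here: a fake obstacle
    # that appears on the second tick).
    state["obstacle_ahead"] = state["tick"] == 1
    return state

def adapt(state):
    # Adapt actions to what was sensed, while staying within the rules.
    state["speed"] = 0 if state["obstacle_ahead"] else RULES["max_speed"]
    return state

state = {"tick": 0, "position": 0}
log = []
while state["position"] < GOAL_POSITION:
    state = sense(state)
    state = adapt(state)
    state["position"] += state["speed"]
    log.append(state["speed"])
    state["tick"] += 1

print(log)  # prints: [50, 0, 50]
```

The point of the sketch: the goal and the rules are fixed inputs, while sensing and adapting run continuously inside the loop. Nothing here plans the route in advance — the behaviour emerges from constantly measuring and adjusting.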

That's what we mean when talking about autonomous operation. Not just independently doing random shit, of course.

And if you set aside the reflex to be a little scared (think: autonomous lifts), that all sounds rather dreamy, doesn't it?

But: I promised this was an analogy.

People & teams

If I haven't done a complete hatchet job, the above first principles sound like pretty common sense.

The same goes for Autonomous Operation of teams. Autonomous Teams.

People and teams of people (and teams of teams of people) can do a lot autonomously if you stick to first principles.

Autonomous teams need a clear goal or purpose. No clear goal or too vague instructions? You might end up where you didn't want to go.

Autonomous teams need clear restrictions and prescriptions. Limit as much as needed to feel comfortable in a safe-enough process (and not a millimetre more). The more trust in the pre-loaded settings of the people, the less details needed.

Autonomous teams need to have the tools and (access to) the relevant data to evaluate progress and changes, and actively monitor and adapt their approach.

Got all these things in place? Then you might as well trust the teams to get shit done without constantly keeping your hands on the steering wheel.

Who knows: maybe you've been manually driving a brilliant autonomous vehicle this whole time.

Will you get all the details right the first time? Probably not. (If you do, call me! I want to learn your magic ways.) Sense & Adapt. Refine. Improve.

Goal. Rules. Sense & Adapt.

Autonomous operation is pretty similar for vehicles and teams.

The big difference?

A team of people is a lot less likely to drive into an old lady if the rules aren't sufficiently formulated. "Safe enough to try" is a lot more within reach when working with people.

And if you don't trust that that's the case in your situation, you've got a rather different problem...

---------


I'm Roshan de Jong. I help organisations build systems that scale their impact. Working on something cool? Say 'hi!'
