AI data needs ethical sourcing, EU releases legislation draft for AVs, ADAS struggles with collision tests, and Tesla gets sued over insurance premiums
Dear Reader,
It seems 2022 is the year it has finally become acceptable to acknowledge political and ethical challenges on LinkedIn as well. I welcome this change: business doesn’t exist in a vacuum, and the consequences of our actions extend to other parts of life and the world - so we need to be aware of our responsibilities and actually conduct business in accordance with the values we claim to hold dear.
Having just joined the space of AI-based perception for ADAS and AVs, I recently became aware of one ethical challenge in particular - let’s dive right in:
---
How the AI industry profits from catastrophe - via MIT Technology Review
ADAS/AV perception systems based on machine learning/deep learning (ML/DL) require lots of annotated real-world data, both for training and for validation. Producing such “ground truth” data requires manual labor, called data labeling or annotation, which is typically sourced from low-cost economies.
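To make it concrete what this labor actually produces: a single annotated camera frame might carry records roughly like the sketch below. This is a simplified, hypothetical format of my own, not any provider's actual schema - and keep in mind that training and validation sets consist of millions of such frames, each drawn and checked by human annotators.

```python
# Simplified, hypothetical example of what "ground truth" annotation looks like.
# Every labeled object below represents manual work by a human annotator.
annotated_frame = {
    "frame_id": "cam_front_000142",
    "objects": [
        {"label": "car",        "bbox_xyxy": [412, 220, 610, 355], "occluded": False},
        {"label": "pedestrian", "bbox_xyxy": [120, 240, 165, 390], "occluded": True},
        {"label": "cyclist",    "bbox_xyxy": [700, 230, 760, 340], "occluded": False},
    ],
}

# A few hand-labeled boxes per frame, times millions of frames,
# is what the data labeling workforce described in the article delivers.
print(f"{len(annotated_frame['objects'])} hand-labeled objects in this frame")
```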
This article is a depressing, infuriating read about the exploitation of data annotation workers, and it highlights the risks of “crowdsourcing” such labor outside social security structures and reliable wages. If you are involved in choosing ground truth providers for ML/DL training and validation, you should not shy away from reading it.
It's worth keeping in mind that there is no magic wand in procurement: Pushing costs lower and lower doesn't magically make things cheaper - it might mean someone else is paying a high price.
Compliance rules and codes of conduct can help - if they have actual power over operational decisions, even (and especially) when following them comes at a cost. And, to quote my colleague Ester Svensson at Annotell on enforcing them: "We must constantly strive to 'know what we don't know' and investigate whether reality matches the conditions we are trying to create."
---
EU Releases ADS Legislation Draft - via EE Times
The European Union is working on legislation to allow automated driving systems (ADS) on the roads of its member states.
The current draft includes quite a lot of detail about the approach regulators are considering, with 10 pages of ADS performance requirements. It also describes five traffic scenarios that need to be mastered to gain approval - including normal driving with demonstrated anticipatory behavior when interacting with other road users, critical scenarios with sudden obstacles to detect and collision risks to avoid, and failure scenarios requiring the system to perform a minimum-risk maneuver to reach a safe condition.
Cybersecurity, data logging/event recording and software management are also addressed by the proposed legislation - the article gives a pretty good overview of it all and includes a link to the original document.
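As a rough mental model of the scenario families described above - a toy sketch of my own, not logic or terminology taken from the draft - you can think of the system choosing between modes along these lines:

```python
from enum import Enum, auto

class DrivingMode(Enum):
    NOMINAL = auto()                 # normal driving, anticipating other road users
    CRITICAL = auto()                # sudden obstacle / imminent collision risk
    MINIMUM_RISK_MANEUVER = auto()   # degraded system, bring the vehicle to a safe condition

def select_mode(system_healthy: bool, obstacle_detected: bool) -> DrivingMode:
    """Toy decision logic mirroring the scenario families named in the draft."""
    if not system_healthy:
        return DrivingMode.MINIMUM_RISK_MANEUVER
    if obstacle_detected:
        return DrivingMode.CRITICAL
    return DrivingMode.NOMINAL

# A failure scenario should always end in a minimum-risk maneuver.
print(select_mode(system_healthy=False, obstacle_detected=True).name)  # MINIMUM_RISK_MANEUVER
```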
---
In contrast, current ADAS functions are still tested in pretty basic ways - including the use of foam models on proving grounds. One such test was recently conducted by AAA, with Hyundai, Subaru and - to some degree - Tesla failing to brake for oncoming dummy vehicles.
Beyond the outcome of this particular test and these particular systems, it got me thinking about the way we conduct ADAS testing today, and whether it will need to be reinvented tomorrow:
Automotive-grade systems are typically trained on (and validated against) loads of data, real and synthetic. One important detail for perception is teaching the vehicle to tell a real car from a "fake" one: For example, you don't want your system to brake for an image of a car front on the back of a trailer, or on a billboard ad - such “phantom braking” maneuvers would be a safety risk themselves.
So, OEMs and their suppliers try their best to make perception good enough that it can discern between real cars and fakes. But what will happen when they eventually become so good they recognize test dummies as dummies - and thus “fail” to classify them as cars, as the test design would require? Do we dumb them down to pass a test scenario, which might worsen their performance on the real road? Or do we need to get better at lying to cars and create more realistic test conditions? Curious to hear from others on this.
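To illustrate the dilemma with a toy sketch - the class names and structure below are my own, not any production stack's API: if a perception system learns to hand only “real” vehicles to the braking logic, a convincing foam dummy may well end up in the ignored bucket, which is exactly what the test design penalizes.

```python
# Hypothetical post-processing step in a perception stack: only detections
# classified as real vehicles (with sufficient confidence) become braking targets.
def is_brake_relevant(detection: dict) -> bool:
    return detection["cls"] == "vehicle" and detection["confidence"] > 0.5

detections = [
    {"cls": "vehicle",          "confidence": 0.93},  # real car ahead
    {"cls": "depicted_vehicle", "confidence": 0.88},  # car photo on a billboard or trailer
    {"cls": "vehicle",          "confidence": 0.41},  # foam test dummy the model half-trusts
]

for d in detections:
    print(d["cls"], "-> brake target" if is_brake_relevant(d) else "-> ignored")
```

The better the system gets at rejecting the billboard, the more likely it is to also reject the foam dummy - and thereby fail the test while arguably doing the right thing.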
---
Tesla owner suing over false Forward Collision Warnings that impact his Safety Score and insurance premiums - via Drive Tesla Canada
Storytelling time: In my first-ever sales job, one of the key metrics for performance was the number of calls we made to target companies. If you were below a set threshold at the end of the day, you’d have to explain yourself.
One effect of that rule was that salespeople started documenting every single phone call that connected, regardless of who picked up: secretaries, receptionists, non-stakeholders - everyone who answered got put through a pitch and had a call documented in the CRM, since it counted toward the threshold and helped save you a talking-to at the end of the day.
Waste of everyone’s time and resources? You got that right - the lesson learned here is what’s called Goodhart's Law:
"When a measure becomes a target, it ceases to be a good measure."
The same might be challenging Tesla, who offer insurance premiums based on a driver’s Safety Score. This score can be negatively affected by what a lawsuit (and other sources) claim are incorrectly triggered forward collision warnings and the resulting phantom braking. The resulting rise in premium costs incentivizes drivers to put more miles on their vehicles than they ordinarily would, to bring the average number of incidents per distance traveled back down and thus lower the amount they need to pay.
Not only does this artificially create unnecessary risk for the insurance provider (more distance driven equals more opportunities for accidents), it also leads to a waste of energy, electric or not.
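A toy calculation makes the incentive obvious. This is a simplified, single-factor illustration of my own, not Tesla's actual Safety Score formula:

```python
# Goodhart's Law in action: if the metric is forward collision warnings (FCW)
# per 1,000 miles, adding benign miles improves the score without the driver
# becoming any safer. (Illustrative numbers, not Tesla's actual formula.)
def fcw_rate(warnings: int, miles: float) -> float:
    return warnings / miles * 1000

warnings = 4               # e.g. falsely triggered forward collision warnings
base_miles = 800
extra_benign_miles = 1200  # miles driven mainly to dilute the rate

print(f"before: {fcw_rate(warnings, base_miles):.1f} FCW per 1,000 miles")                       # 5.0
print(f"after:  {fcw_rate(warnings, base_miles + extra_benign_miles):.1f} FCW per 1,000 miles")  # 2.0
```

The driver hasn't become any safer; the metric has just been diluted.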
---
That’s it for this month - I hope you enjoyed the read! As always, any thoughts you might want to share are welcome in the comments.
All the best
Comment from Tom Dahlström, Director - Research and Consultancy at CAVT Ltd:
Hi Tom! Re: the Tesla owner suing over false FCWs ... I have long had a problem with who decides what safe driving looks like - and to whom. A few years back, an analysis in the UK motor insurance sector, mentioned by Matthew Avery of Thatcham Research, found that ratings for drivers based on steering, speeding, acceleration and braking characteristics (e.g. with a black box) did not correlate with claims experience: brisk, alert, safe drivers scored worse than dithering and hesitant drivers. A bit of a generalisation, but you can probably relate to that. So who decides the characteristics of an ADS?