Autonomous Vehicles: How To Punish Bad Driving
Autonomous Vehicles: Fail to Prepare, Prepare to Fail (Part 6)
Background: In November 2018, the Law Commission published a preliminary consultation paper reviewing the options for regulating automated road vehicles when they start to arrive in the UK over the next few years. Over the coming weeks I will be summarising the key proposals of the consultation to show that critical thinking is well underway and that the UK is at the forefront. The conversation on this topic is live, and the Law Commission is welcoming responses until February 2019 (https://www.lawcom.gov.uk/project/automated-vehicles/).
This piece considers what should happen when a driving offence is ‘committed’ by an autonomous vehicle, explores the responsibilities of the “user-in-charge” during the transition to Level 5 autonomous vehicles, and highlights gaps in current legislation with regard to death and serious injury.
How do you punish an Autonomous Vehicle for driving badly?
Many offences, such as exceeding speed limits and driving dangerously, arise from the way a vehicle is driven. Historically, such offences were clearly the responsibility of the human driver. However, the review suggests that when an autonomous vehicle is engaged and conducting the entire dynamic driving task, complying with traffic law should be the legal responsibility of the ‘Automated Driving System Entity’ (ADSE) and not the human user of the vehicle. To this end, legislation may need amending to explicitly exclude “users-in-charge” of a vehicle in autonomous mode from being treated as drivers, as the current legal concept of a driver is quite flexible.
But what should happen if a vehicle driving itself conducts a manoeuvre which would amount to an offence if done by a human? Government policy dictates that automated driving systems should observe road traffic standards, but infractions may still occur. The Australian National Transport Commission (NTC) has commented that current road traffic penalties seek to influence human behaviour and that a new enforcement system is required for autonomous vehicles. The NTC recommends that every Automated Driving System (ADS) be backed by an Automated Driving System Entity which would be subject to sanctions if things go wrong, and the review proposes a similar system.
Speeding represents a good example of how the process might work: although an ADS would be programmed not to break speed limits, this may still happen for various reasons. Currently, a “notice of intended prosecution” would be sent to the owner. If the vehicle was in autonomous mode, the owner would provide relevant data to the police to show that they were not driving, and the police would investigate. If the problem was caused by faulty software, the review proposes that the issue be referred to the new ‘safety assurance agency’, which could then impose appropriate sanctions on the ADSE, such as fines, suspension or withdrawal of ADS approval.
The User-In-Charge: retaining continuity of some responsibilities in the transition
Many offences, such as driving while unqualified or unfit, or using an uninsured vehicle, do not arise from the way a vehicle is driven. The review proposes that the “user-in-charge” should take on these responsibilities in much the same way as drivers have done historically.
A user-in-charge may be called upon to drive, either following a handover or after an autonomous vehicle has achieved a minimal risk condition and stopped. As such, they need to be qualified, fit and insured to drive the vehicle. The review proposes that the “user-in-charge” should be in a position to operate the controls, which for the immediate future equates to being in the driving seat, and that this will allow them to be identified. As vehicle design evolves, the concept of a driving seat will also change, and this will be revisited in a future consultation.
The review also recommends that legislation be amended so that users-in-charge are responsible for insurance, roadworthiness, complying with police and traffic officer directions, duties following an accident, and ensuring that children wear appropriate restraints.
In the long term, automated driving systems will develop to the point of operating without a user-in-charge. A future consultation will consider the legal obligations that will be difficult to discharge in the absence of a user-in-charge, such as duties following an accident and complying with the directions of a police officer.
Death or Serious Injury: Holding the right people to account
There are currently eight criminal offences of causing death or serious injury through driving; however, these offences would not apply in the absence of a human driver. The consultation paper analysed how far the offence of manslaughter would apply where wrongdoing associated with autonomous vehicles causes a death.
The review identified two gaps in the law: death or serious injury caused by interference with roads or vehicles (e.g. painting over road markings or tampering with sensors), and wrongdoing within the organisation that developed the system.
Interfering with roads or vehicles is prohibited under Section 22A of the Road Traffic Act 1988, which covers interfering with a motor vehicle, traffic signs or other equipment. Since there has only ever been one manslaughter conviction for breaching Section 22A, it is uncertain whether it provides a sufficient basis for unlawful act manslaughter. The review seeks views on whether a new offence of ‘causing death or serious injury by wrongful interference with vehicles, roads or traffic equipment’ should be introduced.
The review also considers potential wrongdoing that could occur in organisations developing or manufacturing automated driving systems, such as: falsely claiming to have conducted tests, suppressing poor test results, installing ‘defeat device’ software to perform better in tests, and disabling safety-critical features. In the case of death, a corporate manslaughter prosecution could be brought if the way an organisation’s “activities are managed or organised” causes a person’s death and amounts to a gross breach of a duty of care, with failings by senior managers representing a substantial element.
There are several key problems with bringing a corporate manslaughter prosecution. First, the offence applies only to death and not serious injury. Secondly, the requirement for failings by senior managers to be a substantial element shields large multinational companies with complex management structures. It is unsurprising that most companies convicted of corporate manslaughter have been SMEs, where directors are involved in day-to-day decision-making. The review seeks views on introducing new corporate offences to close these potential gaps.
Closing Thoughts
The proposed shift in liability for offences from human users to the manufacturers and developers of automated vehicles is likely to fundamentally alter the liability landscape. Manufacturers have long been responsible for ensuring their products are safe and reliable, but automated vehicles may represent one of the first platforms where those products can ‘commit offences’ and the manufacturer is held to account.