The 'Driverless' Car Era: Liability Considerations; COMPLEX LITIGATION


New York Law Journal

November 13, 2017 Monday


Copyright 2017 ALM Media Properties, LLC. All Rights Reserved. Further duplication without permission is prohibited.


Section: EXPERT ANALYSIS; Pg. 3, col. 1; Vol. 258; No. 92


Byline: Michael Hoenig



Most readers have heard something about the advent of so-called "self-driving" or "driverless" cars. Some of the more technical terms used by safety regulators, scientists, the motor vehicle industry and others are "automated" or "autonomous" vehicles (AVs) and "connected" cars (i.e., cars that feature vehicle-to-vehicle communications, or "V2V," as well as vehicles that can "communicate" with infrastructure, or "V2I"). In such vehicles, depending on the level of car autonomy, some (or even all) of the traditional driver's tasks are handled by features built into the vehicle.



How is this possible? Through huge advances in computer technology (hardware and software), artificial intelligence, sensors, cameras, radar, and mirrors, the car itself can be transformed into a platform "intelligent" enough to "self-drive" safely. At the highest levels of car autonomy, the "driver" can, in effect, be transformed into a "passenger," now free to do things other than drive. A while back, the Society of Automotive Engineers (SAE) identified six levels of car autonomy, functional categories since adopted by the National Highway Traffic Safety Administration (NHTSA), the federal agency that regulates car safety by promulgating safety standards, policing industry compliance, identifying defects and ordering recalls.


The six levels of automation proceed from Level 0 (no autonomy; the human driver performs all driving tasks), to Level 1 (driver assistance; the driver controls the vehicle, but some driving-assist features built into the vehicle design may at times help with steering or braking/accelerating), to Level 2 (partial automation; the car has combined automated functions, but the driver must remain engaged and monitor the environment at all times).


Levels 3 to 5 are much more advanced when it comes to incorporating autonomous features. In Level 3, an Automated Driving System (ADS) can itself perform all aspects of the driving task under some conditions but the human driver must be ready to take back control at any time the ADS requests the driver to do so. In all other circumstances, the driver performs the driving task. In Level 4, the automated system can itself perform all driving tasks (and monitor the environment) in certain circumstances. The human driver need not pay attention in those circumstances. At Level 5, the vehicle's automated system can do all the driving in all circumstances. The human occupants are just passengers and need never be involved in driving.
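For readers who find a code analogy helpful, the taxonomy above can be captured in a few lines of Python. The sketch below is purely illustrative; the class and function names are invented for this column and are not drawn from any SAE or NHTSA specification. It simply encodes the six levels and the point, made above, that at Levels 0 through 2 the human must continuously monitor the environment, while from Level 3 upward that continuous duty falls away.

from enum import IntEnum

class SAELevel(IntEnum):
    # Hypothetical encoding of the six SAE automation levels described above.
    NO_AUTOMATION = 0            # human driver performs all driving tasks
    DRIVER_ASSISTANCE = 1        # driver controls the car; some steering or braking assist
    PARTIAL_AUTOMATION = 2       # combined automated functions; driver must monitor at all times
    CONDITIONAL_AUTOMATION = 3   # ADS drives under some conditions; driver retakes control on request
    HIGH_AUTOMATION = 4          # ADS drives and monitors the environment in certain circumstances
    FULL_AUTOMATION = 5          # ADS drives in all circumstances; occupants are passengers

def driver_must_monitor(level: SAELevel) -> bool:
    # At Levels 0-2 the human must monitor the environment at all times;
    # at Level 3 and above that continuous duty ends (though Level 3 still
    # requires readiness to take back control when the ADS requests it).
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))      # True
print(driver_must_monitor(SAELevel.CONDITIONAL_AUTOMATION))  # False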


NHTSA, the safety agency, has been pushing the industry, in stages, to reach the self-driving era expeditiously. In September 2016 the agency issued its vision for "highly automated vehicles" (HAVs) in the form of a "Federal Automated Vehicles Policy," along with an "Enforcement Guidance Bulletin on Safety-Related Defect and Automated Safety Technologies." The two publications are not binding on the industry but furnish activity guidelines the agency expects will be followed. Readers can readily access these items via the NHTSA.gov website. Congressional committees, too, have acted with haste: House and Senate draft bills have already proposed legislation regarding regulation of AVs. These drafts emerged amidst heavy lobbying by interest groups (including lawyers).


Lifesaving Benefits




Why this push? Why the rush toward autonomous vehicles? From the regulatory perspective the paramount rationale is safety benefits. Automated vehicles' potential to save lives and reduce injuries is rooted in traffic facts: 94 percent of serious crashes are due to human error; more than 35,000 persons died in motor vehicle-related crashes in the United States in 2015; and more than 2.4 million injuries occur per year. Removing accident-causing human error from the traffic equation would yield lifesaving benefits protecting drivers, passengers, pedestrians and bicyclists.


NHTSA also foresees additional economic and social benefits. A study showed that motor vehicle crashes in 2010 cost $242 billion in economic activity, including $57.6 billion in lost workplace productivity, plus some $594 billion attributable to loss of life and decreased quality of life due to injuries. Were the vast majority of motor vehicle crashes eliminated, such costs could be erased.


NHTSA also believes that highly automated vehicles will smooth traffic flow and reduce traffic congestion. Americans spent an estimated 6.9 billion hours in traffic delays in 2014, cutting into time at work or with family and increasing fuel costs and vehicle emissions. A recent study stated that automated vehicles could free up some 50 minutes each day previously dedicated to driving. Further, self-driving vehicles may provide new mobility options to millions more Americans: there are some 49 million Americans over age 65 and some 53 million with some form of disability who could benefit. One study suggested that automated vehicles could create new employment opportunities for approximately two million people with disabilities.


Automated vehicles will have to react to other vehicles' movements on the roadway. As autonomous vehicles proliferate, their V2V "communications" will help inform appropriate safety-related maneuvers. In effect, the cars will "talk to each other." Similarly, since roadway infrastructure, signage, traffic controls, utility poles, guardrails, etc., are part of the environment, "connectivity" of automated vehicles to roadway signals and controls will have to be implemented. This will come at considerable cost. Nevertheless, many states have begun to take concrete steps to join the AV world. This means, for example, revising motor vehicle codes, allowing automated vehicles to be tested on actual roadways and devising insurance programs that satisfy the needs of the new AV era.


One huge complexity is that automated vehicles will, for a substantial period of time, have to share roadways with enormous populations of vehicles that are not automated. In effect, the AV cars will not be "connected" to Level 0 vehicles. Accordingly, even if the potential for human error is largely eliminated in the AV cars, it will persist in the scores of millions of non-AV cars and trucks. We can therefore expect a sharper focus on the non-AV driver as a potential accident-causing liability target. Similarly, because the new technology likely will be viewed with suspicion by many, we can expect the owner of the AV car or the vehicle manufacturer (and suppliers of the hardware and software) to be targets for suit.


Space limitations here preclude a detailed discussion of the complicated liability and regulatory picture in the brave new world of AVs. (See generally, e.g., M.A. Geistfeld, "A Roadmap For Autonomous Vehicles: State Tort Liability, Automobile Insurance And Federal Safety Regulation," 105 California L. Rev. 101 (2017); S.P. Wood, et al., "The Potential Regulatory Challenges of Increasingly Autonomous Vehicles," 52 Santa Clara L. Rev. 1423 (2012); M.I. Krauss, "What Should Tort Law Do When Autonomous Vehicles Crash?," Forbes (April 7, 2017); J. Villasenor, "Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation," Brookings Institution (April 2014); "Autonomous Car Liability," Wikipedia). Therefore, only some abbreviated highlights regarding liability considerations are identified here. Indeed, there also are serious policy questions as to whether traditional liability doctrines ought to fully control when an emerging technology that promises vast lifesaving and injury-preventing benefits is in its early or interim stages.


Liability Considerations




Imposing, at the outset, crushing liability costs or explosive class action exposures upon manufacturers of AVs or suppliers of their software could stunt the development and improvement of self-driving vehicles. That could threaten the lifesaving benefits that motivated NHTSA to push for the new technology frontier in the first place. Moreover, steep liability costs early on could force manufacturers to pass those costs to their purchasers, many of whom will decline to absorb the increased purchase price, thereby dooming the broad infusion of AVs onto U.S. roadways and frustrating the overall objective of saving lives. Therefore, some scholars and experts have suggested that federal preemption of certain kinds of lawsuits should govern at least the early stages of AV use. Alternatively, a victim compensation fund could be established by Congress, much like the compensation program created by the National Childhood Vaccine Injury Act of 1986, when liability concerns threatened public health by jeopardizing access to vaccines. A third approach is to adopt a no-fault insurance program. See V. Schwartz, "Driverless Cars: The Legal Landscape" (June 14, 2017) (Panel 3: Liability & Insurance, in C. Silverman, et al., "Torts of the Future," etc. (U.S. Chamber Inst. for Legal Reform 2017)).


To begin with, let's understand that self-driving cars will rely on algorithms that program the vehicle to "optimize" its "decision-making" when confronted with a given set of circumstances. Some scenarios inevitably will force the automated vehicle to "choose" between avoiding or minimizing injury to the driver and killing or injuring others through the AV's automated evasive maneuvers. For example, suppose the AV confronts a truck speeding head-on toward it in a school zone. At the same time, school children come running out of school and congregate on the sidewalk. If the AV stays where it is or merely brakes, the truck likely will smash the car and kill or maim the driver. If the AV instead maneuvers to the right to evade the truck, it will mount the sidewalk and kill or injure many students.
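To make the abstraction concrete, consider a deliberately oversimplified sketch, in Python, of the kind of harm-minimizing "choice" just described. Every maneuver name, harm estimate and weight below is hypothetical, invented solely for illustration; no manufacturer's actual decision logic is public. The sketch shows only that someone must select the weights that trade the occupants' safety against bystanders' safety, and that selecting those weights is precisely the ethical and legal question taken up next.

# All numbers are invented for illustration; real automated-driving systems
# weigh far richer sensor data and are vastly more complex.
CANDIDATE_MANEUVERS = {
    # maneuver: (estimated harm to occupants, estimated harm to bystanders)
    "brake_in_lane": (0.9, 0.0),  # the speeding truck likely strikes the AV
    "swerve_right":  (0.1, 0.8),  # the AV mounts the sidewalk near the children
    "swerve_left":   (0.4, 0.3),  # partial evasion into the oncoming lane
}

def choose_maneuver(occupant_weight: float, bystander_weight: float) -> str:
    # Pick the maneuver with the lowest weighted expected harm. The weights
    # embody the ethical choice discussed in the text: how strongly the
    # algorithm favors the AV's own occupants over people outside it.
    def cost(harms):
        occupant_harm, bystander_harm = harms
        return occupant_weight * occupant_harm + bystander_weight * bystander_harm
    return min(CANDIDATE_MANEUVERS, key=lambda m: cost(CANDIDATE_MANEUVERS[m]))

print(choose_maneuver(occupant_weight=1.0, bystander_weight=1.0))  # least overall harm: swerve_left
print(choose_maneuver(occupant_weight=5.0, bystander_weight=1.0))  # favors occupants: swerve_right

Note that the chosen maneuver changes with nothing more than the weights; in hindsight, counsel for whoever was harmed can attack whichever weighting the programmers selected.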


This scenario (numerous others can be hypothesized) presents an ethical quandary for the AV programmers. Should the AV's algorithms be designed always to favor what is best for the driver (and the other car occupants), or should the algorithm "choose" the course of least overall harm, including harm to those outside the AV? Indeed, will purchasers readily buy an AV that may treat its owner unfavorably? NHTSA has called for industry members and others to coordinate transparently on such ethical dilemmas and come to a consensus. However, we can visualize the legal "field day" the school children's lawyers would have in court. Programmed AV "behavior" decisions can easily be criticized by lawyers in hindsight. Arguably, were NHTSA to approve algorithmic AV "decision-making" when such ethical quandaries are presented, that regulatory approval ought to immunize the car maker from claims that second-guess the algorithmic choice.


The self-driving car era will trigger a host of other liability considerations. Should routine products liability rules (design and manufacturing defect, warnings, warranty, misrepresentation, consumer fraud) apply to such advanced, software-intensive technology? Certainly, we can expect that the consumer must be fully informed and warned about the product and its limitations. Other questions abound. Will pre-trial discovery into complex technical, trade-secret topics become a quagmire that drives up litigation costs? After a crash between AV and non-AV vehicles, how is liability, if any, to be apportioned? Will individual trials become too complex, too lengthy and too expensive for court systems to handle en masse?


Then there will be liability concerns and challenges about the vulnerability of AVs to hacking, invasions of privacy and cybercrime by third parties. Anti-hacking specialists have demonstrated that the multiplication of computer portals can allow bad actors to "break into" or "take over" operative functions of an AV and cause damaging or injurious mischief. Similarly, since AV features will include advanced electronic communication capabilities, car occupants are likely to become vulnerable to privacy breaches and cybercrime incursions. Will such threats trigger loads of individual litigations and class actions? Will traditional liability rules handle such challenges, or will the rules have to adapt to the premise that AV technology, for all its miracles, may not be perfect? Perhaps some legal slack will have to be given in exchange for preserving the lifesaving and other benefits envisioned by NHTSA.

MICHAEL HOENIG is a member of Herzfeld & Rubin.


