Autopilot - In Everything
By: Raymond Brogan on LinkedIn

Is the beloved everyday computer setting humans up for failure?

"As we continue to let automation fly our planes, carry out security checks and assemble products. The Automated cars were once things that we only saw in the movies, and now is becoming reality as you read this article. So a question that human should ask... "is your reliance on automation erasing the existence of your skills ?"

When a sleepy Alex Rooney walked into the "Flight Control Station" (my editor explained that "cockpit" is now a sexually offensive term in communications and writing. But I don't care who I offend; this is America, the last time I checked.) of the jumbo-jet he captained, he was confronted with a scene of confusion. The jumbo-jet was shaking so violently that it was hard to read the instruments. An alarm was alternating between a high-pitched blip and an automated voice repeating “STALL STALL STALL.” His junior co-pilots were at the controls. In a calm tone, Captain Rooney asked: “What’s the situation?”

Co-pilot David Swanson’s answer was less calm; there was the sound of panic in his voice. “We have completely lost control, we don’t understand anything! We tried everything!”

The crew, however, did in fact have control of the jumbo-jet. One simple course of action could have ended the crisis entirely, and they had NOT tried EVERYTHING. But Co-pilot David Swanson was right about one thing: he didn’t understand what was happening.

As William Langewiesche, a writer and professional pilot, described in an article he wrote for Vanity Fair in October 2014 (* at the bottom of this article), Air France Flight 447 had begun straightforwardly enough: an on-time take-off from Rio de Janeiro at 7.29pm on 31 May 2009, bound for Paris. (The names from that article have been changed in this article only.)

To become a commercial airline pilot or co-pilot – or even just to be seated in the cockpit – takes years of training and many more hours of experience in actual flight time. But like every person on this planet, the three pilots had their vulnerabilities. Pierre-Cédric Ronin, 32, was young and inexperienced. David Swanson, 37, had more experience, but he had recently become an Air France manager and no longer flew full-time. Captain Alex Rooney, 58, had experience aplenty, but he had been touring Rio with an off-duty flight attendant. It was later reported that he had only had an hour’s worth of rest.

Fortunately, despite their vulnerabilities, the crew was in charge of one of the most advanced jumbo-jets in the world: an Airbus A330, legendarily smooth and easy to fly. Like any other modern aircraft, the A330 has an autopilot to keep the plane flying on a programmed route; however, it also has a much more sophisticated automation system known as "fly-by-wire". A traditional jumbo-jet gives the pilot direct control of the plane's control surfaces – its rudder, elevators and ailerons. This means the pilot has some room to make mistakes. "Fly-by-wire" is smoother and safer. It inserts itself between the pilot, with all his or her faults, and the plane’s mechanics. A tactful translator between human and machine, it observes the pilot tugging on the controls, figures out how the pilot wants the plane to move and executes that maneuver perfectly. It will turn a clumsy movement into a graceful one. The "Rolls-Royce" of the industry, if you will.

This makes it very hard to crash an A330, and the plane had a superb safety record: there had been no crashes in commercial service in the first 15 years after it was introduced in 1994. But the paradox of such greatness is the risk that comes with building a plane that protects pilots so purposefully from even the tiniest error: when something challenging does occur, the pilots will have very little knowledge or experience to draw on as they try to meet that challenge.

Ronin seemed nervous. The slightest hint of trouble produced an outburst of swearing: “Putain la vache. Putain!” – the French equivalent of “Effing cow. Eff!” More than once he expressed a desire to fly at “3-6” – 36,000 feet – and lamented the fact that Air France procedures recommended flying a little lower. While it is possible to avoid trouble by flying over a storm, there is a limit to how high a plane can go. The atmosphere becomes so thin that it can barely support the aircraft, and margins for error become very slim: the plane will be at risk of "stalling". In a stall, the nose is pitched up at too steep an angle, the wings no longer function as wings and the aircraft no longer behaves like an aircraft. It loses airspeed and falls gracelessly in a nose-up position. If there is enough altitude, the aircraft will eventually "nose down" and can safely recover. If not, at lower altitudes, the aircraft will simply slam into the "deck" – the aeronautics term for the earth or ground – tail to belly. At times, aircraft have even been known to go into a "flat spin", provided the conditions for such a spin exist.

Fortunately, a high altitude provides plenty of time and space to correct a stall. The recovery maneuver is fundamental to learning how to fly any fixed-wing aircraft: the pilot pushes the nose of the plane down and into a dive, the diving plane regains airspeed, and the wings once more work as wings. The pilot then gently pulls out of the dive and into level flight once more. For rotor-wing aircraft, the process is a little more complicated.

Continuing with the story... as the plane approached the storm, ice crystals began to form on the wings. Ronin and Swanson switched on the anti-icing system to prevent too much ice building up and slowing the plane down. Swanson nudged Ronin a couple of times to pull left, avoiding the worst of the weather.

"In a time of fear and desperation, you MUST always remain calm, for panick is always the factor to imminent death. "

And then an alarm sounded. The autopilot disconnected. An airspeed sensor on the plane had iced over and stopped functioning, which required the pilots to take control. Then something else happened at the same time and for the same reason: the "fly-by-wire" system downgraded itself to a mode that gave the pilots less help and more latitude to control the plane. Lacking an airspeed sensor, the plane was unable to babysit Ronin, so to speak.

The first consequence was almost immediate: the plane began rocking right and left, and Ronin overcorrected with sharp jerks on the stick. And then Ronin made a simple mistake: he pulled back on his control stick, and the plane immediately started a very steep climb.

As the nose of the aircraft rose and it started to lose speed, the automated voice barked out: “STALL STALL STALL.” Despite the warning, Ronin kept pulling back on the stick, and in the black skies above the Atlantic, the plane climbed at an astonishing rate of 7,000 feet per minute. The jumbo jet's airspeed was diminishing; it would soon begin to slide down through the storm and towards the water, 37,500 feet below. Had either Ronin or Swanson realized what was happening, they could have fixed the problem, at least in its early stages. But they did not. Why?

The source of the problem was the system that had done so much to keep A330s safe for 15 years, across millions of miles of flying: the fly-by-wire. Or more precisely, the problem was not fly-by-wire, but the fact that the pilots had grown to rely on it. Ronin was suffering from a problem called mode confusion. Perhaps he did not realize that the plane had switched to the alternate mode that would provide him with far less assistance. Perhaps he knew the plane had switched modes, but did not fully understand the implication: that his plane would now let him stall. That is the most plausible reason Ronin and Swanson ignored the alarm – they assumed this was the plane’s way of telling them that it was intervening to prevent a stall. In short, Ronin stalled the aircraft because in his gut he felt it was impossible to stall the aircraft.

Aggravating this confusion was Ronin’s lack of experience in flying a plane without computer assistance. While he had spent many hours in the cockpit of the A330, most of those hours had been spent monitoring and adjusting the plane’s computers rather than directly flying the aircraft. And of the tiny number of hours spent manually flying the plane, almost all would have been spent taking off or landing. No wonder he felt so helpless at the controls.

The Air France pilots “were hideously incompetent”, wrote William Langewiesche, in his Vanity Fair article. And he thinks he knows why.

Langewiesche argued that the pilots simply were not used to flying their own airplane at altitude without the help of the computer. Even the experienced Captain Rooney was rusty: of the 346 hours he had been at the controls of a plane during the past six months, only four were in manual control, and even then he had had the help of the full fly-by-wire system. All three pilots had been denied the ability to practice their skills because the plane was usually the one doing the flying.

This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships. How many of you readers can still do a simple task that has been taken from you by the device known as the mobile phone? The skill? Remembering your best friend's, your parents' or anyone's phone number, for that matter. The fact is that we – and I am sure I speak for the majority – can no longer remember phone numbers because we have them all stored in the contacts of our cell phones, and we struggle with mental arithmetic because we are surrounded by electronic calculators, which are also on our cell phones.

"The better the automatic systems, the more out-of-practice human interactions will become,  thus, are more extreme the situations we are faced with."

The psychologist James Reason, author of Human Error**, wrote: “Manual control is a highly skilled activity, and skills need to be practiced continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practicing these basic control skills … when the manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”

The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skillful response. A more capable and reliable automatic system makes the situation worse.

There are plenty of situations in which automation creates no such paradox. A customer service web page may be able to handle routine complaints and requests so that staff are spared repetitive work and may do a better job for customers with more complex questions. Not so with an airplane. Autopilots and the more subtle assistance of fly-by-wire do not free up the crew to concentrate on the interesting stuff. Instead, they free up the crew to fall asleep at the controls, figuratively or even literally. One notorious incident occurred late in 2009 when two pilots let their autopilot overshoot Minneapolis airport by more than 100 miles. They had been looking at their laptops.

When something goes wrong in such situations, it is hard to snap to attention and deal with a situation that is very likely to be bewildering.

His nap abruptly interrupted, Captain Rooney arrived in the cockpit 1min 38secs after the airspeed indicator had failed. The plane was still above 35,000 feet, although it was falling at more than 150 feet a second. The de-icers had done their job and the airspeed sensor was operating again, but the co-pilots no longer trusted any of their instruments. The plane – which was now in perfect working order – was telling them that they were barely moving forward at all and were slicing through the air down towards the water, tens of thousands of feet below. But rather than realizing the faulty instrument was fixed, they appear to have assumed that yet more of their instruments had broken. Rooney was silent for 23 seconds – a long time if you count them off. Long enough for the plane to fall 4,000 feet.

It was still not too late to save the plane – if Rooney had been able to recognize what was happening to it. The nose was now so high that the stall warning had stopped – it, like the pilots, simply rejected the information it was getting as anomalous. A couple of times, Ronin did push the nose of the aircraft down a little and the stall warning started up again – STALL STALL STALL – which no doubt confused him further. At one stage he tried to engage the speed brakes, worried that they were going too fast – the opposite of the truth: the plane was clawing its way forward through the air at less than 60 knots, about 70 miles per hour – far too slow. And it was falling twice as fast. Utterly confused, the pilots argued briefly about whether the plane was climbing or descending.
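Those speeds are worth pausing on, because the units are easy to mix up. A quick sanity check of the conversions, sketched in Python using only the figures quoted in the account above:

```python
# Unit conversions for the speeds quoted in the account above.
KNOT_TO_MPH = 1.15078      # 1 knot = 1.15078 statute miles per hour
FEET_PER_NM = 6076.12      # feet in one nautical mile

# "less than 60 knots, about 70 miles per hour"
forward_kt = 60
print(round(forward_kt * KNOT_TO_MPH))      # 69 mph -- "about 70"

# "It was falling twice as fast": twice 60 knots, expressed as a sink rate.
sink_kt = 2 * forward_kt
sink_fps = sink_kt * FEET_PER_NM / 3600     # knots -> feet per second
print(round(sink_fps))                      # 203 ft/s
```

A sink rate of roughly 200 feet per second squares with the earlier description of the plane falling at more than 150 feet a second.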

Ronin and Swanson were shouting at each other, each trying to control the plane. All three men were talking at cross-purposes. The plane had its nose up but was losing altitude, and rapidly.

Swanson: “Your speed! You’re climbing! Descend! Descend, descend, descend!”

Ronin: “I am descending!”

Rooney: “No, you’re climbing.”

Ronin: “I’m climbing? OK, so we’re going down.”

Nobody said: “We’re stalling. Put the nose down and dive out of the stall.”

At 11.13pm and 40 seconds, less than 12 minutes after Rooney first left the cockpit for a nap, and two minutes after the autopilot switched itself off, Swanson yelled at Ronin: “Climb … climb … climb … climb …” Ronin replied that he had had his stick back the entire time – the information that might have helped Rooney diagnose the stall, had he known.

And then suddenly, the penny seemed to drop for Rooney, who was standing behind the two co-pilots. “No, no, no … Don’t climb … no, no.”

Swanson announced that he was taking control and pushed the nose of the plane down. The plane began to accelerate at last. But he was about one minute too late – that’s 11,000 feet of altitude. There was not enough room between the plummeting plane and the black water of the Atlantic to regain speed and then pull out of the dive.
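That "one minute too late – that's 11,000 feet" figure implies a sink rate, and it is easy to check against the rest of the account. A back-of-envelope sketch in Python, using only numbers taken from the story above:

```python
# "One minute too late -- that's 11,000 feet of altitude."
altitude_lost_ft = 11_000
seconds = 60
print(round(altitude_lost_ft / seconds))   # 183 ft/s implied sink rate

# Cross-check: Rooney's 23 seconds of silence, at the "more than 150 feet
# a second" quoted earlier, costs roughly the 4,000 feet the account describes.
print(150 * 23)                            # 3450 ft
```

The two figures are consistent with each other and with the roughly 37,500 feet the plane lost in its final few minutes.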

In any case, Ronin silently retook control of the plane and tried to climb again. It was an act of pure panic. Swanson and Rooney had, perhaps, realized that the plane had stalled – but they never said so. They may not have realized that Ronin was the one in control of the plane. And Ronin never grasped what he had done. His last words were: “But what’s happening?”

Four seconds later the aircraft hit the Atlantic at about 125 miles an hour. All 228 lives on board Air France Flight 447 – passengers and crew – perished instantly.

Air France Flight 447 was a scheduled passenger flight from Rio de Janeiro, Brazil to Paris, France, which crashed on 1 June 2009. The Airbus A330, operated by Air France, entered an aerodynamic stall from which it did not recover and crashed into the Atlantic Ocean at 02:14 UTC, killing all 228 passengers and crew aboard the aircraft.

This incident happened in June of 2009. Since then, driverless cars have entered the test phase and have even been tested on the streets in real driving situations. For the most part, all was going well... well, until...

" IT WAS always going to happen, the question was how would the world react?"

It was announced early in the morning of July 1, 2016 that a Tesla Model S had been involved in a fatal crash during which the Autopilot mode of the vehicle was activated.

The accident occurred on a highway in northern Florida when a tractor-trailer drove perpendicular across the highway into the path of the Tesla, which was driving itself.

Tesla, which said the driver was ultimately responsible for the vehicle’s actions even when in Autopilot mode, said both the driver and the car failed to notice the tractor-trailer “against a brightly lit sky” and the brakes failed to kick in.

In a company blog post detailing the accident, Tesla said the National Highway Transportation Safety Administration (NHTSA) had been informed of the accident and was conducting an investigation into the crash.

Tesla CEO Elon Musk tweeted the blog post offering his “condolences for the tragic loss.”

The driver of the Tesla was 40-year-old Joshua Brown, the owner of a technology company, who nicknamed his vehicle “Tessy” and had praised its sophisticated “Autopilot” system just one month earlier for preventing a collision on an interstate, as seen in the video below.

And yet the truck continued into the lane... the nerve of some people; it makes you wonder if their brain was replaced with an autonomous device.

While details about the accident were only recently reported, it took place on May 7.

The cameras on Mr. Brown’s car had failed to distinguish the white side of a turning tractor trailer from a brightly lit sky and didn’t automatically activate its brakes, according to a government report obtained by the Associated Press.

Frank Baressi, 62, the driver of the truck, said the Tesla driver was “playing Harry Potter on the TV screen” at the time of the crash and driving so quickly that “he went so fast through my trailer I didn’t see him.”

“It was still playing when he died and snapped a telephone pole a quarter mile down the road,” he said.

Tesla Motors Inc. said it is not possible to watch videos on the Model S touch screen, and Mr. Baressi acknowledged he couldn’t see the movie; he only heard it playing.

"Self-driving cars could make our roadways safer, but they also introduce new privacy and legal concerns."

I wanted to share this article and image by Nicole Blake Johnson. Nicole is a social media journalist for the CDW family of technology magazines.

She writes :

The thought of driverless cars cruising freely on the open road is both exciting and terrifying.

Autonomous cars could bring faster commutes, fewer crashes and greater fuel savings to motorists. So far, California, Florida, Michigan, Nevada and Washington, D.C., allow testing of driverless cars. If you’re curious to know how these cars operate on the road, The Washington Post chronicled the adventures of an autonomous car on the streets of D.C.

But like any developing technology, autonomous cars come with potential pitfalls. There are privacy, software and legal concerns. For example, who is liable if the car gets into an accident? Is the software that powers these driverless cars vulnerable to hacks?

Here’s an infographic from auto parts retailer CJ Pony Parts that weighs both the pros and cons of having self-driving cars on our roadways.

Pretty well laid out and nicely formatted, don't you agree?

Now there is the issue of insurance.

So I did some investigating and found some pretty good information that will make you either understand or have more concerns all depending on how you see that glass of water...

THE TOPIC

Each new generation of cars is equipped with more automated features and crash avoidance technology. Indeed, many of today’s high-end cars and some mid-priced ones already have options, such as blind-spot monitoring, forward-collision warnings and lane-departure warnings. These will be the components of tomorrow’s fully automated vehicles. At least one car manufacturer has promised to have fully automated cars available by the end of the decade.

Except for the expectation that the number of crashes will be greatly reduced, the insurance aspects of this gradual transformation are at present unclear. However, as crash avoidance technology gradually becomes standard equipment, insurers will be able to better determine the extent to which these various components reduce the frequency and cost of accidents. They will also be able to determine whether the accidents that do occur lead to a higher percentage of product liability claims, as claimants blame the manufacturer or suppliers for what went wrong rather than their own behavior. Liability laws might evolve to ensure autonomous vehicle technology advances are not brought to a halt. Change is happening, and it is now. (in my opinion)

RECENT DEVELOPMENTS

  • In January 2016 U.S. Transportation Secretary Anthony Foxx released a new policy updating the National Highway Traffic Safety Administration's (NHTSA) 2013 preliminary policy statement on autonomous vehicles. In March 2016 the agency also announced a 10-year $3.9 billion commitment to support the development and adoption of safe vehicle automation. According to a statement, NHTSA will propose guidance to industry on establishing principles of safe operation for fully autonomous vehicles in mid-2016. A link at the bottom of this article marked with a + (Plus symbol) will direct you to that article.
  • Nevada was the first state to allow the use of autonomous vehicles in 2011. Since then, five other states—California, Florida, Michigan, North Dakota and Tennessee—and Washington, D.C., have passed autonomous vehicle legislation. Sixteen states introduced legislation related to autonomous vehicles in 2015, up from 12 states in 2014, nine states and D.C. in 2013, and six states in 2012.
  • In 2015 Tesla Motors Inc. activated its Autopilot mode, which allows autonomous steering, braking and lane switching. In July 2016 the first fatality from an autonomous vehicle was reported. The National Highway Traffic Safety Administration is investigating what role, if any, the Tesla Model S Autopilot technology had in a Florida collision between the vehicle and a tractor-trailer. Tesla said Autopilot sensors failed to detect the truck turning in front of the Model S against a bright sky. The crash killed the vehicle’s owner.
  • An Insurance Information Institute Pulse survey conducted in May 2016 found that 55 percent of consumers say that they would not ride in an autonomous vehicle. Earlier polls found that 50 percent said that a driverless car’s manufacturer should bear responsibility in case of an accident, and only 25 percent said that they would be willing to pay more for a driverless car to cover the manufacturer’s liability in case of an accident.
  • According to the Insurance Institute for Highway Safety, it is anticipated that there will be 3.5 million self-driving vehicles by 2025, and 4.5 million by 2030. However, the institute cautioned that these vehicles would not be fully autonomous, but would operate autonomously under certain conditions.
  • On June 6, 2016, a Google prototype autonomous vehicle (Google AV) was involved in a minor collision with no injuries. On June 15, 2016, a Google AV was rear-ended with no injuries.
  • A study by the Insurance Institute for Highway Safety (IIHS) has found that improvements in design and safety technology have led to a lower fatality rate in accidents involving late model cars. The likelihood of a driver dying in a crash of a late model vehicle fell by more than a third over three years, and nine car models had zero fatalities per million registered vehicles. Part of the reason for the lower fatality rate might also stem from the weak economy, which led to reduced driving, the IIHS said.
  • The study, which looked at fatalities involving 2011 model year cars over a year of operation, found that there was an average of 28 driver deaths per million vehicle car years through 2012, down from 48 deaths for 2008 model cars through 2009. Eight years ago there were no models with a zero death rate.
  • The IIHS attributed the lower death rate to the adoption of electronic stability control, which has reduced the risk of rollovers, and to side airbags and structural changes that improve occupant safety. However, the IIHS said, there was a wide gap between the safest and the least safe models, with the riskiest cars mostly small lower cost models.
  • General Motors will offer a Super Cruise system with hands-free automated driving on freeways that have proper lane markings by 2016. However, drivers will have to be ready to take over control of the vehicle, and cars will be fitted with a device designed to alert the driver to pay attention even during highway driving. Toyota said it plans to offer crash-avoidance technology in Toyota and Lexus models by 2017. Daimler is now offering a system on certain models that allows a car to brake, accelerate and remain in its lane without human intervention at speeds of under 16 miles an hour.
  • According to the Google Self-Driving Car Project, as of June 2016, there were 24 Lexus RX450h SUVs on the road and 34 other prototype vehicles. 1,725,911 miles were driven autonomously, and 1,158,921 miles were driven in manual mode.
  • A survey by IEEE, a technical professional organization dedicated to advancing technology for humanity, of more than 200 experts in the field of autonomous vehicles found that of six possible roadblocks to the mass adoption of driverless cars, these three were ranked as the biggest obstacles: legal liability, policymakers, and consumer acceptance. Cost, infrastructure, and technology were seen as less of a problem. When respondents were asked to specify the year in which some of today’s commonplace equipment will be removed from mass-produced cars, the majority said that rear view mirrors, horns and emergency brakes will be removed by 2030, and steering wheels and gas/brake pedals will follow by 2035.
  • In February 2014 federal agencies approved vehicle-to-vehicle (V2V) communications systems that will allow cars to “talk” to each other so that they know where other vehicles are and can compensate for a driver’s inability to make the right crash avoidance decisions because of blind spots or fast moving vehicles. V2V communication uses a very short range radio network that, in effect, provides a 360-degree view of other vehicles in close proximity. The Department of Transportation estimates that safety systems using V2V communications will be able to prevent 76 percent of crashes on the roadway.
  • A study of the benefits of self-driving vehicles by the RAND Corporation, released in 2016, includes a discussion of liability insurance options. The study, “Autonomous Vehicle Technology: A Guide for Policymakers,” explores the benefits, drawbacks and risks of autonomous vehicle use. According to the study, manufacturer liability is likely to increase, while personal liability is likely to decrease. Benefits include less driver error (and so, hopefully, fewer vehicle crashes) and better mobility for those otherwise impaired; drawbacks include an unquantified impact on occupations and economies based on public transit and crash repair. A named risk is inconsistent state regulation.

BACKGROUND

Self-driving cars are definitely on the way, but it may be some time before we are all being transported by fully automated vehicles.

Most accidents are caused by human error, so if this factor can be minimized by taking control of the moving vehicle away from the driver, the accident rate should tumble. Data from the Insurance Institute for Highway Safety (IIHS) and Highway Loss Data Institute (HLDI) already show a reduction in property damage liability and collision claims for cars equipped with forward-collision warning systems, especially those with automatic braking. The exact percentage varied depending on the car manufacturer.

Among the major automakers testing self-driving cars are Audi, Ford, Mercedes, Nissan, Toyota and Volvo. The cars have some ability to travel without the driver intervening but only in certain situations, such as low speed stop-and-go highway traffic. Slow speeds give the car’s computers more time to process information and react.

Experts vary as to when the changeover to self-driving cars will occur. A transport scholar at the University of Minnesota believes that by 2030 every car on the road will be driverless. Driverless shuttles are already being tested on some university campuses in Europe.

An automotive study by IHS, a global information company, titled “Emerging Technologies: Autonomous Cars—Not If But When” forecasts that self-driving cars that include driver control will be on highways around the globe before 2025 and self-driving “only” cars by 2030. Nearly all of the vehicles in use are likely to be self-driving cars or self-driving commercial vehicles sometime after 2050, it says. The study notes two major technology risks, software reliability and cyber-security.

We do not yet know how the driving public will react to the vehicles that come on the market. For most drivers there will be a steady progression from a minimally or semi-automated car to the next level. A Status Report from HLDI suggests that it could take as long as three decades for 95 percent of all registered cars to be equipped with crash avoidance systems. Forward-collision warning systems have been available since 2000, HLDI says, and if they follow their current trajectory, they will not be available in most cars until 2049.

In addition, some people who enjoy driving and do not want control to be taken from them may resist the move to complete automation. Already there are some who say they refrain from using the cruise control feature because they prefer to maintain control themselves.

The risk of an accident is unlikely to be completely removed since events are not totally predictable and automated systems can fail. In addition, the transition from hands-off driving to hands-on promises to be tricky.

The need for drivers to control the car in an emergency is fraught with questions, not just those involved in the automotive technology. What kind of training will people need to safely handle these semi-autonomous vehicles? How well prepared will drivers be to handle emergencies when the technology returns control to the driver? How will beginning drivers gain the necessary experience and how will experienced drivers stay sharp enough when they are only infrequently called upon to react?

Autonomous cars have been compared to airplanes on auto-pilot. But while a pilot and a driver both need to be able to make split-second decisions, there are likely to be fewer times when this skill is called upon in a plane than in a car and, in addition, the pilot is highly trained in how to interact with the automated system.

The Impact on Insurance

Some aspects of insurance will be impacted as autonomous cars become the norm. There will still be a need for liability coverage, but over time the coverage could change, as suggested by the 2014 RAND study on autonomous vehicles, as manufacturers and suppliers and possibly even municipalities are called upon to take responsibility for what went wrong. RAND says that product liability might incorporate the concept of cost benefit analysis to mitigate the cost to manufacturers of claims. Coverage for physical damage due to a crash and for losses not caused by crashes but by wind, floods and other natural elements and by theft (comprehensive coverage) is less likely to change but may become cheaper if the potentially higher costs to repair or replace damaged vehicles is more than offset by the lower accident frequency rate. The number of vehicle-related workers compensation claims, now responsible for a large but decreasing portion of claim costs according to the National Council on Compensation Insurance, should continue to drop as will the share of healthcare and disability insurance costs related to auto accidents.

Regulation: Insurance is state-regulated. Each jurisdiction has its own set of rules and regulations for auto insurance (and, so far, for self-driving cars). Broadly, there are two kinds of liability systems. In some states liability is based on the no-fault concept, where insurers pay the injured party regardless of fault, and in others it is based on the tort system. But there are many important differences among the states in the regulations that now exist within each category; see the report on No-Fault Auto Insurance. Will the auto insurance system become more uniform with the arrival of self-driving vehicles, and will the federal government play a larger role? If car manufacturers are required to accept more responsibility for damage and injuries, they might push for a greater federal role to reduce the cost of complying with the rules of 51 jurisdictions.

Underwriting: Initially, many of the traditional underwriting criteria, such as the number and kind of accidents an applicant has had, the miles he or she expects to drive and where the car is garaged, will still apply, but the make, model and style of car may assume greater importance. The implications of where a car is garaged and driven might be different if there are areas set aside, such as dedicated lanes, for automated driving.

During the transition to wholly autonomous driving, insurers may try to rely more on telematics devices, known as “black boxes,” that monitor driver activity. Some drivers may object to them based on concerns about privacy. Usage-based insurance policies, which depend on data about the driver’s behavior submitted by an electronic device in the driver’s car, have attracted a smaller-than-expected percentage of the driving population, possibly because people do not want to be monitored. According to the National Association of Insurance Commissioners, use of telematics is forecast to grow to up to 20 percent within the next five years.

Liability: As cars become increasingly automated, the onus might be on the manufacturer to prove it was not responsible for what happened in the event of a crash. The liability issue may evolve so that lawsuit concerns do not drive manufacturers and their suppliers out of business.

RAND has suggested some kind of no-fault auto insurance system. Others foresee something akin to the National Childhood Vaccine Injury Act, a no-fault compensation program for vaccine recipients who suffer a serious adverse reaction when vaccinated. The legislation was passed in 1986 in response to the threat that life-saving vaccines might become scarce or even unavailable if manufacturers, overwhelmed by claims of injury, scaled back or terminated production.

Repair Costs: While the number of accidents is expected to drop significantly as more crash avoidance features are incorporated into vehicles, the cost of replacing damaged parts is likely to increase because of the complexity of the components. It is not yet clear whether the reduction in the frequency of crashes will lead to a reduction in the cost of crashes overall.  
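That tradeoff is just frequency versus severity: the overall cost of crashes is roughly the number of crashes multiplied by the average cost per crash. A minimal sketch of the arithmetic, using entirely hypothetical numbers (none of these figures come from the article or any study):

```python
# Illustrative sketch: can a drop in crash frequency offset pricier repairs?
# All numbers below are hypothetical, for demonstration only.

def expected_annual_crash_cost(crashes_per_year: float, avg_repair_cost: float) -> float:
    """Expected yearly crash cost = frequency x severity."""
    return crashes_per_year * avg_repair_cost

# Conventional vehicle: crashes more often, but repairs are cheaper.
conventional = expected_annual_crash_cost(crashes_per_year=0.05, avg_repair_cost=3000)

# Highly automated vehicle: far fewer crashes, but sensor-laden parts cost more.
automated = expected_annual_crash_cost(crashes_per_year=0.01, avg_repair_cost=9000)

print(conventional)  # 150.0
print(automated)     # 90.0 -- here the frequency drop more than offsets pricier parts
```

With these made-up inputs the frequency reduction wins, but flip the repair cost high enough and it would not, which is exactly why the overall outcome remains unclear.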

Automobile ownership appears to be on the decline, and more people in urban areas are opting for public transportation and shared rides. Some people wonder whether anyone will actually own a car once all vehicles are self-driving. Cars may belong to a company, municipality or other group and may be parked away from the center of the community in a location from which they can be summoned by phone.

A study by the University of Texas at Austin of how the advent of autonomous cars may change vehicle ownership found that each shared autonomous vehicle (SAV) replaced about 11 conventional vehicles. The study assumed that only 5 percent of trips would be made by SAVs.
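The study's two numbers imply a simple back-of-the-envelope estimate. Here is a minimal sketch; the city fleet size is a hypothetical assumption of mine, while the replacement ratio and trip share come from the figures above:

```python
# Back-of-the-envelope use of the UT Austin figures: one shared autonomous
# vehicle (SAV) replaces about 11 conventional vehicles, with SAVs serving
# only 5 percent of trips. The example fleet size is hypothetical.

REPLACEMENT_RATIO = 11   # conventional vehicles replaced per SAV (per the study)
SAV_TRIP_SHARE = 0.05    # share of trips the study assumed would shift to SAVs

def savs_needed(conventional_fleet: int, trip_share: float = SAV_TRIP_SHARE) -> int:
    """Estimate how many SAVs could serve a given share of a fleet's travel."""
    vehicles_displaced = conventional_fleet * trip_share
    return round(vehicles_displaced / REPLACEMENT_RATIO)

# Hypothetical city with 200,000 registered vehicles:
print(savs_needed(200_000))  # 909 SAVs covering 10,000 vehicles' worth of trips
```

The point of the sketch is the leverage: a small shared fleet can, on the study's assumptions, absorb the travel of a much larger privately owned one.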

As I close this article, I have to ask you, the reader:

  1. Are we putting too much trust in computers to autopilot our vehicles with our lives, or can we rely on these companies to keep us safer?
  2. Will autopilot vehicles make us complacent and lazy instead of busy and healthy?

Please share your thoughts in the comments and pass this article along to others. Thank you in advance,



Sources and footnotes:

This article is adapted from:

Tim Harford’s book Messy, published by Little Brown https://timharford.com/books/messy/

It also draws on material from other sources on the subject.

The opinions and views expressed in this article are my own, and the article neither endorses nor refutes automation in today's society.

* William Langewiesche's article in Vanity Fair – https://www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash

** Psychologist James Reason, author of Human Error – https://www.cambridge.org/gb/academic/subjects/psychology/cognition/human-error?format=PB&isbn=9780521314190

Insurance Information Institute

+ U.S. Transportation Secretary Anthony Foxx unveils President Obama's proposal for $4B for automated vehicles

Images and/or videos are copyright to the respective owners.

© 2016 Raymond Brogan & Company
