Autonomous Tech Has Its Risks
The buzz around the IoT (Internet of Things) community is the tragic accident involving a Tesla Model S in autopilot mode, an accident that shook many people to the core. Not because it was a car accident; unfortunately, we hear about crashes caused by human error or texting all the time. Rather, this accident challenged the future of autonomous vehicles.
We talk so often about connected cars and autonomous features that can improve safety by taking the human element out of driving a vehicle. However, this latest crash proved that accidents can still happen, and in this case, the result was fatal.
This highly public event is having a stinging impact on the forward momentum of autonomous vehicles, perhaps in more ways than most people would have anticipated.
Let’s be very clear: Tesla Motors is an amazing company. In fact, Connected World magazine’s editors are continually impressed by Tesla’s vision when the Connected Car of the Year awards come around.
Indeed, the Tesla Model S has earned a Connected Car of the Year distinction for several years in a row in the green category because it simply blows the competition out of the water. As you might suspect, one of the key components we look at when selecting a Connected Car of the Year is the vehicle’s safety features.
The goal of this blog is not to add to all the hype in discussing this sad accident. Nor do I want us all to begin to condemn Tesla Motors or the idea of autonomous vehicles.
Rather, I think it’s important that the industry takes a harder look at what this all means. More importantly, it’s time to really look at all the data and all the facts about autonomous vehicles: where can the industry go from here, and what can it learn from this fatal crash in Florida?
According to Tesla’s blog, which the company posted on June 30, the vehicle was on a divided highway with autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S.
And I quote, “Neither autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.”
Tesla went on to say that under slightly different circumstances—for instance, if the Model S had impacted the front or rear of the trailer—the vehicle’s advanced crash safety system would likely have prevented the driver’s death.
But it didn’t. In response to the fatality, the NHTSA (National Highway Traffic Safety Administration) is opening a preliminary evaluation of autopilot’s performance during the crash.
However, Tesla is quick to point out that this is the first known fatality in more than 130 million miles driven with autopilot activated.
Despite this reality, Reuters reports Tesla shares dipped as much as 3% in after-hours trading after news of the crash and investigation began rippling through the media.
Now, it’s only fair to state that Tesla’s autopilot is in beta mode, and each driver must acknowledge upon activation that the technology is still under development. In fact, this acknowledgement includes verbiage that defines autopilot as “an assist feature” that requires drivers to keep their hands on the steering wheel at all times. Further, it dictates that users maintain control and responsibility for their vehicles at all times.
As we have reported many times before, more than 38,000 people were killed on U.S. roads and 4.4 million were seriously injured last year alone, many as a result of human error and distracted driving. So what does all of this mean? We are getting ahead of ourselves when it comes to perfecting technology. We can’t rely on technology just yet to do the work for us. While driverless cars will help to reduce accidents, much more testing must be conducted.
This appears to be a tragic accident involving a new technology that must be substantially examined before it can be perfected. Unfortunately, that means learning from its mistakes.
Just like anything, autonomous driving will operate on a learning curve. That is scary, but it’s also 100% necessary. We must learn to walk before we can run.
Accidents must always prompt us to take a step back and talk about what could have been better. We must learn from our mistakes when leveraging new and innovative technology. At the same time, we mustn’t stop innovation because we’re afraid.
It’s worth saying, however, that these types of setbacks play a really important role in shaping public perception of a technology’s risks relative to its value proposition.
The NHTSA investigation will certainly add fuel to the fire when it comes to the debate among consumers about whether autonomous vehicles make driving safer or more dangerous.
Among the experts I talk with in the automotive industry, however, safety systems that take complete or partial control of functions such as steering and braking away from drivers are, indeed, safer.
At the same time, they’re not safer by default; they’re only safer if they’re designed correctly and tested thoroughly. And, of course, there will always be accidents. We must understand nothing will be perfect. The risk of a tragic accident will always exist the moment we hit the road, no matter what the circumstance.
Just like when it comes to securing enterprise devices, manufacturers really need to dot their I’s and cross their T’s when building autonomous vehicle systems. The more we know about the technology, the safer we can make our roads.
Want to tweet about this article? Use hashtags #IoT #M2M #Tesla #automotive #autonomous #connectedcars #invehicle #automakers #ModelS #NHTSA