Cyber Security and its part in the evolution of Motor Vehicles
As a car enthusiast with an insatiable curiosity for cyber security, I have always been interested in the possibilities and potential threats that car manufacturers could face when releasing AI-based vehicles. One of the most memorable scenes involving self-driving cars being remotely hacked is in ‘The Fate of the Furious’. In the scene, a sea of AI-based vehicles is taken over and used almost like RC cars by the operations unit in an attempt to chase down the Russian Defence Minister. Obviously there is a high degree of Hollywood artistic licence being applied, but could this be a real threat?
Starting with how autonomous vehicles are actually built may help us come to a conclusion on this question.
As humans we use our sensory functions to operate a vehicle: we look where we are going and listen for other cars and pedestrians to ensure we are safe at all times. Years of driving experience ingrained into our memory allow us to react and make decisions based on past experience, from stopping at red lights, to noticing a cat under a car, or even dodging that pothole on the route to the office that has been there for years, without even thinking about it. These are the characteristics that car manufacturers must recreate in order for the vehicle to drive in the same way a human does. They therefore need to provide the vehicle with not only sensory functions but also cognitive functions (logical thinking, decision-making, memory, and learning).
One of the methods for this is the AI Perception-Action Cycle in autonomous cars. There are three aspects to this cycle:
In-Vehicle Collection and Communication Systems are the equivalent of our sensory system, known as the Digital Sensorium. Using cameras, radars and sensors, the vehicle builds a picture of its environment, enabling it to assess all aspects of its surroundings just as we would. The sensors usually comprise three types, each with separate responsibilities: radar covers the long range (VAG and Tesla were pioneers in using this within adaptive cruise control); the mid-range is mainly covered by lidar and video cameras, which provide an immediate picture of the surrounding area; and ultrasonic sensors and short-range cameras handle close-proximity work, such as the parallel parking assistance seen in Chrysler’s ‘Magic Parking’ or VAG’s ‘Park Assist’. In real time, this data is collected and passed through data communication systems into the onboard supercomputers, where it is processed and the valuable attributes are added to the autonomous cloud platform.
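To make the idea of combining these sensor tiers a little more tangible, here is a tiny, purely illustrative sketch in Python. The class names, range thresholds and readings are all invented for this article, not taken from any manufacturer's system.

```python
from dataclasses import dataclass

# Hypothetical, simplified sensor readings -- real systems fuse thousands
# of points per frame, not single distance values.
@dataclass
class Detection:
    source: str        # "radar", "lidar", "camera" or "ultrasonic"
    distance_m: float  # distance to the object in metres
    bearing_deg: float # direction relative to the car's heading

def fuse(detections: list[Detection]) -> dict:
    """Build a crude picture of the surroundings, grouped by range band."""
    picture = {"long_range": [], "mid_range": [], "short_range": []}
    for d in detections:
        if d.distance_m > 60:      # radar territory
            picture["long_range"].append(d)
        elif d.distance_m > 5:     # lidar / video camera territory
            picture["mid_range"].append(d)
        else:                      # ultrasonic sensors and parking cameras
            picture["short_range"].append(d)
    return picture

frame = [
    Detection("radar", 120.0, 0.0),      # car far ahead
    Detection("lidar", 18.5, -12.0),     # cyclist to the left
    Detection("ultrasonic", 0.6, 90.0),  # kerb while parking
]
print(fuse(frame))
```

The real trick, of course, is not the grouping but resolving conflicts between sensors and doing it thousands of times per second.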
Which brings us to the Autonomous Driving Platform (ADP) – a cloud-based intelligent agent which uses the collected data and AI algorithms to make key decisions. This part is key to our original question: if this stage were tampered with, the actions of the vehicle could be altered to crash the car or even to target certain object classes, such as pedestrians. In essence it is the brain of the car; it builds an immediate picture of its surroundings while processing data logs and actually driving the car. The decision-making aspect of this component has been debated for a long time: how can the car make life-or-death decisions? For example, a car has crashed in front of you and the stopping distance is too short for you to brake and avoid impact; at the same time there is a child on a bike on the pavement beside you and oncoming traffic on the other side of the road. It is an impossible situation for a human to decide, as every possible outcome has casualties. In our heads we would assess the outcomes and take into account our own personal choices, such as ‘I would rather be hurt than hit the child’. That is a conscious choice we make, but should, or even can, an AI agent make that decision for us? That is a discussion for another time. The process by which the AI classifies and recognises objects is known as semantic segmentation. Once this data has been processed through various methods, including decision trees, HD maps and simulations, it is stored and used to make decisions that are then passed over to the functions of the vehicle. The architecture of the ADP is shown in the figure below, where we can see the correlation between the three stages of the cycle.
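Since semantic segmentation came up, here is a minimal sketch of what that step looks like in practice, using an off-the-shelf pretrained model from torchvision as a stand-in. The model choice and the 'dashcam_frame.jpg' input are illustrative assumptions, not what any car maker actually runs.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a general-purpose pretrained segmentation network.
# This is an illustrative stand-in, not a production automotive model.
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("dashcam_frame.jpg").convert("RGB")  # hypothetical frame
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    output = model(batch)["out"][0]  # per-pixel class scores
labels = output.argmax(0)            # one class ID per pixel

# Each pixel now carries a label (person, vehicle, road, etc.), which is
# the raw material the driving platform reasons over.
print(labels.shape, labels.unique())
```

A real ADP would run a far heavier, purpose-trained network many times a second and feed the result into planning, but the per-pixel labelling idea is the same.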
The third component of the cycle is the AI-based functions. Based on decisions made by the AI, the vehicle can detect and track objects while manoeuvring through traffic and along the road without any input from the human driver. Further features are being added, such as voice recognition (as seen in the movie 2012 with the Bentley), gesture controls and eye tracking, which is used to assess the driver’s movements and attention to the road, allowing further autonomous features to be enabled in BMWs. This again creates data which allows the cycle to repeat. The data loop repeats constantly, enabling the vehicle to become more intelligent and accurate.
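Detecting an object once is only half the job; the car also has to track it from frame to frame so it can anticipate where it is heading. The toy nearest-neighbour tracker below, with made-up detections and thresholds, is only meant to illustrate that loop, not to resemble a production tracker.

```python
import math

# Hypothetical detections: (x, y) positions of objects per frame, in metres.
frames = [
    [(10.0, 2.0), (25.0, -1.5)],
    [(9.2, 2.1), (24.1, -1.4)],
    [(8.5, 2.2), (23.0, -1.3)],
]

def track(frames, max_jump=2.0):
    """Assign stable IDs by matching each detection to the nearest track
    from the previous frame. Real trackers use Kalman filters, proper
    assignment algorithms and appearance features; this just shows the loop."""
    tracks = {i: [pos] for i, pos in enumerate(frames[0])}
    for frame in frames[1:]:
        for pos in frame:
            best_id, best_dist = None, max_jump
            for tid, history in tracks.items():
                d = math.dist(history[-1], pos)
                if d < best_dist:
                    best_id, best_dist = tid, d
            if best_id is None:
                tracks[len(tracks)] = [pos]   # a new object entered the scene
            else:
                tracks[best_id].append(pos)   # continue an existing track
    return tracks

for tid, history in track(frames).items():
    print(f"object {tid}: {history}")
```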
If you are into cars you may have seen some of the concept cars without any steering wheel or pedals. These are known as Level 5 autonomous vehicles, and although they are mostly concepts, there are a few working examples in use off public roads. ANA has been testing autonomous shuttle buses at Haneda Airport to move passengers and staff between the terminals and the planes in an attempt to increase efficiency. The project has been such a success that there is now talk of them being used at the Tokyo 2020/21 Olympics!
These use more complex systems, mainly neural networks, which have the potential to surpass the human brain in discerning patterns and identifying objects. But again, that is another area of discussion.
So now that we understand some of the basics of the autonomous vehicle, we can look at the potential vulnerabilities within its systems. Due to the innovative nature of autonomous vehicles, there is an obvious lack of historical data, meaning that some traditional risk assessment methods are ineffective. A few frameworks have been proposed specifically for autonomous and connected vehicles (that is, vehicles that update their interfaces over network connections). One of these uses a Bayesian Network model, which builds on the Common Vulnerability Scoring System (CVSS), offering a representation of the probability structure and of the parameters to be considered when talking about mitigating autonomous cyber-risk. This framework was tested against the GPS systems of the vehicles, both with and without cryptographic authentication. Toyota has released its own security tool named PASTA, an open-source testing platform specifically designed for autonomous vehicles, with the capability of testing how a third-party feature would affect the vehicle and its security.
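To make the Bayesian-network idea a little more concrete, here is a deliberately tiny, hand-rolled calculation showing how adding cryptographic authentication to GPS messages changes an estimated spoofing risk. The probabilities are placeholder values chosen for illustration, not figures from the framework mentioned above.

```python
# Toy Bayesian-style risk calculation for GPS spoofing.
# All probabilities are illustrative placeholders, not measured values.

def p_successful_spoof(p_attack_attempt, p_bypass_given_attempt):
    """P(spoof) = P(attempt) * P(defences bypassed | attempt)."""
    return p_attack_attempt * p_bypass_given_attempt

p_attempt = 0.10  # assumed chance the vehicle is targeted at all

# Without cryptographic authentication, a crafted GPS signal is
# assumed to be accepted most of the time.
without_auth = p_successful_spoof(p_attempt, p_bypass_given_attempt=0.80)

# With authentication, only a flaw in the scheme lets the spoof through.
with_auth = p_successful_spoof(p_attempt, p_bypass_given_attempt=0.05)

print(f"Estimated spoof risk without authentication: {without_auth:.3f}")
print(f"Estimated spoof risk with authentication:    {with_auth:.3f}")
```

A real Bayesian network chains many more of these conditional probabilities together, but the principle of comparing risk with and without a mitigation is the same.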
When considering the potential flaws in these vehicles there is a vast amount to think about: the security keys held within the ECUs, wireless key fobs which can now unlock and start the car without the key ever being physically inserted, and even the tyre pressure monitoring systems. If exploited, any of these could have damaging effects on the owner and on the immediate environment around the vehicle.
As we know, there are many forms of cyber-attack that we are presented with at static sites, but one of the main differences between, say, an office and a vehicle is that the vehicle is constantly out in the open. In the past a car thief might force a crowbar down the window to pop the door and then hotwire the vehicle, or, in the case of the Vauxhall Nova, pull out the hazard switch, turn it upside down and bump-start the car! Now a car-jacker can intercept your locking signal and copy it before using it to gain physical access to your vehicle, a crime that is becoming increasingly common. But what if they took it one step further? Tesla are now researching a system that connects all Tesla drivers in the local area in a live chat; if a hacker were to breach this system from inside one car, they would have the potential to take over the interfaces of every Tesla in the area. Once they have access to the interface, what else could they do? Would that be a localised incident, or would it result in access to the wider Tesla network through the wireless firmware update channels?
There are examples of this. In 2015 two security researchers, Miller and Valasek, sent a Jeep Cherokee into a ditch. The Uconnect infotainment system was the target of the attack, reached through Sprint’s cellular network, which powers the in-car Wi-Fi. Using a smartphone connected to a laptop to scan for Uconnect-equipped vehicles, they were able to view the vehicle’s identification number, IP address, make and model, and even its live GPS location. Once the target was identified, they accessed the entertainment system firmware using malicious code designed specifically for the Cherokee. After breaching this, they were able to take over the powertrain and send the car into a ditch from over 10 miles away. This exploits stage three of the cycle, where they were able to overpower the AI-based functions for malicious purposes.
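Research of this kind usually begins with simply observing what is already flowing on the vehicle's internal CAN bus. A minimal, read-only listener using the python-can library might look like the sketch below; the 'can0' interface name assumes a bench setup with SocketCAN, and nothing is ever transmitted.

```python
import can

# Read-only CAN listener, the kind of tool researchers use to study
# traffic on a test bench. "can0" is an assumed SocketCAN interface name.
bus = can.interface.Bus(channel="can0", interface="socketcan")

try:
    for _ in range(100):             # sample 100 frames, then stop
        msg = bus.recv(timeout=1.0)  # returns None if nothing arrives
        if msg is not None:
            print(f"ID={msg.arbitration_id:#05x} data={msg.data.hex()}")
finally:
    bus.shutdown()
```

Moving from watching traffic to reverse-engineering and influencing it, as Miller and Valasek did, is a far bigger jump, which is part of why their research took years.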
Going back to Tesla, no more than three months ago a researcher, Lennert Wouters, devised a method of hijacking a Tesla Model X in minutes with the help of a Raspberry Pi, using a vulnerability in the update process of the Tesla Model X key fobs. This attack requires more of a social-engineering focus than a sofa-based approach. The attacker must approach the owner of the vehicle to within 5 metres (still social distancing, of course), where they then use an older, modified ECU to ensnare the key fob. Once connected, the malicious firmware forcefully updates the key fob; the range increases at this point to 30 metres, but the update takes a minute and a half. After completing the update, the attacker can pull the unlock messages from the key fob, which allows physical access to the car. The next step is connecting to the diagnostic port and pairing the attacker’s own fob to the vehicle to gain full access.
With both companies now running bug bounty schemes (Tesla taking it to the extreme by offering a new car to anyone who presents them with a critical vulnerability), security flaws should be spotted and reported more quickly and efficiently.
So, in conclusion, is it possible to recreate ‘The Fate of the Furious’ scene? Personally, I believe that with the right level of equipment and manpower it could be! Unlike conventional computers, built from the same staple components, each vehicle comes with unique challenges and would require specific code and hardware for each manufacturer, or even each model of vehicle, as each has its own specific vulnerabilities and weaknesses. In other words, a lot more than tapping on a single laptop and taking over hundreds of vehicles at once! Over the next few years I’m sure we will see more successes from the autonomous vehicle sphere, but also equally valuable learning opportunities. Either way, it is an exciting time for the automotive industry!
It would be great to speak to anyone within the industry on this subject, as I would love to find out more and hear about your experiences.