Automotive Cyber Security for accountants and lawyers
The last part of the title is obviously a joke; the goal is to make this article understandable even to non-experts.
We will take a short tour of the security of your car and, like Dante, descend from easily manageable risks down to the last circle of hell: internal cyber attacks.
Security
The term safety, when applied to our cars, takes many forms.
The first, in chronological order, is undoubtedly protection against theft: nobody likes having their car stolen, and we have gone from the mechanical steering lock to electronic immobilizers and satellite trackers, which we will discuss later in another context.
In 1974 Volvo introduced the ABS system to the market; today it is mandatory equipment, along with seat belts, ESP and airbags. So we are talking about the safety of the occupants, the most important thing. To these are added a whole series of other systems, such as trajectory control and on-board radar, down to the humble but very useful reversing sensor.
In the 1980s the first on-board computer appeared, called the "diagnostic system"; its functions were limited to checking the status of the bulbs and little else. Cars were still largely analog.
Since then, thanks also to the improvement of chip production systems, the share of electronic components in vehicles has grown exponentially.
As early as 1983 it was clear that the growing number of on-board sensors would lead to an inevitable increase in wiring, and consequently in cost and complexity, so Robert Bosch GmbH developed the CAN-BUS, a data communication network which, with just two wires, connects all the sensors and actuators, which thus become "intelligent".
Today the on-board computer is called the ECU (Electronic Control Unit). Born, in its most modern form, as a fuel control unit, it now coordinates a set of intelligent modules, themselves computers, each with a specific task: control of the braking system, of the transmission, management of services such as lights and air conditioning, and so on.
As a Toyota essay put it, "Today the function of the wheels in a car is to carry the on-board computer."
All these systems significantly increase the level of safety of today's motor vehicles; they are very fast and can manage the braking and stability of the car within a few thousandths of a second.
So far the scenario is quite reassuring: technology at the service of safety. Technology which, thanks to advanced design and simulation techniques, has also made it possible to design cockpits that are safer in the event of a crash, collapsible steering columns and much more.
Cyber Security
With the rise of smart electronics, the natural next step, as with smartphones, was to provide services.
With the advance of Smart Technologies, alongside the processing of sensor parameters, we have witnessed the birth of truly "individual" data management.
In addition to engine parameters, many latest-generation cars integrate sensors and connected multimedia equipment that can analyze driving habits and memorize the places visited. As a newer trend, biometric sensors are being installed to recognize the individual: the car can be configured dynamically for each driver, according to a profile built up from their driving behavior.
This data is not always managed locally; there are often service providers who also deliver infotainment, short for information and entertainment: traffic information, multimedia content and what is called dynamic mapping.
So let's start with the first concern.
Privacy
Who manages our data? And, above all, how is it used?
On social media we are in control: we consciously decide what information to share and with whom.
In our car, on the contrary, we have no visibility at all of the data collected, let alone any way to manage it. In essence, we cannot apply good practice to something we do not control.
To deal with this situation, on 28 January 2020, the EDPB (European Data Protection Board) issued the European guidelines: "Guidelines 1/2020 on processing personal data in the context of connected vehicles and mobility related applications".
The document, which I invite you to read, is quite comprehensive: it analyzes in detail all aspects of the management of data processed inside the car, data exported to external service providers, biometric data and, finally, data relating to behavior that may reveal offences and infringements.
It also provides interesting guidelines on privacy by design and by default, i.e. designing a vehicle with privacy in mind. Finally, it rationalizes privacy consent, which is currently practically absent in the use of motor vehicles.
But this is just a recommendation; my mother too, when I was little, advised me, with poor results, not to run around so as not to get sweaty.
Something more incisive is represented by the UNECE R155 regulation, issued by the United Nations Economic Commission for Europe.
The core of the regulation is the CSMS (Cybersecurity Management System), therefore the aspects strictly related to cyber attacks, but it also contains a requirement regarding privacy:
“The vehicle manufacturer must demonstrate that the processes used as part of its information security management system will ensure the monitoring referred to in paragraph…”
This monitoring must:
“… include the ability to analyze and detect cyber threats, vulnerabilities and cyber attacks from vehicle data and vehicle logs. This capability must respect the right to privacy of car owners or drivers, especially regarding consent.”
This regulation is set to become mandatory for the type approval and registration of future vehicles.
The interesting thing is that it shifts accountability for security onto vehicle manufacturers, that is, clearly identifiable parties, as opposed to cloud service providers, which are often elusive even in practice.
Finally, the ISO/SAE 21434 standard is entirely dedicated to vehicle cyber security and, together with ISO 26262, dedicated to the functional safety of vehicles and their occupants, completes the set of common rules needed to guarantee IT security in the automotive sector.
Risk of attack
The main parameter associated with an attack is its risk factor. Let me introduce a formula of my own devising, the only one, I promise. It is extremely simple and intuitive.
It expresses the risk that an attack will occur as a function of its plausibility and its technical feasibility.
R = P x F
that is, Risk = Plausibility x Feasibility
Plausibility is a very delicate parameter and depends on the context, including the sociological one; on a hypothetical scale we go from the "simple pleasure of doing it" all the way up to the intention of destabilizing the social life of a foreign country. The main question we have to answer is cui prodest? Who benefits?
Technical feasibility, in the simplest cases, takes the value 0 or 1; in practice we assign intermediate values when the attack is technically feasible but very costly or complex. For example, the need for an extremely well-equipped laboratory (>€100k) would reduce the number of people who could attempt the attack.
In other words, where technical feasibility is a given, the risk associated with an attack is equal to its plausibility, i.e. to the likelihood that someone will actually want to carry it out.
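Just to make the formula concrete, here is a minimal sketch; the 0-to-1 scales and the example values are purely illustrative, not taken from any standard:

```python
def attack_risk(plausibility: float, feasibility: float) -> float:
    """Risk = Plausibility x Feasibility, both expressed on a 0..1 scale."""
    if not (0.0 <= plausibility <= 1.0 and 0.0 <= feasibility <= 1.0):
        raise ValueError("plausibility and feasibility must be between 0 and 1")
    return plausibility * feasibility

# Hypothetical example: an attack that is quite plausible (0.8) but requires
# a very expensive laboratory, which lowers its practical feasibility (0.3).
print(attack_risk(plausibility=0.8, feasibility=0.3))  # -> 0.24
```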
External cyber attacks
Someone might say, out of laziness, "I prefer to have lots of services, I have nothing to hide anyway"; but here the matter becomes delicate, because it involves the physical safety of the occupants.
Paradoxically, the electronics that make our cars safe become, with the advent of Smart Technologies, their Achilles heel.
Every connection to the outside becomes a potential gateway for hackers.
The automotive industry has long been reluctant to face this type of problem, often adopting the "security through obscurity" principle, that is: we increase security by hiding the details. A very short-sighted attitude indeed; I assure you that today everything can be decoded, so the Feasibility tied to the mere lack of information quickly becomes 1.
Back in 2015, Charlie Miller and Chris Valasek, in a controlled experiment, sent a Jeep Cherokee traveling on the highway off the road by attacking it remotely through its Internet-connected infotainment system.
In 2021, again in a controlled experiment, Weinmann and Schmotzle hacked a Tesla Model 3, bypassing the internal firewall and managing to open its doors (the disturbing thing is that they could probably have done it even with the car in motion).
Fortunately, awareness has since grown, and cyber security is now one of the main concerns of the automotive world.
The goal is purely technical and very simple: to make the Feasibility of an attack zero, or close to zero.
First, a very in-depth analysis is carried out of what is called the Attack Surface, i.e. the set of potential illicit access points. The process is very similar to an FMEA (Failure Mode and Effects Analysis), whose output is a list of critical points and of what they can lead to.
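To give an idea of what the output of such an analysis might look like, here is a minimal, purely illustrative sketch; the entry points, the 1-to-10 scales and the scores are invented, not taken from any real assessment:

```python
# FMEA-style inventory of an (invented) attack surface.
# Each entry point gets severity, likelihood and detectability scores from 1 to 10.
attack_surface = [
    ("Cellular / telematics unit",  9, 6, 7),
    ("Wi-Fi / Bluetooth interface", 8, 7, 6),
    ("OBD-II diagnostic port",      7, 4, 3),
    ("USB / media player",          5, 5, 4),
]

def risk_priority(severity: int, likelihood: int, detectability: int) -> int:
    # Classic FMEA Risk Priority Number: the higher, the sooner it must be addressed.
    return severity * likelihood * detectability

for name, sev, lik, det in sorted(attack_surface,
                                  key=lambda e: -risk_priority(*e[1:])):
    print(f"{name:30s} RPN = {risk_priority(sev, lik, det)}")
```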
Subsequently, the Penetration Tests are carried out, i.e. a series of cyber attacks aimed at testing the security measures introduced.
The fundamental rule is that Attack Surface Analysis and Penetration Tests should not be performed by those involved in implementing the security measures, but by third parties.
Hackers, for their part, analyze the links in the chain, always looking for the weakest one, which may not be the most critical: what matters is that it is the most exposed, because inside the chips everything communicates with everything else.
How does one discover the vulnerabilities of a connected system?
Forget for a moment the image of the young hacker in a hoodie: it's a Hollywood stereotype. A laptop with a Wi-Fi scanner is not enough to identify vulnerabilities, at least not anymore.
With the improvement of firewalls, a brute-force attack (software that tries all possible combinations) no longer has any effect; we need to look in more detail at what goes on behind the scenes.
First of all, we need to identify the MCU (Micro Controller Unit), which in 99% of cases will be a MIPS, ST Microelectronics or NXP Semiconductors CPU. The datasheets for these chips can be found online, so the internal architecture and the routing to the pins are known. In the worst case we may be facing an FPGA (a chip that manufacturers can customize); then everything is more complicated and takes a lot of patience, because the routing chosen by the manufacturer could be arbitrary, although, without going into detail, there are recommendations and habits on how pins are arranged.
Once the MCU has been identified, we need to communicate with it. On all boards there are test points, the contact pads used to program and test the chips during ECU production.
We connect a SEGGER probe (the J-Link, a universal programming/debugging unit for microcontrollers costing around €2,000) and try to attach using the various protocols available.
Once connected, it is possible to dump the firmware and have fun with GHIDRA.
GHIDRA is free reverse-engineering software released by the NSA (National Security Agency) to support cyber security work; it is, in fact, also highly appreciated by hackers.
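To give a flavor of this step, here is a minimal sketch of a firmware dump through a J-Link probe using the pylink library; the target device name and the flash address range are hypothetical placeholders and depend entirely on the MCU actually identified on the board:

```python
import pylink  # Python wrapper around the SEGGER J-Link software

# Hypothetical target and flash range: real values depend on the MCU identified.
TARGET_DEVICE = "STM32F407VG"
FLASH_BASE = 0x08000000
FLASH_SIZE = 1024 * 1024  # 1 MiB

jlink = pylink.JLink()
jlink.open()                                     # open the attached J-Link probe
jlink.set_tif(pylink.enums.JLinkInterfaces.SWD)  # try SWD first, then JTAG
jlink.connect(TARGET_DEVICE)

# Read the flash contents and save them for later analysis in GHIDRA.
data = jlink.memory_read8(FLASH_BASE, FLASH_SIZE)
with open("ecu_firmware.bin", "wb") as fh:
    fh.write(bytes(data))
```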
Now you need a CAN sniffer and a CAN sensor/device simulator. The first is software that captures and analyzes CAN-BUS traffic; there are many available online, but every hacker writes their own. The second is a microcontroller board that lets you "impersonate" any CAN device; it can be found online for a few tens of dollars.
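In its simplest form, a sniffer is nothing more than a loop that prints every frame seen on the bus. A minimal sketch using the python-can library, assuming a Linux socketcan interface named can0 (for example a cheap USB-CAN adapter):

```python
import can  # python-can library

# Assumes a Linux socketcan interface named "can0"; adjust to your adapter.
bus = can.Bus(channel="can0", interface="socketcan")

try:
    while True:
        msg = bus.recv(timeout=1.0)  # wait up to one second for the next frame
        if msg is not None:
            # Each CAN frame carries an arbitration ID and up to 8 data bytes.
            print(f"ID=0x{msg.arbitration_id:03X}  data={msg.data.hex(' ')}")
except KeyboardInterrupt:
    bus.shutdown()
```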
At this point, with a lot of patience, you have to map the ECU and the auxiliary modules, and then analyze the behavior of the telegrams on the CAN bus, looking for vulnerabilities by provoking anomalous ("toxic") behavior.
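The mapping step essentially means working out which arbitration IDs are present on the bus and how often each module talks; everything else, i.e. matching IDs to physical functions, is patient observation. Again a minimal sketch, with the same assumed interface name:

```python
import time
from collections import Counter

import can

bus = can.Bus(channel="can0", interface="socketcan")  # assumed interface name
seen = Counter()

start = time.time()
while time.time() - start < 30:       # observe the bus for 30 seconds
    msg = bus.recv(timeout=1.0)
    if msg is not None:
        seen[msg.arbitration_id] += 1

bus.shutdown()

# A first, rough map: which IDs exist and how chatty each one is.
for arb_id, count in seen.most_common():
    print(f"ID 0x{arb_id:03X}: {count / 30:.1f} frames/s")
```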
As you can see, these activities require time, money and specific knowledge, perhaps a little too much for the "just for the fun of it" motive.
This makes me think that the purpose of such an effort lies elsewhere: I imagine that the results of these analyses, the vulnerabilities identified, as well as the decrypted dumps of the control units, could have a market on the dark web, or could even be commissioned.
We have now reached the last circle of hell, where we talk about:
Internal Cyber Attacks
We are truly in a minefield here: it is difficult to talk about this without being accused of being conspiracy theorists, but our duty is to analyze the technical feasibility of such an attack; if the plausibility factor turns out to be zero, so much the better.
An internal attack is the malicious behavior of an ECU into which a modified chip was installed during production.
We have no hope against this attack if we don't think it might exist.
In other words, imagine that a company producing custom chips, under pressure from a government or a terrorist group, wants to distribute modified chips to ECU manufacturers.
Would it be able to do so? What kind of modifications could it make? And how could it implement them without being discovered?
The final objective could be to take control of the ESP and ABS systems on an external command and, for example, to lock the right front wheel when the vehicle's speed exceeds 120 km/h.
How to implement this? Let's make some hypotheses together.
Let's immediately rule out the simplest way, modifying the firmware, i.e. the software program of the control unit: this is almost always loaded into the processor directly by the ECU manufacturer and, moreover, any modification leaves a trace, since a check word verifies the integrity of the software and would immediately unmask such a clumsy attempt, without anyone even having to disassemble the code in ROM.
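The idea behind that check word is the same as behind any integrity check: a value computed over the firmware image that changes if even a single bit changes. A minimal illustration follows; a real ECU would typically use a CRC or a cryptographic signature, and the file name and reference value here are just placeholders:

```python
import hashlib

def firmware_digest(path: str) -> str:
    """Return a digest of the firmware image; any modification changes it."""
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

# Hypothetical reference value recorded by the ECU manufacturer at production time.
EXPECTED_DIGEST = "put-the-reference-digest-here"

if firmware_digest("ecu_firmware.bin") != EXPECTED_DIGEST:
    print("Integrity check failed: the firmware has been modified.")
```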
Suppose instead that, through intelligence work, we have obtained the source of the chip's main core, or the design of its main macro-block, or that, with a lot of patience and money, we have reverse engineered the die directly.
How can we change the behavior of an MCU without knowing in advance the firmware that will be loaded onto it?
A creative approach is required.
The simplest and most immediate idea that comes to mind (an expert could certainly do better) is to insert a ghost coprocessor inside the chip; we are talking about a unit with very few features, which would therefore be extremely small.
This coprocessor would not have routing to the external pins (which are all already assigned) but would simply be connected to the CAN like any peripheral and could communicate with the main processor via a shared memory cell.
The obvious question you may be asking is: how do we change the behavior of the main core without knowing the firmware?
Simple: we modify not the firmware but the microcode of an instruction. In other words, we recode the behavior of an existing instruction so that, in addition to its original function, it performs one very simple extra operation: check a memory cell and, if it is set, mask all interrupts and decrement the program counter, so that the core remains stuck in a perpetual loop.
Obviously we will choose the instruction carefully among the less commonly used ones, avoiding, for example, a register load.
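Just to fix the idea, here is a toy model of what that hidden micro-operation would do. This is ordinary Python simulating the concept, not real microcode, and every name and address in it is invented:

```python
# Toy model of the "trojaned instruction" described above: the documented
# behavior is preserved, and a hidden check is piggy-backed onto it.

class ToyCore:
    def __init__(self):
        self.pc = 0                  # program counter
        self.interrupts_enabled = True
        self.memory = {0xBEEF: 0}    # 0xBEEF plays the role of the shared cell

    def rarely_used_instruction(self, operand):
        result = operand ^ 0xFF      # the documented, original behavior
        # --- hidden micro-operation added by the malicious chip maker ---
        if self.memory[0xBEEF]:      # the ghost coprocessor has set the flag
            self.interrupts_enabled = False
            self.pc -= 1             # re-execute forever: the core is frozen
        else:
            self.pc += 1             # normal case: nothing observable changes
        return result

core = ToyCore()
core.rarely_used_instruction(0x0F)   # behaves exactly as documented
core.memory[0xBEEF] = 1              # the ghost coprocessor flips the cell
core.rarely_used_instruction(0x0F)   # same op-code, but now the core hangs
print(core.pc, core.interrupts_enabled)
```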
The ghost coprocessor could now become the CAN-MASTER and do anything.
The op-code of the instruction would remain the same, so under forensic analysis the firmware would appear unmodified.
Would it be possible to discover this modification by analyzing sample chips?
Hardly, at least not by instrumental means, unless you already know where to look: a few extra clock cycles, spent checking the cell in a rarely executed instruction, would be very difficult to detect. The ghost coprocessor would certainly be visible to X-ray or thermographic analysis, but there would have to be a good reason to perform one; it is certainly not common practice.
Is it complicated to put a ghost processor on a chip?
Today chips are mostly "composed" from library macro-blocks; they contain several million transistors, and you don't think they are drawn one by one, do you? It is a task a chip maker is absolutely capable of.
Even FPGAs, chips that the user can design, come with very easy-to-use tools; for example, inserting a MicroBlaze coprocessor inside an AMD Xilinx Spartan-7 FPGA is a very simple task.
We select the processor from the library (or import it), drag it into the project, parameterize it and connect it to the main processor and/or the I/O pins.
In our case, having no I/O pins available for our ghost coprocessor, we could even load its firmware via CAN by allocating extra message codes.
Finally, is it possible to change the behavior of an instruction?
It's not trivial, but it's definitely within the reach of a chip maker. And if the architecture is the new RISC-V (and the largest chip manufacturer for ECUs is migrating to this architecture), it is even simpler: the RISC-V project is open source, born in academia, and completely open, and there are dozens of researchers and expert designers who could do this in no time.
I will talk about RISC-V in a future article.
The technical feasibility index of this attack, once the chip design has been obtained (which is only a matter of time and money), is unfortunately quite high.
Is it plausible that this could happen?
I would say no, but I would have answered the same way if, in 2019, someone had asked me about the plausibility of an attack on a foreign country in the heart of Europe…
How to prevent such a scenario?
Today, more than ever, we need thorough, fine-grained control over strategic electronics suppliers, avoiding merely "convenient" ones.