The Blurring of Safety and Security for Embedded Devices
By Andrew Caples
I recently found myself musing on how the boundaries between safety and security for embedded systems are converging, and how the lines separating these two important concepts have become blurred.
In the past, it seemed as if “safety” and “security” were two distinct and separate topics. Safety always appeared to have well-defined requirements associated with rigid processes based on recognized and accepted standards such as IEC 61508, IEC 62304, and DO-178C. Security, not so much. Security appeared to be based on soft requirements, implemented using a variety of processes (some more rigid than others) and without universally accepted standards. I never envied the security engineers, as their task looked harder than the safety engineer’s. (I’m sure the safety engineers would take umbrage with that comment.)
Safety vs. Security
One way to look at it: security could be considered more challenging because it is a negative goal, proving that something bad cannot happen rather than that something good does. Safety, after all, deals with specifying detailed requirements, building software to those requirements, and finally testing and documenting what was completed as evidence that the requirements were met. If the system does not meet the requirements (a deviation between the application and the specification), then it can be considered a bug and fixed. A lot of work, for sure, but the process is well defined.
Security, on the other hand, seemed more challenging because capturing every possible way a device might be exploited in a concise requirements document was not easily accomplished. Maybe for that reason, on many occasions, a concise security requirements document wasn’t generated at all. And without a specification, topics such as coverage and completeness make testing much more difficult. Then there’s the probability distribution. For safety, a one percent chance of something going wrong may be acceptable if the system still meets its specification. For security, not so fast, as the probability distribution may not be in our control: if there’s a one percent chance of failure, how does one account for an adversary who can spend 100 percent of their time making that one percent failure happen every time?
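To make that last point concrete, here is a toy sketch (mine, not from any standard; the one-percent rate and request count are illustrative assumptions) contrasting a random fault model with an adversarial one:

```c
#include <stdio.h>

/* Toy model: a device has one failing input out of every 100.
 * Under random use, failures follow the 1% base rate.
 * An adversary who knows the failing input hits it every time. */
#define BASE_FAILURE_RATE 0.01   /* illustrative assumption */
#define REQUESTS_PER_DAY  10000  /* illustrative assumption */

int main(void)
{
    double random_failures = BASE_FAILURE_RATE * REQUESTS_PER_DAY;
    /* The adversary replays only the known-bad input, so the
     * effective failure rate is no longer 1% but ~100%. */
    double adversarial_failures = 1.0 * REQUESTS_PER_DAY;

    printf("Random use:      %.0f failures/day\n", random_failures);
    printf("Adversarial use: %.0f failures/day\n", adversarial_failures);
    return 0;
}
```

The failure rate hasn’t changed; what changed is who gets to choose the inputs.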
Today, it’s hard to have a safety discussion without encroaching on security, or vice versa. This will only continue as essentially every device becomes connected in one fashion or another, and thus subject to possible exploitation that could lead to device failure, which is both a security and a safety issue. For example, consider how the exploitation of a security vulnerability in an infusion pump might result in a safety hazard for a patient.

As safety and security meet in the real world, the same is occurring in development, at least from a process perspective. There are similarities between the development processes for safety and security. For safety, standards such as DO-178C and IEC 62304 define the Safety Development Lifecycle (SDL), which serves as a framework for meeting regulatory approval. Meeting safety requirements demands proactively managing hazards and the associated risk. Tools such as Fault Tree Analysis (FTA) can be used for hazard analysis (a minimal sketch follows below). When it comes to risk, the goal is to manage the probability and severity of harm. Thus, to meet safety criteria, risk management is required to determine the level of risk that is considered acceptable.
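To give a flavor of what FTA quantification looks like, here is a minimal sketch (my illustration, not the article’s; the gate structure and basic-event probabilities are assumptions) that computes a top-event probability from independent basic events through AND/OR gates:

```c
#include <stdio.h>

/* Minimal fault-tree arithmetic for independent basic events:
 * AND gate: P = p1 * p2
 * OR gate:  P = 1 - (1 - p1) * (1 - p2) */
static double and_gate(double p1, double p2) { return p1 * p2; }
static double or_gate(double p1, double p2)  { return 1.0 - (1.0 - p1) * (1.0 - p2); }

int main(void)
{
    /* Illustrative basic-event probabilities (per demand). */
    double sensor_fault   = 1e-4;
    double software_fault = 1e-3;
    double watchdog_fault = 1e-2;

    /* Top event: a hazardous output occurs, i.e. (sensor fault OR
     * software fault) AND the watchdog fails to catch it. */
    double fault = or_gate(sensor_fault, software_fault);
    double top   = and_gate(fault, watchdog_fault);

    printf("P(top event) = %.2e\n", top);
    return 0;
}
```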
Determining Safety
One way to think about safety is as the absence of unacceptable risk. Although there are various models that can be used to determine risk, the common thread is a combination of the probability of an occurrence and the severity of the harm.
The safety formula looks something like this:
Risk = Function [Probability, Severity (harm)]
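As a minimal illustration of that function (my sketch, not a standard’s; the probability and severity buckets and the matrix values are assumptions), risk is often evaluated as a lookup in a probability-by-severity matrix:

```c
#include <stdio.h>

/* Illustrative 3x3 risk matrix: rows = probability bucket,
 * columns = severity bucket. Values: 0 = acceptable,
 * 1 = review required, 2 = unacceptable. */
enum prob { P_LOW, P_MED, P_HIGH };
enum sev  { S_MINOR, S_SERIOUS, S_CRITICAL };

static const int risk_matrix[3][3] = {
    /*            minor  serious  critical */
    /* low    */ { 0,     0,       1 },
    /* medium */ { 0,     1,       2 },
    /* high   */ { 1,     2,       2 },
};

int main(void)
{
    int risk = risk_matrix[P_MED][S_CRITICAL];
    printf("Risk level: %d (0=acceptable, 1=review, 2=unacceptable)\n", risk);
    return 0;
}
```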
Once the hazard and risk analysis are completed, the device can be classified based on the severity of the potential harm using industry-specific methodology. For medical devices, the IEC 62304 standard provides the following Software Safety Classifications (a small classification sketch follows the list):
- Class A: No injury or damage to health is possible
- Class B: Non-serious injury is possible
- Class C: Death or serious injury is possible
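As a minimal sketch of how that mapping might be encoded (my illustration; IEC 62304 defines the classes in prose, not code), worst-case harm maps to a software safety class like this:

```c
#include <stdio.h>

/* Worst-case harm from a software failure, per the hazard analysis. */
enum harm { HARM_NONE, HARM_NON_SERIOUS, HARM_SERIOUS_OR_DEATH };

/* Map worst-case harm to an IEC 62304 software safety class. */
static char classify(enum harm worst_case)
{
    switch (worst_case) {
    case HARM_NONE:        return 'A'; /* no injury possible */
    case HARM_NON_SERIOUS: return 'B'; /* non-serious injury possible */
    default:               return 'C'; /* death or serious injury possible */
    }
}

int main(void)
{
    printf("Infusion pump dosing module: Class %c\n",
           classify(HARM_SERIOUS_OR_DEATH));
    return 0;
}
```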
And, of course, there is a need to define safety requirements to mitigate the risk. This process aligns extremely well with security development, the difference being that security manages vulnerabilities rather than hazards (Figure 1).
Figure 1: Safety focuses on hazard, while security focuses on vulnerability.
Determining Security
Guidelines and industry-specific standards for cyber security are becoming more commonplace, such as FIPS 140, NIST 800-53, and IEC 62443, to name a few. Managing vulnerabilities requires an assessment, which can be completed with tools such as HAZOP (Hazard and Operability analysis). The risk for each vulnerability must then be assessed using one of a variety of methodologies that analyze the threat, the vulnerability, and the severity.
The security formula looks something like this:
Risk = Function [Threat, Vulnerability, Severity]
When determining risk, if the assessment shows that the risk is not acceptable, then risk mitigation is required and a new vulnerability analysis should be completed.
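As a minimal sketch of that assess-mitigate-reassess loop (my illustration; the 1-to-5 scoring scale, the multiplicative score, and the acceptability threshold are all assumptions), the process might be modeled like this:

```c
#include <stdio.h>

/* Illustrative 1-5 ratings for each factor of
 * Risk = f(Threat, Vulnerability, Severity). */
struct finding {
    const char *name;
    int threat;        /* how capable/motivated is the attacker? */
    int vulnerability; /* how exposed/easy to exploit is the flaw? */
    int severity;      /* how bad is the outcome? */
};

static int risk_score(const struct finding *f)
{
    return f->threat * f->vulnerability * f->severity; /* 1..125 */
}

#define ACCEPTABLE_RISK 20 /* illustrative threshold */

int main(void)
{
    struct finding f = { "unauthenticated update interface", 4, 5, 5 };

    while (risk_score(&f) > ACCEPTABLE_RISK) {
        printf("%s: risk %d, mitigating and reassessing...\n",
               f.name, risk_score(&f));
        /* A mitigation (e.g., signed updates) lowers the exposure;
         * the vulnerability analysis is then repeated. */
        f.vulnerability -= 1;
    }
    printf("%s: risk %d, acceptable\n", f.name, risk_score(&f));
    return 0;
}
```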
Fortunately, there is significant overlap between the development practices for safety and security. The systematic and well-understood processes for safety look a lot like the processes now being widely embraced for security. In the end, this may go a long way toward making the security engineer’s task of defending the embedded device more tenable.