Standards: ISA and IEC and OG, Oh my

The cybersecurity space for control systems has no shortage of frameworks (I had a hard time choosing the above title over referencing the xkcd on standards proliferation).

(Fig 1. Aren't we special? With thanks to Randall Munroe)

Many of these frameworks, however, are policy-based, meaning they don't actually tell you how to determine whether a particular system is secure or not. (For reference, today's inspiration was OG86, published by the Health and Safety Executive; it's not the only example.)

It's easy for technically-minded people like me to scoff at such "high-level" regulation, but it is a result of a real problem facing regulators: change. Imagine the following scenario:

  • You have a plant that is supplied with, and pipes around, some Hazodium with corrosive impurities. Imagine the stuff that gave the Joker his signature look.
  • A technically competent person writes into law: the filters in your Hazodium lines, which catch corrosion debris (and corrode themselves in the process), must be replaced every 12 months. This is a good law: at the time it's written, there's a good chance a filter will be spent before then.
  • Fifty years pass. Your pipes are made of better steels, so there's less corrosion product to catch. Your filters are better. Your feedstock is purer. Overall, for a little more money you could buy filters that last the life of the plant, and given a free hand, you would.
  • But you're still changing (now perfectly pristine) filters every 12 months. On some parts of the plant you may only have filters at all so that you can change them: the impurities and corrosion product that once required filtering are gone, but the law says the filters must be there to be replaced. And here's the thing: yes, skipping the change is cheaper, but it's also much safer not to expose your workers to Joker-making chemicals.

So overall, yes, there's a good argument for high-level regulations that simply say "do X right" and delegate the implementation. It still has the whiff, however, of security theatre, especially when the delegated standards are a) themselves policy-based rather than implementation-based, and b) cost money, so fewer people read them. There's a real danger that such an approach merely obfuscates, rather than eliminates, bad practices.

It doesn't really flow here, but I have to mention the "Safety Case on a Page" and bowtie diagrams, which I came across via Sir Charles Haddon-Cave QC's Nimrod Review (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/229037/1025.pdf). It's heavy reading, but you won't regret it. There's a real effort made there to make high-level safety concerns comprehensible, so that the smart lay person can genuinely contribute rather than being an asset to be managed.

I propose the following test for standards - let's call it "security standards fizzbuzz". Imagine I have a system implementing the user verification algorithm below (in any language), where a verified user controls a real, reasonably important actuator. This is very similar to a real verification algorithm run by real baseboard management controllers, which in turn means it's quite probably in PLCs somewhere.

Pick your favourite ICS security framework and demonstrate why said system is in breach. The fewest documents read, and the fewest GBP spent, wins.

def validate_pw(entered_pw, user_pw):
    # zip() stops at the shorter string, so any matching prefix of the
    # real password (including an empty guess) validates successfully.
    for entered_char, user_char in zip(entered_pw, user_pw):
        # Returning at the first mismatch means the time taken leaks
        # how many leading characters of the guess were correct.
        if entered_char != user_char:
            return False
    return True

Fig 2. A not-very-good password verification algorithm. And yes, "you must use two-factor" is a reason why this is in breach.
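
For contrast, here's a minimal sketch of the timing leak and one standard-library fix. The secret value and iteration count below are made up for illustration, and absolute timings will vary by machine; hmac.compare_digest is Python's stock way to compare secrets in time independent of where they first differ.

import hmac
import timeit

def validate_pw(entered_pw, user_pw):
    # The Fig 2 algorithm, repeated here so this snippet runs standalone.
    for entered_char, user_char in zip(entered_pw, user_pw):
        if entered_char != user_char:
            return False
    return True

def validate_pw_constant_time(entered_pw, user_pw):
    # Takes the same time wherever the inputs differ; both arguments
    # must be bytes or ASCII-only str.
    return hmac.compare_digest(entered_pw, user_pw)

SECRET = "correcthorsebatterystaple"  # hypothetical stored password

# A guess that's wrong at the first character returns immediately;
# one that's wrong at the last character walks the whole string.
wrong_early = "X" + SECRET[1:]
wrong_late = SECRET[:-1] + "X"

for guess in (wrong_early, wrong_late):
    t = timeit.timeit(lambda: validate_pw(guess, SECRET), number=500_000)
    print(f"{guess!r}: {t:.3f}s")
# On a quiet machine the second guess is reliably slower: exactly the
# side channel an attacker can measure, character by character.

Whether your chosen framework lets you articulate the difference between these two functions is, more or less, the test.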
