ICS security is a dumpster fire. So where are the hacks?

Computers have been used in chemical plants since roughly the point at which computers* existed as a meaningful thing, which is to say, post-WW2. Like most technology, war was one of the main drivers - first to brute-force ciphers, then to make better ciphers, then to guide missiles carrying thermonuclear warheads and work out how big a boom they'd make. I still use the language designed to answer that last question. Though I'm far from an expert in this field, I'd also like to mention the design philosophy of the F-16 Fighting Falcon, whereby instead of designing a plane a human flies, the designers thought of it as something a human commands, with the lower-level details abstracted into a control system. You might say an Industrial one. The language used is that of plant control, where it just so happens that the setpoints are things like "pitch rate".

Anyway. If you talk about ICS**, a common meme comes around, which basically amounts to "this stuff is old and doesn't get updates, so it's vulnerable". Weak hash functions [link], a simple lack of authentication [doi], or the assumption that nobody would find an obscure URL (CVE-2020-7485 and CVE-2020-7491): they all appear.
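To make the "lack of authentication" point concrete, here's a sketch of a complete Modbus/TCP read request, built by hand. The transaction ID, unit ID, and register addresses below are invented for illustration; the point is what's missing - there is no credential, session, or integrity field anywhere in the protocol.

```python
import struct

def read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a complete Modbus/TCP 'Read Holding Registers' (0x03) request.

    MBAP header + PDU is the *entire* message: nothing in the protocol
    authenticates who is asking.
    """
    function_code = 0x03
    pdu = struct.pack(">BHH", function_code, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Twelve bytes is all it takes to ask a PLC for its registers.
frame = read_holding_registers(transaction_id=1, unit_id=1,
                               start_addr=0, count=10)
print(frame.hex())  # 00010000000601030000000a
```

Anyone who can reach TCP port 502 can send this; the device has no way to know, or care, who they are.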

Here's a rough timeline of what you could do with computers and an offensive mindset. I'm going to blend in, and increasingly focus on, ICS events. These aren't necessarily the best sources, and I've not included some incidents that lacked intent, namely the infection of US and German nuclear plants with e.g. Conficker and the discovery of Android malware on Airbus planes [link]. I've also not covered TRISIS, for various reasons, or two significant energy incidents in the US (CISA alerts TA18-074A and AA20-049A). PM me if you want more.

1930-40s: if you have a Bombe, life is pretty good, you can read some Nazi codes. But you probably don't have a Bombe.

1950s-60s: You can whistle 2600 Hz into phone lines and get free long-distance calls. At least one magazine still in circulation references this trick.

1987: You can, with a powerful transmitter, push a lo-fi version of Max Headroom over Doctor Who [summary]. At least one person noticed, which, given the content it was overwriting, is surprising.

1988: You can, if you understand C and Unix, write a worm that spreads by exploiting buffer overflows (the Morris worm). The rather useful explanatory paper "Smashing the Stack for Fun and Profit" came 8 years later [link].

1990s: Hackers and Jurassic Park are released (WarGames got there first, in 1983). A golden age, and Terminator 2 references neural nets, so clearly that concept is reaching the mainstream, 30-odd years after Rosenblatt's perceptron.

2000: If you are a disgruntled contractor in Maroochy Shire, Australia, who still has radio equipment and access to a sewage plant's control system, you can use them to play merry hell. Around 250 kgal of raw sewage was released into the environment in that incident [background].

2000: You can, if you understand how Windows displays file extensions (or rather, how it hides them), write some fairly simple VBScript and deploy some extreme viral marketing [summary].

2003: Remember buffer overflows? If you understand Microsoft SQL Server and the advantages of UDP, now you can take down a good chunk of the internet with them [link, but again, this one is well known].

2008: If you have an IR remote, looks like you can derail some trams in Poland [secondary source, the primary source was The Telegraph, now paywalled. The existence of a single primary source raises suspicion].

2008-10: If you understand Siemens PLCs and can develop multiple Windows 0-days, you can... frustrate, but not destroy, an Iranian enrichment program. You probably don't (see Countdown to Zero Day).

2015-16: If you understand the potential for Office macros, and can deploy disk-wiping software and malicious serial-to-Ethernet converter firmware (not a trivial task), then you can mess with the Ukrainian power grid. I, of course, recommend Sandworm by Andy Greenberg for greater detail (what is it with computer geeks and high-concept sci-fi?), and Robert Lee's CS3STHLM talk.

2014: You can maybe take down a German steel mill. The only primary source I've found for that is here. I'm not totally sold this isn't a misinterpreted description of what could happen.

2015: If you understand SQL injection, you can access an AS/400 controlling water supplies and make off with customer data. This is one of the few incidents where an attacker reached a system that affects a real chemical product's composition [link]. Note that "Kemuri" is an anonymized company name, as in the steel mill incident above.
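For the curious, SQL injection really is just string concatenation gone wrong. A minimal sketch, using an in-memory SQLite database with an invented customer table (the real Kemuri system was an AS/400, not this):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("alice", "4111-1111"), ("bob", "4222-2222")])

# The classic mistake: building the query by concatenating user input.
def lookup_vulnerable(name):
    return conn.execute(
        "SELECT * FROM customers WHERE name = '" + name + "'").fetchall()

# The fix: let the driver handle parameters, so input stays data.
def lookup_safe(name):
    return conn.execute(
        "SELECT * FROM customers WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(len(lookup_vulnerable(payload)))  # 2 - every row leaks
print(len(lookup_safe(payload)))        # 0 - payload treated as a literal
```

The payload turns the vulnerable query into `WHERE name = 'nobody' OR '1'='1'`, which is true for every row; the parameterized version just looks for a customer with a very strange name.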

2016: If you have a massive botnet you can make some IoT-reliant heating systems in Finland useless with a DDoS, which I'm sure the residents appreciated [link].

2016-17: If you understand Windows well enough to develop some 0-days, you can deploy ransomware against the NHS and the shipping giant Maersk. You probably don't, but someone leaked the 0-days, so... use them.

In fact, this really is the era of ransomware. You can write ICS-specific ransomware (EKANS). Have a look at the files: all .exes. So once again, you're really writing Windows malware. Or you can deploy ransomware and stop an aluminium plant.

2019: We never really found out what happened at India's KKNPP nuclear plant, but an APT is being blamed and, once again, we see mostly Windows domain-controller strings in the malware [link].

2021: If you understand how to use Shodan to find TeamViewer connections and can guess a password, you can alter lye concentrations in drinking water in Oldsmar, Florida. Though probably not to 11,100 ppm, and the human operator immediately counteracted the change. [source. Side note: I have to compliment Sheriff Bob Gualtieri on his delivery here.]

Also 2021: If you have a Windows-based ransomware toolkit, you can take down a major US fuel pipeline [link, but fill your boots, there's no shortage of reporting on this one]. No explosions; just no product.

---

So, what are we noticing, and what are we not?

Firstly: almost none of this malware is PLC-specific or needs to know about these crusty old ICS protocols. It's mostly Windows-based, and don't worry, this isn't an anti-Microsoft screed.

Instead, it backs up the obvious market-based hypothesis: most companies rely on Windows, in some manner, to function, so threat actors target that. It's similar to the California effect: if you want to sell a car in the US, you have to meet Californian emissions standards, and it's not worth developing a separate high-emissions car for everywhere else, even if in theory it could be marginally more profitable.

Secondly: even with the might of nation-state actors, and everything up to nuclear plants being affected, the nightmare scenario - something like Bhopal - hasn't happened.

This might just be because safety inspectors are actually capable, and won't take "we put machine learning on it!" as a guarantee. Simply put: the design of part A should assume part B breaks in the worst possible way. We don't design fuel depots to explode, but we still assume they will when it comes to siting them. The Buncefield explosion rings particularly true for me here - instrumentation failed in the worst possible way and I went to school under an enormous black cloud, but not much worse than that happened. That's not luck.

The Health and Safety Executive in the UK has indeed expressed interest and published guidance on secure software deployment. This is particularly interesting, as control flaws in software - simple accidents - have definitely led to loss of life.

For the same reason, we've seen a lot of denial of service: you shut down a plant that isn't provably safe before the regulators do it for you.

Security folk and safety folk are both good at their jobs, but [expletive deleted], safety and security don't talk. More than once I've seen the two fields explicitly contradict each other. Safety bears some blame for this: too many standards are proprietary, and worse, designed so that all buying one standard (for several hundred pounds) achieves is telling you that you need to buy two more. And then we're surprised people don't read them.

Thirdly: if hacktivists could take a fossil or nuclear power plant offline by cyber means, well, they don't seem to have bothered. In the only two potential cases I've found, the threat actor had "little apparent knowledge of how the flow control system worked" [link for Kemuri; the same press conference for Oldsmar]. That simply isn't consistent with a professional threat actor with a specific goal in mind.

The security industry exists in two lands: one in which human threat actors exist, and one that is a land of orcs and wizards.

* I'm excluding human computers here, who were of course frequently women. Link.

** Generally speaking, if you're using VxWorks, Allen-Bradley, Rockwell, Honeywell, or Siemens, you're using ICS equipment. If you're speaking Modbus, DNP3, or Profibus, you're using ICS protocols.
