PLCs - Will Evil Be Realized?

Novel approaches to hacking OT systems challenge our assumptions about who might exploit a system, and how. In some cases, such as #EvilPLC, the tactics and techniques get a lot of publicity (or at least attention in professional forums such as this one). But publicity does not always correlate with the realistic likelihood of an attack. In the case of Evil PLC, I was surprised to see one key aspect unmentioned in Claroty’s white paper…

Switching the PLC from Run mode to Program mode

Scenario 1

If the Evil PLC exploit(s) require the target PLC to be switched from Run mode to Program mode to implant the malicious code, then their usage in operating plants is unrealistic (assuming the IACS is running 24/7/365), because operators would immediately be aware of a production stoppage, and the PLC would be a natural starting point for their investigation. How would they respond to an apparent PLC reboot? Here’s a realistic chain of events…

  1. Operators immediately notice that everything operated by the PLC stops, particularly since process startup often involves a number of manual steps, so the process will not simply resume on its own.
  2. They connect an engineering laptop or engineering workstation (EWS) to the PLC and review the hardware’s event log to determine why the systems appeared to reset. This action does not require an upload from the PLC (thus the exploit payload is not delivered to its target).
  3. When the event log indicates that the PLC was switched from run mode to program mode, accepted a new download, and switched back into run mode, they will investigate the source of the unauthorized download. Depending on their engineering capabilities and contracted support channel, it might only take minutes for them to ascertain that it was not one of the handful of authorized persons.
  4. If they cannot ascertain who downloaded, or why, or observe that it was apparently an unknown entity (perhaps via a direct internet connection), then they are likely to re-download their own known-good logic program and consequently overwrite the exploit which had been planted. Again, the exploit is not delivered to its target.
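The triage logic in step 3 can be sketched in a few lines. This is a hypothetical illustration only: the event-log format below is invented for the example, since each PLC vendor exposes its own diagnostic/event buffer format.

```python
# Flag suspicious sequences in a PLC event log: a RUN->PROGRAM mode
# change followed by a program download from an unauthorized source.
# The event dict layout here is an assumption for illustration; real
# controllers each expose their own event-log schema.

def find_unauthorized_downloads(events, authorized_users):
    """Return download events that occurred in Program mode from a
    user not on the authorized list.

    Each event is a dict like:
      {"type": "mode_change", "detail": "RUN->PROGRAM"}
      {"type": "download", "source": "203.0.113.9", "user": "jsmith"}
    """
    suspicious = []
    in_program_mode = False
    for ev in events:
        if ev["type"] == "mode_change":
            in_program_mode = (ev["detail"] == "RUN->PROGRAM")
        elif ev["type"] == "download" and in_program_mode:
            if ev.get("user") not in authorized_users:
                suspicious.append(ev)
    return suspicious
```

In the Evil PLC scenario, the download would show an unknown user or an unexpected source address, which is exactly what would send the investigation toward step 4.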

Usage in operating plants is unrealistic

It would seem that for the level of targeted reconnaissance and collection necessary to design and implant the exploit, there is a fairly low chance of it succeeding. And depending upon the sophistication of this “drive by” internet-borne tactic, there is very little time between delivery, alert, and investigation. It gives the hacker very little time to cover their tracks… and it’s conceivable that they could even still be connected remotely to the PLC ((re)attempting the implant) as the investigation begins. How many threat actors would accept such time pressure, the risk of being caught in the act, and such low odds of success?

Scenario 2

The second scenario is a plant whose operations (whether discrete manufacturing or batch processing) might in fact have windows of opportunity where a PLC reboot would not impact the process. In this scenario, we would like to see a proactive owner following the Secure PLC Coding Practices, such as practice 17 (“Log PLC uptime and trend it on the HMI”), so they know when the PLC has been restarted.
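The detection idea behind practice 17 is simple: if the uptime counter ever goes backwards between samples, the controller restarted in between. Here is a minimal sketch of that logic. How the uptime value is actually read (register address, protocol, polling interval) varies by vendor and is not shown.

```python
# Practice 17 sketch: detect a PLC restart from a trended uptime counter.
# A restart is inferred whenever the reported uptime drops below the
# previous reading. Acquiring the samples (e.g. via the HMI historian)
# is vendor-specific and outside this sketch.

def restarted(prev_uptime_s, curr_uptime_s):
    """True if the uptime counter went backwards, i.e. the PLC rebooted."""
    return curr_uptime_s < prev_uptime_s

def scan_uptime_trend(readings):
    """Given chronological uptime samples (seconds), return the indices
    at which a restart was detected."""
    return [i for i in range(1, len(readings))
            if restarted(readings[i - 1], readings[i])]
```

Trending this on the HMI (or alerting on it) is what turns an otherwise silent reboot-and-download into an event someone investigates.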

Any facility with a robust management of change (MOC) process, or that monitors PLCs for reboots or faults, will alert on the nefarious download. Of course, part of their response will involve deciding whether to perform an upload to the EWS or simply overwrite whatever might have been put there without authorization. Similar to the first scenario, there are reasonable odds that investigation and recovery will not involve an upload, and thus will not result in exploit delivery.

Secure PLC coding practices, and MOC processes, lower the odds of the exploit being delivered

Scenario 3

Perhaps the most interesting scenario involves an authorized person with the correct knowledge, tools, motivation, and timing…

Evil PLC’s niche use case might be the disgruntled engineer working for a system integrator on a capital project which has not yet been commissioned onsite. In the hours before they quit their employment, they embed the exploit in the target PLC, knowing that their replacement engineer might upload the project file to the engineering workstation.

  • If they also delete offline project files from the EWS, then the next engineer might perform an upload and the exploit could be delivered. But deleting project files would likely be shown in audit logs and there’s a high likelihood that the perpetrator is identified and quickly held accountable.
  • If the perpetrator doesn’t delete project files (and avoids leaving that audit trail in their name), then their replacement engineer might just as likely continue their work offline (not uploading), and/or download the last/current known-good logic package to the PLC (thus overwriting and nullifying the exploit).

This tactic could seem tempting to a disgruntled systems integrator… but they will be the first ones investigated

Conclusion

At first blush, it seems unlikely that the Evil PLC tactics would be chosen. Other tactics (e.g. phishing, watering holes, insecure remote access, or even casual usage of infected USB media) could offer more system-wide avenues, and higher potential success, for the average threat actor.

I’m interested in hearing other opinions. Feel free to share your thoughts.

#icssecurity #plcsecurity #otcyberpragmatist #otsecurity #otcybersecurity

Martin Scheu

OT Security at Switch CERT | ics-cyber.ch | Autism Warrior for my Son

2y

Interesting research but, as you stated, unlikely that it is used. It also depends on the system; there are PLCs / DCS where you can upload a new program without a PLC stop, if certain parts of the code have not changed. It remains to be proven whether their inject would slip through that. Further, the research’s claim that the first step is to upload the PLC program to the EWS is unlikely: as an engineer I should have the latest code on my EWS / DCS. So when going online I will see that the code is not the one it should be, and often I cannot go online because of it. Then I proceed with Scenario 1, step 4, and just download “my” known-good program in order to be able to go online.

Dale Peterson

ICS Security Catalyst, Founder of S4 Events, Consultant, Speaker, Podcaster, Get my newsletter friday.dale-peterson.com/signup

2y

Similar to Chris Sistrunk's and Adam Crain's Project Robus in 2014, which found vulns by response fuzzing from the PLC to the device sending the request, the biggest concern would be SCADA systems, particularly those with unmanned field sites. Most asset owners that have worked seriously on their ICS for 3-5 years have made it hard to get at the control center. These attacks from the PLC in the field are a way to get in. With these clever methods, even a properly configured firewall at the field-to-control-center entry point wouldn't stop these attacks (unless perhaps it was doing ICS protocol DPI). We generally don't worry about physical access at a field site leading to a cyber attack on the field site; it's easier just to change/destroy things physically. If this physical access to an unmanned field site can compromise the control center or other field sites, it isn't a small thing, and actually something a skilled attacker focused on a hard target might try.

Hi J-D Bamford, P.E., as previously mentioned on the original Claroty post, I agree very much with you. Although my analysis wasn't as deep as yours, you covered my thoughts exactly.

Vivek Ponnada

SVP Growth & Strategy @ Frenos | OT Security | GICSP | Texas MBA | Former Nozomi | Former GE | Public Speaker

2y

Agree with you, the likelihood of a compromise happening top-down through the Purdue model is a lot higher due to phishing, lateral movement, insecure remote access, etc. Research such as this is likely helpful to improve the PLC feature set, authentication to the EWS, etc., while acknowledging that a real-world threat scenario is highly unlikely. In relative terms of likelihood, it's probably similar to scenarios compromising process values at Level 0. Not impossible, but there’s so much more low-hanging fruit elsewhere from an attacker perspective.
