Memory-lane: deliberate backdoors and ineffective bug fixes

Recently, I was reading an article on LWN about Cisco addressing two CVEs with something less than a real remediation, but it also brought back old and new memories that I thought I’d rant about:

  • Correct problem solving to find valid solutions (cough it seems even Boeing is struggling with this today)
  • And the human/vendor desire to add a backdoor for “special” fixes/diagnostics

While I can't say where I have seen this specifically, we can look to a number of similar issues plaguing various home routers, debug/programming interfaces left on PCBs, and certainly even some PLCs (the Siemens clapping monkey debacle). And if I were to stretch it further, it might even look like how operators leverage default/unlocked admin access for those "break glass" conditions.

Now with my executive hat on, I'd probably cringe when any number of new critical vulnerabilities are published. In fact, the last thing I'd want is additional CVEs only to wind up in the "swamp" most organizations likely own. But, of course, my preference would be to focus on what is remediable, not just after the fact (although that's a big aspect of OT security that we do right at Verve), but also by managing and monitoring the risks that lead to an eventual and damaging OT cyber incident. Fortunately, these concerns about secure products/implementations are starting to make their way into procurement contracts...

Now with both executive and development hats on, "fixing the fixes" can generally be done in two ways:

  1. Better engineering and engineering by consequence (and this would solve half or more of common software vulnerabilities)
  2. Closing the loop between identification and remediation from within an organization’s deployment with additional layers of security & controls

Now obviously the first part of the solution (and likely the cheaper one) resides on the vendor, developer, and integrator side of things. And the second, unfortunately, falls upon the organization that has deployed vendor X's solutions.

If a poor fix were made to a safety-critical system in an aircraft, we would be chasing vendor X to provide a solution until they fixed it properly. Conversely, if our car has a known deficit (front-wheel drive (FWD) in the snow - not fixable without extensive modification), then we resort to protecting it with a garage, operating it with more care, or fitting high-performance snow tires. This latter example of a small economy FWD car is more akin to the real OT world: it's been bought, it's in use, and it is not going anywhere anytime soon - just like our already deployed PLCs, etc...

To protect the vulnerable or inherently insecure systems, clearly we will need to:

  • Monitor deployed solutions adequately (and their protecting layers)
  • Manage configurations and changes vigilantly (see the sketch after this list)
  • Fix vulnerabilities where possible (fixes may not exist, or OT constraints may prevent patching)
  • And add extra layers of protection to ideally prevent any malicious activity from penetrating and negatively affecting the OT processes
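As a rough illustration of the configuration-management bullet above, here is a minimal Python sketch that flags configuration drift by comparing fresh device config exports against known-good baselines. The directory names and the idea of flat `.cfg` exports are assumptions of mine for illustration; a real OT environment would pull configs through vendor tooling or a dedicated platform rather than an ad-hoc script.

```python
# Minimal sketch: detect configuration drift by comparing current device
# config exports against known-good baselines. Directory layout and file
# names are hypothetical, chosen only to illustrate the idea.
import hashlib
from pathlib import Path

BASELINE_DIR = Path("baselines")   # known-good config snapshots
CURRENT_DIR = Path("exports")      # freshly exported device configs


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def find_drift() -> list[str]:
    """List devices whose exported config no longer matches the baseline."""
    drifted = []
    for baseline in BASELINE_DIR.glob("*.cfg"):
        current = CURRENT_DIR / baseline.name
        if not current.exists():
            drifted.append(f"{baseline.stem}: no current export found")
        elif sha256_of(current) != sha256_of(baseline):
            drifted.append(f"{baseline.stem}: config differs from baseline")
    return drifted


if __name__ == "__main__":
    for finding in find_drift():
        print("DRIFT:", finding)
```

Even a check this crude makes the point: a change you didn't authorize is a risk signal worth investigating before it turns into an incident.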

Then for the software side of things, we (along with the organizations and people that work with CVE-related processes) must also validate any remediations, if they are made, ensuring an end-to-end process. While that doesn't exist for the majority of software applications or industries, the effort and responsibility fall upon the engineers, QA teams, and product owners to ensure the fix is adequate - after all, you never know when litigation is around the corner or who is at fault when the blame game comes to town... it's all about "reasonable" due diligence and effort embedded into holistic secure development/VM programs. Is the gap in process, in the business culture itself, in a lack of engineering principle/ethics, or simply incompetence? The best thing we can do then is to introduce automated destructive testing where possible and pull humans out of the loop, I guess ;)
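To make the "validate the remediation" point concrete, here is a minimal, hypothetical regression test. The `authenticate()` function and the hidden "service" account are stand-ins of my own invention for a vendor backdoor that a fix claims to remove; the pattern is simply to pin the old exploit path into the automated test suite so the fix can't silently regress.

```python
# Minimal sketch of a regression test that pins a remediation in place.
# authenticate() and the "service" account are hypothetical stand-ins for
# a backdoor that a vendor fix claims to have removed.
import unittest

VALID_USERS = {"operator": "correct horse battery staple"}


def authenticate(username: str, password: str) -> bool:
    """Post-fix authentication: only provisioned accounts are accepted."""
    return VALID_USERS.get(username) == password


class TestBackdoorRemediation(unittest.TestCase):
    def test_legacy_service_account_rejected(self):
        # The pre-fix exploit: a hidden "service" account with a static password.
        self.assertFalse(authenticate("service", "letmein"))

    def test_provisioned_account_still_works(self):
        # The fix must not break legitimate access.
        self.assertTrue(authenticate("operator", "correct horse battery staple"))


if __name__ == "__main__":
    unittest.main()
```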

Now to circle back to my title: fixing deliberate backdoors and incorrect fixes, we can see, is quite complicated as well. It seems that the correct solution, as an organization that develops software, is to adequately secure the upgrade path from both hardware and software perspectives. This means that proper cryptography/public key infrastructure (PKI) and software best practices (including secure continuous development/continuous integration) must be applied. Or even that the concepts of "secure" code become a part of the software development curriculum, organizationally or academically... Sadly though, this is not the case, and StackOverflow too succumbs to quick and dirty examples...
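For what "securing the upgrade path" can look like at its simplest, here is a sketch of verifying a detached Ed25519 signature on a firmware image before accepting it. It assumes the Python `cryptography` package, and the key handling is deliberately simplified: a real device would keep the vendor public key in ROM or a secure element, and would also need rollback protection, versioning, and so on.

```python
# Minimal sketch: accept a firmware image only if its detached Ed25519
# signature verifies against the vendor's public key. Assumes the
# `cryptography` package; everything else here is illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def verify_update(public_key: Ed25519PublicKey, image: bytes, signature: bytes) -> bool:
    """Return True only if the image matches the vendor's detached signature."""
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Self-contained demo with a throwaway key pair; in practice only the
    # vendor holds the private key and only the public half ships on-device.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    image = b"example firmware payload"
    signature = private_key.sign(image)

    print(verify_update(public_key, image, signature))              # True: accept
    print(verify_update(public_key, image + b"tamper", signature))  # False: reject
```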

Needless to say, proper engineering may mean either removing the ability to make quick and dirty fixes/functionality, or removing these little backdoors altogether by embedding the diagnostics/features as part of the real product. This would benefit both the user and the product by reducing the potential attack pathways by at least one. No more backdoors, only one update/upgrade process, and only one way to improve the product experience. It's not that hard... seriously.

For the naysayers arguing about the convenience and necessity of these remote-access/local backdoors - I argue that they don't even need to be present if the system has been architected correctly in the first place; there is always a way to make an upgrade process work flexibly enough... I guess I won't win against business or human motives, so bring on effective VM programs that make sense in the OT and converging IT/OT realms (e.g., Verve's closed-loop VM strategy).

To conclude my little rant on fixing fixes right and removing backdoors in software/hardware... just don't let it happen in your organization, and whether you are a product owner or an end user, assume backdoors exist, and at least ensure multiple layers of protection to help prevent misuse of these entry points.

