On the supply chain and the elephant in the room
Yesterday, I posted a short piece in which I said that
Supply chain is the elephant in the room and we need to talk more about it.
Talk about prevention, not detection. Sorry, guys.
To develop the idea a little further, I added that
we should start thinking of third-party software and hardware as insecure by default
and that software manufacturers should be obliged to perform and publish, to a reasonable extent, serious, regular and deep pentesting of the critical applications they sell (and of their updates). And even then, any third-party software or device should be considered insecure by default unless proven otherwise.
In a very insightful comment, Andrew (David) Worley gave some useful information about existing controls such as #SOC2 and other promising ones I didn't know of, such as #SBoMs and #DBoMs, and asked me for some details on the deep pentesting I had talked about.
I then wrote a 700+ word reply that LinkedIn considered too long for a comment or a post. So I came to write this.
Regarding infosec "general" assessments
I know #SOC2 and similar assessment reports but, as I see it, unless the assessment is paid for by a third party (a client, a partner), I've done it myself, a colleague I know well has done it, or I am provided with the full list of evidence, I have some reservations about that type of assessment. And the same applies to any other general control assessment, such as #ISO27002, of which I've done quite a few.
In the first place, I've always found it a little concerning that an organization pays for its own assurance reports.
This is an extract from Ross Anderson's paper, "Why Information Security is Hard – An Economic Perspective":
"For all its faults, the Orange Book had the virtue that evaluations were carried out by the party who relied on them – the government. The European equivalent, ITSEC, introduced a pernicious innovation – that the evaluation was not paid for by the government but by the vendor seeking an evaluation on its product. This got carried over into the Common Criteria. This change in the rules provided the critical perverse incentive. It motivated the vendor to shop around for the evaluation contractor who would give his product the easiest ride, whether by asking fewer questions, charging less money, taking the least time, or all of the above. To be fair, the potential for this was realized, and schemes were set up whereby contractors could obtain approval as a CLEF (commercial licensed evaluation facility). The threat that a CLEF might have its license withdrawn was supposed to offset the commercial pressures to cut corners.
But in none of the half-dozen or so disputed cases I’ve been involved in has the Common Criteria approach proved satisfactory. [...]. The failure modes appear to involve fairly straightforward pandering to customers’ wishes, even (indeed especially) where these were in conflict with the interests of the users for whom the evaluation was supposedly being prepared."
Secondly, I have reservations about this type of assessment because, whether by their nature or in order to be cost-effective, they stick to the surface. Let me explain that. They will check that the organization has a vulnerability management process in place, performs regular audit assessments that are properly managed, or has IAM controls in place, among many other controls.
And, considering the state of infosec in many organizations, that's a lot. But they won't see that a handful of vulnerabilities have been waiting in the queue for months, or that there are a dozen generic users without any sort of accountability. And that's part of the problem of cybersecurity.
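To give a flavour of the detail-level checks I mean, here is a minimal sketch (all the data is invented for illustration) that flags vulnerabilities that have sat in the queue past an SLA and accounts with nobody accountable behind them:

```python
# Minimal sketch of the detail-level checks a surface audit tends to miss:
# vulnerabilities stuck in the queue past their SLA and generic accounts
# with no individual owner. All data below is invented for illustration.
from datetime import date

SLA_DAYS = 30  # assumed remediation SLA

vulnerabilities = [
    {"id": "VULN-101", "severity": "high", "opened": date(2021, 5, 3)},
    {"id": "VULN-187", "severity": "medium", "opened": date(2021, 9, 20)},
]

accounts = [
    {"name": "jsmith", "owner": "John Smith"},
    {"name": "backup_svc", "owner": None},  # generic user, nobody accountable
]

today = date(2021, 11, 1)

stale = [v for v in vulnerabilities if (today - v["opened"]).days > SLA_DAYS]
orphaned = [a for a in accounts if not a["owner"]]

print(f"{len(stale)} vulnerabilities older than {SLA_DAYS} days:", [v["id"] for v in stale])
print(f"{len(orphaned)} accounts without an accountable owner:", [a["name"] for a in orphaned])
```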
Again, Ross Anderson's paper comes to mind:
"So information warfare looks rather like air warfare looked in the 1920s and 1930s. Attack is simply easier than defense. Defending a modern information system could also be likened to defending a large, thinly-populated territory like the nineteenth century Wild West: the men in black hats can strike anywhere, while the men in white hats have to defend everywhere."
In short: too many users, applications, systems, laptops, network segments, vulnerabilities, communication patterns, firewall rules, updates... too many things to control. As we know, the devil is in the details, and that sums up the problem we face in infosec.
However, I know #SOC2 and #ISO27002 are useful and, to some extent, a good way to measure an organization's cybersecurity posture. I didn't know about #SBoMs or #DBoMs, but they definitely look promising. I'll take a look into them.
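For readers who, like me, are new to the concept, here is a minimal sketch, assuming a CycloneDX-style JSON SBOM (one of the common SBOM formats); the file name and its contents are assumptions. It simply lists the third-party components a product claims to ship:

```python
# Minimal sketch: read a CycloneDX-style JSON SBOM and list the third-party
# components a product claims to ship. File name is an illustrative assumption.
import json

with open("product-sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "?")
    version = component.get("version", "?")
    supplier = component.get("supplier", {}).get("name", "unknown supplier")
    print(f"{name} {version} ({supplier})")
```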
The pentesting proposal
So let's go back to the pentesting idea I mentioned. It's something that just crossed my mind, and I haven't developed it much, to be honest (note to self: pending task).
That lack of development, however, doesn't mean I don't firmly believe in it:
Going back to the pentesting part, I would advocate for detailed and regular pentesting on the manufacturer's side but, mainly (since I assume the manufacturer's software cannot be trusted), on the client side: automated continuous pentesting, monitoring of any change in behavior (mostly communication patterns), and a comprehensive manual pentest for every new update or patch. I wouldn't get into reversing, but I would get as close as possible without running into intellectual property issues. The problem is that this costs a lot of money, and not many organizations can afford it.
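To illustrate the client-side monitoring part, here is a minimal sketch (the destinations and the baseline are invented assumptions; in practice they would come from firewall, NetFlow or EDR logs) that compares the outbound destinations a third-party application is currently talking to against a recorded baseline and flags anything new after an update:

```python
# Minimal sketch of client-side behaviour monitoring for a third-party
# application: compare current outbound destinations against a recorded
# baseline and flag anything new (e.g. after an update or patch).
# The baseline and observed sets below are illustrative assumptions.

baseline_destinations = {
    ("updates.vendor.example", 443),
    ("telemetry.vendor.example", 443),
}

observed_destinations = {
    ("updates.vendor.example", 443),
    ("telemetry.vendor.example", 443),
    ("203.0.113.77", 8443),  # new destination seen after the patch
}

unexpected = observed_destinations - baseline_destinations
if unexpected:
    for host, port in sorted(unexpected):
        print(f"ALERT: new outbound destination {host}:{port} - review before trusting the update")
else:
    print("No change in communication patterns since the baseline.")
```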
So a potential alternative could be (be aware, we are just following the same "crossed my mind" pattern) a set of unrelated and independent third parties, not linked to (nor paid by) the manufacturer but paid by the potential clients (government, big corporations and SMEs), that perform comprehensive and deep testing of the software.
Yes, I am aware that we are (falsely) outsourcing the supply chain risk to third parties, and that this creates its own problems, so the point is that you don't rely on just one organization but on several, from unrelated parties. That way, you have multiple control points.
With that, clients should be able to perform affordable cybersecurity testing of the product/update. I know all this is just fantasy, but I am a big fan of Queen's Bohemian Rhapsody.
In a way, it's similar to the controls browsers and antimalware apply before downloading files. It's like:
I'll check this for you, because we cannot trust the source, even if it comes from your mother.
A comment on the article:
This was a good read. Similar to third-party source code escrow, third-party audits, etc., this could be turned into a trust score. When you're talking about multiple sources vetting supply chain resources from different vectors, you could definitely pull results from those various tests/reviews/audits to help calculate a score. We humans like to quantify things so we can make numerical decisions: "it's a critical vuln with a CVSS of 9.3". In this case, we'd want to see a higher number to trust something; a lower number means question it. "This auditor and this pen tester assessed the product with an average T score of 3.8. That's low. We probably don't want to do business with this product. But that one over there has an average of 8.6. That might be more appealing to our appetites." I like the way you're building out this idea so far, Manuel Benet Navarro. I think there's enormous potential. You said you weren't familiar with S/DBoMs; I'd highly recommend you look into work by Chris Blask.
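To make the trust-score idea from the comment a little more concrete, here is a minimal sketch (the assessors, scores and threshold are all invented for illustration) of averaging several independent assessments into a single figure and comparing it against a buyer's risk appetite:

```python
# Minimal sketch of the aggregated "trust score" idea from the comment above:
# several independent assessors each score a product from 0 to 10, and the
# buyer combines them into one figure before deciding. All numbers invented.

assessments = {
    "auditor_A (SOC 2 review)": 7.5,
    "pentest_firm_B": 8.9,
    "pentest_firm_C": 8.4,
}

trust_score = sum(assessments.values()) / len(assessments)
print(f"Average trust score: {trust_score:.1f}")

THRESHOLD = 7.0  # assumed risk appetite
if trust_score >= THRESHOLD:
    print("Above our appetite threshold: candidate for purchase.")
else:
    print("Below our appetite threshold: question it before buying.")
```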