Responsibility, Resilience and the Aftermath of an Incident

Undeniably, it’s been a big month for cyber ‘events’. And they say the summer period is quiet, eh?

In cybersecurity, eras are often marked by big incidents, like WannaCry or MOVEit. There’s always the pre and post of said incident, along with musings and reflections in the immediate aftermath and then, later, on anniversaries. Experts ask: what can we learn from this? Why did it happen? What are the implications? In many ways, this is similar to a black box investigation after a plane crash. It’s human nature to want to know why and how and, critically, what we can do to build back better and safer.

Although we have already seen a few major tech incidents this year, like the Snowflake supply chain attacks, the CrowdStrike IT outage will, no doubt, be one that’s remembered for years to come. It’s important to stress that this was not a cybersecurity incident, but rather a defect in a software update shipped by a cybersecurity vendor. Nonetheless, the incident shone a light on the reliance we have, as a society, on certain software and large entities, and why, ultimately, it’s essential that we build software securely by design.

As this outage highlighted, a lot of the world’s infrastructure is dependent on a handful of companies. That concentration creates single points of failure, and with that fragility comes third-party risk.

In the wake of the CrowdStrike incident, the question of third-party access to the Windows kernel comes up once again. At present, Microsoft allows third-party software to integrate directly into the kernel. Back in 2006, Microsoft tried to restrict third parties from accessing the kernel in Windows Vista, but was met with pushback from EU regulators and cyber vendors. In CrowdStrike’s case, as outlined in its preliminary post-incident review, a bug in its content-validation testing software allowed a faulty update to reach its sensor, which runs in the kernel of the operating system, causing the widespread outages that affected Windows machines.

Microsoft is working, once again, to restrict access to the kernel, as Apple successfully did in 2020 when it deprecated third-party kernel extensions on macOS. Tech is such a fast-moving industry that it’s essential we revisit old decisions: they may have reflected the wisdom of the time, but they need review as things develop. The question of kernel access is not wholly a security issue; it is an engineering one too.

This brings us back to CISA’s Secure by Design pledge, which encourages companies to put their money where their mouth is when it comes to engineering securely. Whilst the pledge is currently opt-in, it is imperative that, in the aftermath of big events, we have a way to measure industry leaders and benchmark what good looks like. This can only come with regulation, government bodies, and interpretation of the law. In the last edition of Hacker Headspace, for example, I explored standardisation and collaboration within the cyber community broadly as a way to build resilience.

It’s essential that we build security in by design, despite the friction between engineering and security teams. Retrofitting security is difficult and expensive; equally, rebuilding from scratch is economically unfeasible for many. But the software development community needs to urgently stop starting new work in memory-unsafe languages. For new projects and frameworks, building in a memory-unsafe language is inexcusable. Schemes like the Secure by Design pledge aim to support organisations in developing new software securely.
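To make the ‘memory safe’ point concrete, here is a minimal illustrative sketch (my own example, not from any vendor’s code) of what a memory-safe language buys you. In Rust, an out-of-bounds read cannot silently return garbage or corrupt memory, as it can in C; the possibility of a missing value is forced into the type system:

```rust
// Illustrative only: a checked lookup. An out-of-bounds index is not
// undefined behaviour; it is an explicit None the caller must handle.
fn lookup<'a>(names: &[&'a str], i: usize) -> Option<&'a str> {
    names.get(i).copied()
}

fn main() {
    let names = ["alpha", "beta", "gamma"];

    // In bounds: the value comes back wrapped in Some.
    assert_eq!(lookup(&names, 1), Some("beta"));

    // Out of bounds: no crash, no stray memory read, just None.
    assert_eq!(lookup(&names, 10), None);

    println!("both lookups behaved safely");
}
```

The equivalent unchecked array access in a memory-unsafe language is exactly the class of defect that secure-by-design engineering aims to make unrepresentable rather than merely unlikely.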

My final thought: building safely is neither fast nor cheap, but not building safely is irresponsible.
