Cyber Security - The Design Perspective: Mapping Organisational Controls to Operational Frameworks
In my last article, I brought the topic of evolving risk in Public Cloud environments to a logical close; the post before that addressed the industry's growing attention to external attack surface management (EASM) capability, as more organisations move into the Cloud with a hybrid footprint.
In a recent tryst with destiny, I was asked to map the organisational security controls in the Public Cloud against the Lockheed Martin Cyber Kill Chain. The exercise was initiated to understand how, against a rapidly expanding threat landscape across many enterprises, organisational controls can be utilised to mitigate and minimise the potential impacts of a compromise or breach. In contemporary parlance, as the saying goes, “it (a data breach) is not a question of if, but when.” Naturally, one must work from the premise of a successful attack, which invariably subscribes to the Lockheed Martin Kill Chain model.
There is a very important lesson I learnt during this exercise that cannot be overstated: the risks that pertain to an organisation are defined by the business, based on its appetite for them. Organisational governance should periodically assess whether such a posture remains acceptable and commensurate with the threats and risks, contrasted against the business's ambitions and interests. It is the responsibility of the engineering teams to develop the necessary capabilities, and adequate care should be exercised to ensure that the architecture and its design can balance evolving technologies against business ambitions. Operational capabilities, in turn, enhance the efficiency of that coverage through tuning derived from intelligence [1]. It is essential that all three streams work hand-in-glove to maintain a delicate and sustained equilibrium. My understanding from discussions is that contemporary requirements-gathering exercises for developing such capabilities have demonstrated an overwhelming focus on end-user requirements, which may generate long-term consequences.
The motivation for conducting an exercise of this nature is to determine the controls that mitigate the impact of an attack or breach at each stage of its propagation through an enterprise. Such an exercise serves two goals: first, to identify existing security gaps, with reference to the Lockheed Martin Kill Chain, when a specific actor objective is perpetrated against an organisation; and second, to scope strategic goals, and the means to reach them, as patterns begin to emerge in the trends of Cloud-based attacks.
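To make the intent of the exercise concrete, here is a minimal sketch in Python, not drawn from the original mapping: it pairs each Lockheed Martin Kill Chain phase with an illustrative set of deployed controls and flags any phase with no coverage at all. The control names are hypothetical placeholders, not a vetted inventory.

# A minimal sketch, assuming a hypothetical control inventory: map each
# Lockheed Martin Kill Chain phase to the controls deployed against it,
# then flag phases with no coverage at all.

KILL_CHAIN_PHASES = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]

# Hypothetical inventory; the control names are placeholders, not a vetted mapping.
deployed_controls = {
    "Reconnaissance": ["External attack surface monitoring"],
    "Delivery": ["Email filtering", "Web application firewall"],
    "Exploitation": ["Endpoint detection and response"],
    "Command and Control": ["Egress filtering", "DNS monitoring"],
}

def coverage_gaps(controls):
    """Return the kill-chain phases with no mapped control."""
    return [phase for phase in KILL_CHAIN_PHASES if not controls.get(phase)]

for phase in coverage_gaps(deployed_controls):
    print(f"Gap: no control mapped to the '{phase}' phase")

Naturally, a real inventory would carry far richer detail per control; the point of the sketch is only that uncovered phases fall out of the mapping mechanically.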
Naïve is one who interprets the existence of a capability as a guarantee of protection.
Notwithstanding, it can be useful from a strategic standpoint to recognise the deployment of controls without being too focussed on their operational efficiency.
One could argue that the model is rather primitive; Chrissy Kidd points out that it has been criticised as outdated, presenting an endpoint-based malware infection method on a perimeter-based network, a picture which no longer holds true. There are also arguments that the model is rather inflexible and does not allow certain steps to be skipped, as has been observed in a few Cloud-based data breaches in history. Mr Reeman opines that the model has never truly been adopted, as it presents a sequential approach not often seen in the digital world.
In recent times, the Unified Kill Chain has also emerged, which coalesces the stages of an attack in the Lockheed Martin Kill Chain with the various tactics observed under the MITRE ATT&CK operational framework. By uniting and extending these existing models, the Unified Kill Chain can be used to arrange the phases of an attack in order, from its beginning to its completion. In its eighteen stages, it can be used to analyse, compare, and defend against targeted and non-targeted cyberattacks, though this can be an onerous ask. Notwithstanding, the unified model can be useful for studying some of the past data breaches and attacks. However, in order to provide tangible outcomes to an enterprise, the Lockheed Martin Kill Chain is largely the more useful, though it may require some adaptation vis-à-vis Cloud-based environments; and to make that adaptation, one requires a systematic understanding of the services offered by the various cloud service providers and their equivalence relationships.
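As an illustration of the unifying idea, the sketch below pairs a handful of Unified Kill Chain phases with the MITRE ATT&CK tactic identifiers one might observe within them. The tactic IDs are genuine ATT&CK identifiers, but which tactics fall under which phase in a given incident is an analyst's judgement; the pairing shown is a hypothetical example, not a prescription of the model.

# A sketch pairing Unified Kill Chain phases (a subset of the eighteen)
# with MITRE ATT&CK tactic IDs. The phase-to-tactic pairing shown is an
# illustrative assumption for a hypothetical cloud intrusion.

from dataclasses import dataclass

@dataclass
class UkcPhase:
    name: str      # Unified Kill Chain phase name
    tactics: list  # MITRE ATT&CK tactic IDs observed in this phase

incident_timeline = [
    UkcPhase("Reconnaissance", ["TA0043"]),   # ATT&CK: Reconnaissance
    UkcPhase("Delivery", ["TA0001"]),         # ATT&CK: Initial Access
    UkcPhase("Exploitation", ["TA0002"]),     # ATT&CK: Execution
    UkcPhase("Pivoting", ["TA0008"]),         # ATT&CK: Lateral Movement
    UkcPhase("Exfiltration", ["TA0010"]),     # ATT&CK: Exfiltration
]

for phase in incident_timeline:
    print(f"{phase.name}: {', '.join(phase.tactics)}")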
Multiple cloud service providers have published their own cloud service equivalence tabulations, which have been referenced in this work; most notable are those from Microsoft Azure and Google Cloud. It is interesting to note that services across these providers are not packaged identically. For instance, while AWS utilises the GuardDuty service for malware and threat detection across services, Azure utilises the Sentinel and Defender services to varying degrees, depending on the nature of the threat. Further, some service providers also expect one to stream events to specific event message buffers, without which these services may not activate. While this is to be expected of competing providers, it can often put one at a loss when seeking clarity in service equivalence, particularly if cross-cloud migration is contemplated or, more routinely, if risk reporting across multiple environments requires normalisation. During this effort, I mapped the various CSP-developed (cloud service provider) security controls against a set of data security risks; the result is tabulated herewith, with the services listed current as of January 2024.
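To illustrate the normalisation problem in miniature, here is a sketch of a provider-neutral capability map. The service names are real offerings from the respective providers, but this particular equivalence table (and the capability keys) are illustrative assumptions on my part, and should be validated against the providers' own equivalence tabulations before any reporting use.

# A minimal sketch of a provider-neutral capability map for normalised
# risk reporting across clouds. The service names are real offerings,
# but this equivalence pairing is illustrative, not authoritative.

SERVICE_EQUIVALENCE = {
    "threat_detection": {
        "AWS": "Amazon GuardDuty",
        "Azure": "Microsoft Defender for Cloud",
        "GCP": "Security Command Center",
    },
    "siem": {
        "Azure": "Microsoft Sentinel",
        "GCP": "Google Security Operations",
        # AWS omitted deliberately: assumption that a third-party SIEM is used.
    },
}

def normalise(provider, capability):
    """Return the provider's service name for a neutral capability."""
    try:
        return SERVICE_EQUIVALENCE[capability][provider]
    except KeyError:
        raise ValueError(f"no mapping for {capability!r} on {provider!r}")

print(normalise("Azure", "threat_detection"))  # Microsoft Defender for Cloud

A structure of this shape also surfaces the packaging mismatch noted above: a single neutral capability may map to one service on one provider and to several, or none, on another.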
Until next time...
[1] One may note that human behaviours and practices play a significant role in determining the efficiency of such controls in an environment; however, this is beyond the scope of this article.