Hype vs Reality: the offensive application of AI across the cyber kill chain. Part II.


In this second part exploring the application of artificial intelligence (AI) across the cyber kill chain, I introduce the concept of AI and the Lockheed Martin Cyber Kill Chain, and discuss why AI-enabled cyber-attacks could pose an existential danger.

At a conceptual level, AI can be described as a set of capabilities used to perform tasks that have historically required skills considered uniquely human [1]. This includes the ability to analyse and understand data, use language to creatively solve complex problems, and perform rational actions that mimic those of a human being [2]. In applying this definition to offensive cyber security, true AI needs a distinct set of characteristics. One, it acts as an agent situated within an environment. Two, as an agent, it can act and interact to influence that environment. Three, it works autonomously towards a goal. Four, its activities are continuous over time. And last, the agent must possess the ability to learn and modify itself, or 'morph' [3].
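To make these characteristics concrete, below is a minimal conceptual sketch in Python of such an agent loop. It is purely illustrative: the environment, goal and learning step are hypothetical placeholders drawn from the definition above, not an implementation of any real offensive capability.

# Conceptual sketch of the five agent characteristics described above [3].
# The environment and goal objects are hypothetical placeholders, not a real API.

class AutonomousAgent:
    def __init__(self, environment, goal):
        self.environment = environment  # (1) situated within an environment
        self.goal = goal                # (3) works towards a goal
        self.policy = {}                # internal state the agent can modify

    def step(self):
        observation = self.environment.observe()          # (2) interacts with
        action = self.policy.get(observation, "explore")  #     its environment
        feedback = self.environment.act(action)
        self.learn(observation, action, feedback)         # (5) learns, 'morphs'

    def learn(self, observation, action, feedback):
        if feedback.get("improved"):
            self.policy[observation] = action  # reinforce what worked

    def run(self):
        while not self.goal.reached(self.environment):    # (4) continuous
            self.step()                                   #     over time

The point of the sketch is not the code itself but the shape of the loop: observe, act, learn, repeat, with no human in the loop once the goal is set.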

If we take these characteristics and view true AI as an intelligent, decentralised, interactive and cognitive agent, then it stands to reason that the weaponisation of AI could pose serious threats to society and national security should it go rogue. To understand where AI can be weaponised, it is useful to introduce the Cyber Kill Chain (CKC). First developed by Lockheed Martin in 2011, the CKC represents the sequence of actions an attacker works through to achieve their ultimate objectives [4]. The sequence starts with reconnaissance and moves through weaponization, delivery, exploitation, installation and establishing command and control, before finally acting on the objectives [5]. The MITRE ATT&CK Matrix presents a more granular version of this model.

Figure: The Cyber Kill Chain (source: Lockheed Martin)
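For readers who prefer code to prose, the seven stages can also be expressed as a simple ordered structure. A minimal sketch in Python; the stage names follow the Lockheed Martin model [5], and the snippet is illustrative only:

from enum import IntEnum

class KillChainStage(IntEnum):
    # The seven stages of the Lockheed Martin Cyber Kill Chain, in order [5].
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

for stage in KillChainStage:
    print(f"Stage {stage.value}: {stage.name.replace('_', ' ').title()}")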

The malicious application of AI across the CKC is concerning. Over the last few years, cyber-attacks have already become more frequent, costly and destructive. In Australia, a cybercrime is now reported every seven minutes [6], and financial losses can range from tens of thousands into the millions of dollars [7]. Cyber-attacks against institutions such as Medibank [8] and Optus [9] illustrate that no sector is immune; those incidents cost an estimated AUD $46 million and $140 million respectively. More concerning still, cyber-attacks continue to grow along new attack vectors, and their sophistication is increasingly driven by intelligent technology [10]. Joint academic and industry research since 2018 reinforces this view: as AI capabilities become more powerful, they will become more widely adopted by adversaries [11]. The subtext is that AI will introduce new cyber threats, that these threats will become more targeted and effective, and that as the cost of mounting attacks decreases, their frequency will increase.
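As a quick sanity check, 'one report every seven minutes' implies roughly 75,000 cybercrime reports a year, which is consistent with the volumes in the ACSC's 2021-22 report [6]:

minutes_per_year = 365 * 24 * 60           # 525,600 minutes in a year
reports_per_year = minutes_per_year / 7    # one report every 7 minutes
print(round(reports_per_year))             # ~75,086 reports per year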

The consensus suggests that the new generation of cyber threats will be better, faster and smarter, and will feature permanently in the contemporary landscape. But the notion that AI could be weaponised to traverse the entire CKC, and do so with relative autonomy, warrants more detailed exploration.

Stay tuned for the next part in this series as I explore the offensive application of AI against the Reconnaissance stage of the CKC. I welcome commentary from any researchers, practitioners and enthusiasts who would like to share their views.


Adam Misiewicz is an experienced cyber security consultant and the General Manager of Cyber Security at Vectiq - a Canberra-based services company.


[1] Chomiak-Orsa, I., Rot, A. and Blaicke, B., 2019, August. Artificial intelligence in cybersecurity: the use of AI along the cyber kill chain. In International Conference on Computational Collective Intelligence (p. 407). Cham: Springer International Publishing.

[2] Ibid, p. 407.

[3] Guarino, A., 2013, June. Autonomous intelligent agents in cyber offence. In 2013 5th International Conference on Cyber Conflict. (CYCON 2013) (pp. 3-4). IEEE.

[4] Dalziel, H., 2015. 'Cyber Kill Chain' (Chapter 2). Securing Social Media in the Enterprise, pp. 7-15.

[5] Ibid, pp. 7-15.

[6] Australian Cyber Security Centre, 2022. Annual Cyber Threat Report, July 2021 to June 2022. Available at: https://www.cyber.gov.au/about-us/reports-and-statistics/acsc-annual-cyber-threat-report-july-2021-june-2022

[7] Ibid.

[8] Australian Cyber Security Centre, 2022, Medibank Private Incident, Available at: https://www.cyber.gov.au/about-us/alerts/medibank-private-cyber-security-incident

[9] Australian Cyber Security Centre, 2022, Optus Data Breach, Available at: https://www.cyber.gov.au/about-us/alerts/optus-data-breach

[10] Sharif, M.H.U. and Mohammed, M.A., 2022. A literature review of financial losses statistics for cyber security and future trend. World Journal of Advanced Research and Reviews, 15(1), p. 140.

[11] Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B. and Anderson, H., 2018. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228, p. 5.
