What if AI is used to autonomously run cyberattacks without human interaction?

Artificial intelligence (AI) is a powerful tool that can be used for both good and evil. In the context of cybersecurity, AI can be used to develop new and more effective ways to detect and respond to threats. However, AI can also be used by malicious actors to launch more sophisticated and devastating attacks. It is a topic I am monitoring closely!

During last week’s Fortinet Tech Seminars in The Netherlands, we had a great exchange of information with our customers about the use of AI in cybersecurity. Together with my colleague Ronald den Braven, I explored this phenomenon.

With new, sophisticated deepfakes in all sorts of languages showing up every week, it’s not hard to imagine the impact on the social-engineering side of cybersecurity. The impact, however, is much greater than that. Various ransomware groups are already using and constantly training their AI/ML models to help streamline their process. This process is not very different from how public LLMs (Large Language Models) are trained. As a result, cyberattacks move faster, are more effective and can cause far more damage to their targets.

So, what’s next? It is difficult to predict the future of AI in cybersecurity, but it is clear AI will play a major role in both defensive and offensive security. I’m willing to say the impact is far bigger than many of us imagine today!

An example:

What if AI is used to autonomously achieve goals without any human interaction? This is already possible with a number of popular public LLMs that can be chained together. Such a chain can have the AI create a “to-do” list based on the goal, then work through those tasks and/or create new tasks until the goal is reached.
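The “to-do list” pattern described above (popularised by agent projects such as AutoGPT) boils down to a simple plan–execute loop. Below is a minimal sketch of that loop; the `plan` and `execute` functions are hypothetical stand-ins for the LLM calls a real agent would make, so the example runs deterministically:

```python
from collections import deque

def plan(goal):
    """Hypothetical planner: in a real agent this would be an LLM call.
    Here it returns a fixed to-do list so the sketch stays runnable."""
    return [f"research {goal}", f"draft approach for {goal}", f"verify {goal}"]

def execute(task):
    """Hypothetical executor: a real agent would invoke tools or further
    LLM calls here. Returns any follow-up tasks the step generates."""
    return []  # this stub spawns no new tasks

def run_agent(goal, max_steps=20):
    todo = deque(plan(goal))       # the agent's self-generated to-do list
    done = []
    while todo and len(done) < max_steps:
        task = todo.popleft()
        todo.extend(execute(task))  # executing a task may add new tasks
        done.append(task)
    return done

print(run_agent("map the target network"))
```

The point of the sketch is the loop, not the stubs: once `execute` is wired to real tools, the same few lines of control flow keep working towards the goal with no human in the loop.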

Now apply this to offensive security:

What if that goal is to extract data from an organisation? Or hold it ransom? What if there is little to no human involvement anymore, including ransom negotiations? How fast would these attacks occur? At what scale? How difficult would it be to detect them? What tools would you need?

Maybe these autonomous attacks are already happening; it’s hard to tell!

That said, as a cybersecurity engineer, I think it’s critically important that we improve detection mechanisms, strengthen our defences and automate tasks where possible. AI must play a key role here as well.

How exactly this takes shape will differ per organisation, but at the very least I would recommend that every organisation:

- Doesn’t stay reactive and get buried under huge volumes of incidents; instead, have (AI-based) mechanisms in place that help you understand where the biggest risks are, so you can apply resources efficiently.

- Has (AI-based) behaviour-based detection mechanisms at both the endpoint and network level. After all, AI in offensive security is heavily used to adapt behaviour and remain undetected.

- Counters and mitigates: have tools in place to detect and respond to breaches in progress as quickly as possible.

- Is ready for audits and reporting: have strong integration between the above tools to allow for automatic/autonomous remediation and governance when needed.
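To make the behaviour-based detection idea concrete, here is a toy illustration (not any vendor’s implementation): learn a baseline of how often “normal” event types occur, then score a window of recent activity so that rare or never-seen behaviour stands out. The event names and numbers are entirely made up:

```python
from collections import Counter

def build_baseline(events):
    """Learn how often each event type normally occurs."""
    counts = Counter(events)
    total = len(events)
    return {event: count / total for event, count in counts.items()}

def score_window(baseline, window, floor=1e-6):
    """Score recent events: unseen or rare event types score close to 1."""
    return sum(1.0 - baseline.get(event, floor) for event in window) / len(window)

# Hypothetical training data: routine activity on an endpoint.
normal = ["login", "read", "read", "write", "logout"] * 20
baseline = build_baseline(normal)

quiet = ["read", "write", "read"]                      # looks like the baseline
noisy = ["spawn_shell", "disable_av", "exfiltrate"]    # never seen before

print(score_window(baseline, quiet) < score_window(baseline, noisy))  # True
```

Real products use far richer features and models, but the principle is the same: the defence learns what normal looks like, precisely because an AI-driven attacker keeps changing what the attack looks like.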

If you missed the tech seminars, or want to know more about how to deal with AI-driven attacks and what to prioritise for your organisation, feel free to reach out.
