What Trend Micro gets wrong about Rogue AI
Thomas Kranz
CISO | Golden Globee® Winner Cybersecurity Consultant | Forbes Technology Council Member | Author of Award-Winning “Making Sense of Cybersecurity” | Non-Executive Director
There's been an interesting series of blog posts from Trend Micro about Rogue AI, and they're definitely worth your time to read through.
They raise some interesting discussion points, but what I want to address is their final post, which talks about what the "Security Community" is missing.
You can read this post (as well as links to the others) here: https://www.trendmicro.com/en_us/research/24/j/rogue-ai-part-4.html
The series is well worth reading in full; however, I want to call out a number of areas where I think the authors have got things wrong.
There is no such thing as rogue AI
As I spoke about in my talk at HOPE earlier this year, the security industry has long been used to new technology being dual use: a tool that can defend your organisation is also great at attacking one. AI is no different: probability scoring and event correlation make AI solutions incredibly useful for embattled cybersecurity teams. Cybersecurity defence has been a big data problem for a long time, and we're now finally seeing AI solutions that help defenders address this.
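To make that concrete, here is a minimal sketch of what "probability scoring and event correlation" can look like for a defender: score each event by how rare it is against a baseline, then correlate the scores per host. Everything here - the event names, counts, and threshold - is invented for illustration; no vendor's product is implied.

```python
# Minimal sketch: score events by rarity against a baseline, then
# correlate scores per host. All names, counts, and the threshold are
# illustrative assumptions, not any real product's schema.
import math
from collections import Counter, defaultdict

# Hypothetical baseline of (host, event) frequencies from historical logs.
baseline = Counter({
    ("svc-web", "login_ok"): 9500,
    ("svc-web", "login_fail"): 480,
    ("svc-web", "priv_escalation"): 2,
})
total = sum(baseline.values())

def rarity(host: str, event: str) -> float:
    # Rare events score high: negative log-probability, add-one smoothed.
    return -math.log((baseline[(host, event)] + 1) / (total + 1))

new_events = [
    ("svc-web", "login_fail"),
    ("svc-web", "priv_escalation"),
    ("svc-web", "priv_escalation"),
]

# Correlate: sum the anomaly scores for each host.
scores = defaultdict(float)
for host, event in new_events:
    scores[host] += rarity(host, event)

# Flag hosts whose correlated score crosses an (arbitrary) threshold.
for host, score in scores.items():
    if score > 10:
        print(f"investigate {host}: correlated anomaly score {score:.1f}")
```

Nothing in that loop is magic: it's statistics over logs, which is exactly why it works just as well for an attacker profiling a target's defences.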
Equally, we've had automation in the attack space for years. From password crackers to scanners to advanced frameworks like Metasploit, anything that can automate and speed up the process of finding and exploiting vulnerabilities is a boon to attackers. And these tools are also great for defenders: over the years we've ended up with an automation arms race to see who can be the quickest and most efficient at finding vulnerabilities.
Rogue AI doesn't exist: there are simply dual-use AI tools that can be used for either attack or defence.
AI is not a unique or different threat
At their core, AI solutions are just software. In the main they are badly written and poorly secured, but they remain software - specifically, SaaS solutions we can integrate into our existing enterprise portfolio.
We don't need to do anything special to defend against AI attacks, or protect internal AI tools. We already know how to do threat assessments and risk management.
The problem here is not that "rogue AI" is something new or different - it's that organisations do a consistently poor job of threat assessments for their software and services, and of risk management in general.
As with most of the challenges in cyber security, we don't need to be doing anything different: we just need to do the basics better. BCG's Mature Risk Management in Uncertain Times report highlights how important effective risk management is.
Using AI to attack is not a new threat model
Time and again we see the hype around AI trying to position it as some new, game-changing paradigm shift in technology - both for attack and defence.
This is nonsense. AI solutions are machine learning and automation, technologies that have been around for decades. Sure, they have been refined, and the scale of the data used to train their models is exponentially larger than it was just five years ago, but fundamentally current AI solutions remain authoritative-sounding statistics engines. They are only as good as the data their model has been trained on, and the data we feed into that model.
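A toy example makes the "statistics engine" point. Below is a minimal sketch (assuming scikit-learn is installed, with invented log lines and labels): train the same classifier on two differently labelled datasets and the identical input gets two contradictory verdicts, because the model is nothing more than the statistics of what it was fed.

```python
# Minimal sketch: the same classifier, trained on differently labelled
# data, gives opposite verdicts on the same input. Data and labels are
# invented for illustration; scikit-learn is an assumed dependency.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def train(samples, labels):
    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(samples), labels)
    return vec, clf

alert = "multiple failed logins from new ip"

# Dataset A labelled this pattern as routine noise...
vec_a, clf_a = train(
    ["multiple failed logins from new ip", "known malware beacon detected"],
    ["benign", "attack"],
)

# ...while dataset B labelled the identical pattern as an attack.
vec_b, clf_b = train(
    ["multiple failed logins from new ip", "normal daily backup job"],
    ["attack", "benign"],
)

print(clf_a.predict(vec_a.transform([alert])))  # ['benign']
print(clf_b.predict(vec_b.transform([alert])))  # ['attack']
```

The engine has no insight of its own; change the training data and you change the "intelligence".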
Slapping the AI label on a product or tool is largely a marketing ploy (AI in your phone, anyone?), and far too often such tools are used to justify hiring less experienced cybersecurity staff and paying them less. That leads to less effective threat assessment and risk analysis. We've seen how that works out, with the Sony PSN hack from almost 15 years ago the biggest example (https://www.gamespot.com/articles/sony-laid-off-security-staff-prior-to-psn-data-breach-claims-lawsuit/1100-6321043/).
For attackers, AI is not a new paradigm, it is an evolution. AI makes the entire attack process faster, easier to deploy, and easier to scale through automation. But the exploits remain the same ones organisations have failed to deal with for over four decades: poor authentication, poor monitoring, poor code quality, and treating security as a checklist afterthought rather than a fundamental part of the business.
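To show how old these bug classes are, here is a hypothetical login check with two flaws that predate AI entirely - a hardcoded credential and a timing-unsafe comparison - next to the long-established fix. The names and values are invented; this is an illustration, not anyone's production code.

```python
# Two decades-old authentication flaws and the standard fix.
# All values are invented for illustration.
import hashlib
import hmac
import os

# Flawed: credential hardcoded in source, and == can leak timing information.
def check_password_bad(supplied: str) -> bool:
    return supplied == "admin123"

# Fix: store a salted, deliberately slow hash and compare in constant time.
STORED_SALT = os.urandom(16)
STORED_HASH = hashlib.pbkdf2_hmac("sha256", b"correct horse", STORED_SALT, 100_000)

def check_password_good(supplied: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", supplied.encode(), STORED_SALT, 100_000)
    return hmac.compare_digest(candidate, STORED_HASH)
```

An AI-assisted attacker finds check_password_bad faster than a human would, but the vulnerability - and its fix - has been understood for decades.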
The security community is not corporations or academics
Trend Micro's final post looks at the efforts from OWASP, MITRE, and MIT around defining rogue AI and AI threats. Sure, these three organisations have made some big contributions to cybersecurity - but they do not even remotely represent the security community. At best, they are a tiny fraction of the research and knowledge sharing that takes place - the bulk is made up of hackers and security researchers, the majority of whom have been well aware of the potential and threat of emerging AI tools for years, and who have been openly talking about and sharing this knowledge.
The narrative that the people who sell you cybersecurity products and solutions are the ones who define the threats, and that deploying their tools will mitigate those threats, is a dangerous one, as I discussed with Hiscox 5 years ago (https://www.hiscoxlondonmarket.com/blog/all-gear-and-no-idea).
Ultimately, Trend Micro are a corporation with security tools to sell. It's in their own commercial interests to big up AI because - obviously - they are selling AI-based tools.
The problem is that if we allow corporations to over-hype threats without challenging their claims, we perpetuate the FUD cycle, where executive leadership teams buy tools and services to defend against threats defined by "fear of the new" rather than by a qualified and considered risk assessment.
Good threat assessment and risk management remain among the most important - and most basic - areas of cybersecurity defence, and yet they are still one of the biggest areas leadership teams fail to address properly. Instead of telling executive leaders to buy tools, we should be educating and supporting them to build good risk management into the core of the business. We'd see far fewer breaches if we did.