Why AI fails spectacularly at cybersecurity
Before you judge me for that headline, consider this:
If Artificial Intelligence is so powerful, why are vendors now offering human threat hunting as an additional service to their automated cybersecurity solutions?
The simple fact is: If Artificial Intelligence (AI) were truly as good as some vendors claim, humans would not need to be involved in cybersecurity at all. Instead, we would be hearing less and less about cyber-attacks in the news as time passes.
But the opposite is true.
Let’s dive into why that is. First, we’ll look at what AI is and what it’s good for, why it’s insufficient for cybersecurity, and what you should consider to protect your organization.
What is AI?
Artificial Intelligence is just a mathematical formula.
It’s a formula that can learn and improve … to a point.
No, it’s not going to just come alive one day and solve all the world’s problems or take over our lives. This is the concept of AI Singularity — where AI suddenly wakes up one day and cures cancer, solves poverty, saves the planet, and we all live forever from that moment on in a perfect utopia. It’s a juvenile belief at best and dangerous at worst. (It’s dangerous because if we think AI will save us, humans will give up on solving problems, and nothing will ever improve.)
But I digress.
So, you start with a math formula. Then, to get AI to behave the way you want, you need to feed data into it. This data contains the rules, the boundaries, and the instructions for the task at hand.
The dataset is what makes each AI program unique. For example, it's what differentiates ChatGPT from Jasper Chat and OpenAI Playground.
AI is only as good as the dataset it is given.
And that data is created by humans. As we know, we all have some level of unconscious bias, and that bias carries over into the AI.
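To make that concrete, here's a toy sketch, in Python with entirely invented data, of how the same learning procedure becomes two different "AIs" depending on what it's fed:

```python
from collections import Counter

# A toy "AI": the same simple rule learner, trained twice.
# All data here is hypothetical and only for illustration.

def train(examples):
    """Learn which words appear more often in 'bad' examples."""
    bad, good = Counter(), Counter()
    for text, label in examples:
        (bad if label == "bad" else good).update(text.lower().split())
    return {word for word in bad if bad[word] > good.get(word, 0)}

def predict(model, text):
    return "bad" if set(text.lower().split()) & model else "good"

dataset_a = [("free money now", "bad"), ("team meeting at noon", "good")]
dataset_b = [("team meeting at noon", "bad"), ("free money now", "good")]

model_a = train(dataset_a)
model_b = train(dataset_b)

# Same formula, opposite verdicts: the dataset IS the AI.
print(predict(model_a, "free money"))  # -> bad
print(predict(model_b, "free money"))  # -> good
```

Same formula, opposite verdicts. Whatever bias sits in the training data sits in the model.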
Where does AI thrive?
Ironically, Artificial Intelligence isn’t all that smart. Yes, it can learn and improve through machine learning, but only within the parameters of the given dataset.
However, AI is wonderful in specific situations where there are clear, well-defined rules and boundaries.
Take games like chess or Go: there are clear rules and very specific guidelines, and AI can practice a million times more than any human could ever hope to. So, it learns and improves within the boundaries it has been given.
But if the rules change, even a little, AI accuracy deteriorates quickly.
Where does AI start to flounder?
In general-purpose situations, AI does not produce better results than a human.
Think about the simple task of classifying images: A human can easily distinguish between animals, people, and objects.
But for AI to do the same requires an enormous amount of effort. And the images must be pristine for it to be accurate. Introduce even slight deviations and the results are far from impressive: a handful of modified pixels can be enough for an AI to mistake a remote control for a golf ball.
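For a rough sense of why, here's a toy linear "classifier" in Python (nothing like a real vision model, and the labels are invented) showing how a tiny, barely-perceptible nudge to the input flips the predicted label:

```python
import numpy as np

# Hypothetical toy: classification score = w . x
# Positive score -> "remote control", negative -> "golf ball".
rng = np.random.default_rng(0)
w = rng.normal(size=1000)            # the classifier's weights
x = np.abs(rng.normal(size=1000))    # a "clean image" as pixel values

score = float(w @ x)
label = "remote control" if score > 0 else "golf ball"

# Nudge every pixel slightly against the score's gradient (which,
# for a linear model, is just w) -- the idea behind FGSM-style
# adversarial examples. eps is chosen just big enough to flip.
eps = 2 * abs(score) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w) * np.sign(score)

adv_label = "remote control" if w @ x_adv > 0 else "golf ball"
print(label, "->", adv_label)     # the label flips
print("per-pixel change:", eps)   # while each pixel barely moves
```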
There are many documented examples of how AI has failed, over and over again.
Plus, AI is a black box.
You can never find out why it did what it did. It simply followed the rules you gave it and produced a result. There’s no explanation. No reasoning. No critical thinking. It’s up to you to figure out what happened.
To make it work more effectively, humans must do a lot of tweaking and adjusting. But you can only do so much because it cannot think for itself or infer from context.
This is why CYDEF chose not to base our endpoint security solution purely on AI. As a society, we cannot afford to wait for the technology to catch up.
Why is AI not well-suited for cybersecurity?
The Information Technology (IT) space is anything but well-defined. No two IT environments look the same: they don't have the same devices, operating systems, applications, patches, or permissions. And then you add users to the mix!
There are an infinite number of variables.
This chaotic, unpredictable environment is a criminal’s dream and an AI developer’s nightmare.
Not surprisingly, cybercriminals don’t follow rules. They’re always looking for new ways into a system, and they don’t care how they do it as long as they don’t get caught.
Sophisticated attacks often don't look like attacks at first. Instead, they start with precursors, which an AI will often score as low risk. These seemingly innocent movements provide minimal levels of access. But once criminals get in, they work to increase that access, sometimes a little at a time: making minor adjustments and lateral movements across the network until they eventually gain the privileges they need to launch their attack.
For a ransomware attack, this could mean gaining privileged access to as many devices as possible to disable security measures and backups, exfiltrate valuable information, and then encrypt all your files. Depending on your defenses and system configurations, this can take hours, days, weeks, or months.
By the time the individual activities look like an attack to the AI, it may be too late.
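Here's an illustration of the problem, with invented events and scores: if each precursor is judged on its own against a fixed alert threshold, the chain as a whole never fires.

```python
# Hypothetical precursor chain. Every score is invented for
# illustration; each event sits below a 70% alert threshold.
THRESHOLD = 0.70

attack_chain = [
    ("phishing link clicked",       0.30),
    ("new scheduled task created",  0.40),
    ("credential read from memory", 0.55),
    ("lateral movement via SMB",    0.60),
    ("backup service disabled",     0.65),
]

alerts = [event for event, score in attack_chain if score >= THRESHOLD]
print("alerts raised:", alerts)  # -> [] : no single step crossed the bar
```

Scored one event at a time, a textbook ransomware build-up raises zero alerts.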
How can we expect an AI to predict how a criminal will bypass your defenses?
Unfortunately, past malicious behavior is not a good indicator of future malicious behavior, so AI models built on it struggle with attacks they have never seen before.
Does that mean AI has no place in cybersecurity?
Not at all. AI is excellent at stopping threats we already know about.
This is where threat intelligence comes in. Threat intelligence is knowledge about existing threats, collected and analyzed by the cybersecurity community.
Your antimalware (AV) is an excellent example of the efficient use of AI. Humans feed the AI with threat intelligence, and your AV blocks those threats as they appear, in real time, with a low rate of false positives.
It would be ridiculous for humans to do that kind of work. AI can do it much faster and much more accurately because there are rules and specific profiles the AV is looking for. The AV can be taught which actions to perform to stop and contain a malicious program before it can accomplish its objective.
This is where AI thrives.
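As a minimal sketch of the principle (hash matching is only one narrow slice of what a real AV engine does, and the hash below is a placeholder, not a real indicator):

```python
import hashlib
from pathlib import Path

# Hypothetical threat-intelligence feed: SHA-256 hashes of known
# malware. Real engines also match signatures, heuristics, and
# behavior profiles, but the principle is the same: well-defined
# rules, applied fast and at scale.
KNOWN_BAD_HASHES = {"0" * 64}  # placeholder entry

def matches_threat_intel(path: Path) -> bool:
    """Return True if the file's hash appears in the feed."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

for f in Path(".").glob("*.exe"):
    if matches_threat_intel(f):
        print(f"BLOCK {f}: matches known threat intelligence")
```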
Many (most?) of today's endpoint security solutions, like EDRs (endpoint detection and response), are built around threat intelligence. This makes them great at detecting known threats, but unreliable with unknown or new threats.
Many vendors claim they have the best AI-powered cybersecurity solution on the market.
But if AI were as good as they say, we'd be on our way to completely eradicating the need for humans in cybersecurity, or at least we'd see a significant decrease in successful cyberattacks.
The reality is that when AI produces less-than-ideal outcomes, meaning it can’t reliably automate a response, vendors have two options: ignore the result or create an alert for a human to review.
Even ChatGPT agrees that AI is not sufficient on its own.
So, what’s your best option?
No solution can claim to keep your organization 100% safe from cyber threats. And despite the claims of “all-in-one” solutions, there is not one vendor that can do everything.
The reality is that you need to create a layered, defense-in-depth environment using solutions that complement each other.
There’s an alarming gap in endpoint security.
Problem #1 – AI cannot outsmart humans
As we’ve explored in this article, automated tools alone cannot outsmart human criminals. To stop cyber criminals, we need humans to review the telemetry collected. But can that be done effectively? This brings us to our second problem.
Problem #2 – A significant percentage of possible threats are ignored
Most endpoint solutions use risk scoring as a way to prioritize alerts that should be reviewed by cybersecurity experts.
However, the volume of alerts means that many organizations will define a risk score below which they don't investigate alerts. Whether that risk score is 70%, 50%, or 30%, it means not every alert is investigated.
It also implies that organizations and the experts using the tools MUST trust the tool. But what if the tool makes a mistake?
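The arithmetic behind this is blunt. A sketch with simulated scores (real alert-score distributions vary, so treat the percentages as illustrative only):

```python
import random

# 10,000 simulated alerts with uniformly random risk scores.
random.seed(1)
alerts = [random.random() for _ in range(10_000)]

for cutoff in (0.70, 0.50, 0.30):
    ignored = sum(score < cutoff for score in alerts) / len(alerts)
    print(f"cutoff {cutoff:.0%}: {ignored:.0%} of alerts never investigated")
```

Wherever you set the bar, everything under it is never looked at by a human.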
Problem #3 – Most SLAs are misleading
Managed service providers like to say they respond quickly to incidents, and may even offer a Service Level Agreement, or SLA, to prove their point.
However, the devil is in the details.
For example, an SLA based on a response time isn’t useful if it’s not combined with an SLA covering the investigation and threat hunting performance.
What do I mean by that?
In a typical managed detection process, multiple actions need to take place between receiving telemetry from your systems (call it Step 2) and a human confirming an incident and notifying you (Step 6).
The critical piece here is that incidents are confirmed by humans: A notification and response are initiated AFTER human intervention. If the "Notification SLA" is 15 minutes for a high-severity incident, all this means is that a human has 15 minutes to inform you once they get to Step 6.
How long does an email take to reach you? Is this a useful SLA? The same goes for a response time SLA, as it starts at the same point.
But why don’t they start to measure the time it takes to investigate, or do threat hunting, from the point where they receive the telemetry in Step 2?
Because they can't, or it's not to their advantage: they don't know when they will have the threat intelligence needed to hunt for new threats. This means a threat may exist in your environment for minutes, hours, days, or even months.
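To see why the measurement point matters, here's a small timeline calculation (all timestamps invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical timeline for one incident.
telemetry_received = datetime(2023, 5, 1, 9, 0)    # "Step 2"
incident_confirmed = datetime(2023, 5, 3, 16, 30)  # "Step 6", by a human
customer_notified  = incident_confirmed + timedelta(minutes=12)

notification_delay = customer_notified - incident_confirmed
dwell_time         = customer_notified - telemetry_received

# The 15-minute notification SLA is comfortably met...
print("SLA met:", notification_delay <= timedelta(minutes=15))  # True
# ...while the threat was live in the environment far longer.
print("telemetry to notification:", dwell_time)  # 2 days, 7:42:00
```

A 15-minute SLA, met with time to spare, while the threat sat in the environment for more than two days.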
Going back to the SLA, the provider needs a repeatable process with manageable outcomes, otherwise there’s a lot of risk of presenting results that don’t meet expectations. Add the fact that SLAs have penalties attached to them, and it’s not a viable option in many cases.
Problem #4 – Ignoring more low-scoring threats makes a team seem more productive
You need people with the right skills to get the most out of these solutions. While this is true for any technology, the amount of knowledge and experience required means operational costs are higher.
How do you stretch your budget? Go back to problem #2 and adjust your risk score to something higher, and voilà! Your team can monitor more endpoints.
But what are they missing?
Therein lies the gap.
CYDEF solves these problems.
And, in my admittedly biased opinion, the solution is mind-blowing in its simplicity.
Here’s what we’ve done to solve the problems listed above:
Solution #1 – Combine AI and people by using their strengths
We've built our SMART-Monitor platform from the ground up to enable people (yes, people) to make the final decision about what's good, malicious, or undesirable. Our AI technology presents information to those humans as efficiently as possible. We also believe that antimalware solutions can stop known threats on endpoints.
Our objective: To catch what antimalware, and all previous security layers, missed.
Solution #2 – We collect less data
Modern operating systems and applications generate an enormous amount of telemetry. It's easy to believe that if data exists, there's a purpose, and we should try to use it. However, not all telemetry can be used to confirm whether an activity is legitimate.
We’ve identified specific telemetry we can use to understand what activities are happening on a given system. These activities (application and process behavior analytics) are then reviewed through automation: Why ask someone to look at an activity we’ve already seen before? Fewer activities to review means less work to be done.
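A minimal sketch of the idea (not CYDEF's actual implementation; the activities and verdicts below are invented):

```python
# Activities already classified in the past.
seen = {
    ("chrome.exe", "network_connect"): "expected",
    ("svchost.exe", "service_start"):  "expected",
}

# Today's incoming activity stream.
incoming = [
    ("chrome.exe", "network_connect"),
    ("miner.exe", "high_cpu"),        # never seen before
    ("svchost.exe", "service_start"),
]

for activity in incoming:
    if activity not in seen:
        print("send to human analyst:", activity)
    # activities with a known verdict consume no analyst time
```

Only the genuinely new activity reaches a human; everything already classified is handled automatically.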
Minimizing the data we collect has other advantages: a smaller footprint on the endpoint (RAM, CPU), less bandwidth required, and, in the cloud, reduced processing and storage and improved response times.
Overall, these reductions contribute to a more cost-effective model.
Solution #3 – We do threat hunting differently
Threat hunting is defined by some (including IBM) as "an art," meaning the process and results vary depending on someone's knowledge, skills, and time.
But art doesn’t guarantee consistent results.
So we’ve built a process, which is part of the SMART-Monitor platform. Don’t worry, you don’t have to learn how to use it.
If you don’t have to learn it, why are we talking about it?
It's what makes us efficient, effective, and affordable. We have a rigorous, repeatable process that lets our team review ALL unknown activities, classify them as expected, undesirable, or malicious, and perform incident response activities as required.
But you don't need to take our word for it; you can verify what we do.
Solution #4 – Performance objectives that make sense
Unlike our competitors, we measure how much time it takes our team to investigate all the anomalies found on your endpoints. This means we can build incident response metrics that match your risk profile. Does the cost of our service vary based on this metric? Yes, it does. But for now, we believe it's better to offer complete coverage at different speeds and be transparent than to create metrics that don't actually help you.
Cybersecurity is a journey
While ransomware, malware, and phishing make headlines, our experience shows that over 90% of the incidents we identify for our customers are related to what we categorize as policy violations. These are activities that shouldn’t happen in a business environment, on corporate systems: crypto mining, games, spyware, and so on.
Why do we take the time to document incidents and ask our clients to address them?
Because there’s a pattern: Your security policies and awareness training inform your staff on what they can and cannot do with the devices you provide them. But does that always work out perfectly?
You may have Windows group policies and other technologies to control devices, but do you know if they’re working? Do you know if your automatic antimalware and system updates are being applied to ALL of your devices?
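If you wanted to start answering that question, even a crude inventory check helps. A hypothetical sketch (in practice, this data would come from your endpoint management tooling):

```python
# Hypothetical device inventory.
devices = [
    {"name": "LAPTOP-01", "av_updated": True,  "patch_level": "2023-05"},
    {"name": "LAPTOP-02", "av_updated": False, "patch_level": "2022-11"},
    {"name": "SERVER-01", "av_updated": True,  "patch_level": "2023-05"},
]

CURRENT_PATCH = "2023-05"
needs_attention = [
    d["name"] for d in devices
    if not d["av_updated"] or d["patch_level"] != CURRENT_PATCH
]
print("devices needing attention:", needs_attention)  # -> ['LAPTOP-02']
```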
We help customers see these trends. Our objective is not only to stop malicious attacks, but also to help organizations improve their security posture over time by reporting on what's happening on their devices.
These trends help our customers decide where they need to invest to improve their security posture. It might be in improved device controls, process improvements, or staff training.
Want to learn more?
Whether you're a service provider, large enterprise, or SMB, learn how we can help you protect your organization, and see for yourself why we have a 98% customer retention rate.
This article was originally published here https://cydef.ca/blog/why-ai-fails-spectacularly-at-cybersecurity/ by Steve Rainville, CEO, CYDEF.