Observations from Black Hat
Steve King, CISM, CISSP
Cybersecurity Marketing and Education Leader | CISM, Direct-to-Human Marketing, CyberTheory
The thing that jumped out immediately as I walked around the exhibition floor at this past week's now-huge Black Hat cybersecurity conference in Las Vegas was the clear messaging that machine learning and artificial intelligence are the new saviors of cybersecurity.
Oh, and the other thing that can't be ignored is the striking absence of women in the cybersecurity community.
I certainly understand the info-security industry's hunger for algorithms, or any panacea, in this nightmare storm of incessant cyberattacks. It is even more understandable when you consider that we haven't even begun to attach to the Internet all of the stupid devices we plan to connect over the next five years. Combine that with the absolute absence of newly trained cybersecurity analysts and the acute dearth of available talent in the existing marketplace, and we should be terrified.
The notion of being able to leverage machine learning and AI to help automate threat detection and response is like a narcotic. The promise of easing the load on existing analysts and the potential for real-time detection of badness that far outperforms anything we have seen before is understandably breathtaking.
But, and the caveat here is that this might simply be the cynical response of someone who has spent far too much time in the trenches of both cyber-warfare and business development, it feels a lot like many of these products are being rolled out to satisfy a market demand that was self-perpetuated by the very companies doing the rollouts. If I were a customer starved for help and thus fully committed to the AI/ML siren song, I might also fall for the magic of one dazzling, dexterous hand performing feats of amazement while the other hand remains hidden behind the curtain.
(Did I mention the glaring absence of women?)
Many of the products being rolled out rely on “supervised learning,” which requires the vendors to select and define the data sets on which the algorithms are trained, differentiating between code that contains or represents malware and code that doesn't. The question the vendors universally stumbled over was, “Does your training data contain realistic anomalies, or is it so clean that if a cyber-attacker switched the labels, the algorithms would miss a real attack because they had learned to assume that anything labeled clean is clean?”
Another tricky question is, “If the bad guys could replicate your data models and then remove the tags, would you be able to detect the switch?”
“Uhm,” was the typical response, followed by “we are still in extensive beta testing mode, but so far we are hitting 99%.” Whatever that means.
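To make the label-switching worry concrete, here is a minimal sketch of what a poisoned training set does to a toy detector. Everything in it, the synthetic features, the scikit-learn classifier, the one-third flip rate, is my own illustration, not anything a vendor demoed on the floor:

```python
# Toy demonstration of label-flipping poisoning in a supervised detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "feature vectors": malicious samples drawn from a shifted distribution.
X_benign = rng.normal(0.0, 1.0, size=(1000, 8))
X_malware = rng.normal(1.5, 1.0, size=(1000, 8))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = clean, 1 = malware

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy, clean labels:   ", clf.score(X_test, y_test))

# An attacker who can tamper with the training set flips a third of the
# malware labels to "clean" -- the model learns that those malicious
# patterns are benign and quietly stops flagging them.
y_poisoned = y_train.copy()
malware_idx = np.where(y_poisoned == 1)[0]
flip = rng.choice(malware_idx, size=len(malware_idx) // 3, replace=False)
y_poisoned[flip] = 0

clf_poisoned = RandomForestClassifier(random_state=0).fit(X_train, y_poisoned)
print("accuracy, poisoned labels:", clf_poisoned.score(X_test, y_test))
```

The poisoned model still reports a respectable-looking score, which is exactly the problem: “so far we are hitting 99%” tells you nothing about what the training labels taught it to ignore.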
(I think I could count the women there on one hand.)
The apparent danger with most of these products is an over-reliance on a single master algorithm driving the system; if that algorithm is compromised, any and all other signals become useless. As hard as it is for me to admit this, the one exception seems to be [gasp] Microsoft Windows Defender, whose threat detection service uses a diverse set of algorithms with different training data sets and features. If one of those algorithms is hacked, the results from the others will highlight the anomaly in the first model.
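Here is a minimal sketch of that diverse-ensemble idea. The three algorithm families, the bootstrap sampling, and the disagreement check are my own illustrative assumptions, not Microsoft's actual design:

```python
# Diverse ensemble: no single "master" model; disagreement is itself a signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (500, 8)), rng.normal(1.5, 1.0, (500, 8))])
y = np.array([0] * 500 + [1] * 500)  # 0 = clean, 1 = malware

# Three different algorithm families, each trained on its own bootstrap
# sample, so no two models share the exact same training data.
models = []
for algo in (RandomForestClassifier(random_state=1),
             LogisticRegression(max_iter=1000),
             GaussianNB()):
    idx = rng.choice(len(X), size=len(X), replace=True)
    models.append(algo.fit(X[idx], y[idx]))

def score_sample(x):
    """Majority vote; a lone dissenter flags a possibly compromised model."""
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    verdict = int(sum(votes) >= 2)
    if votes.count(verdict) < len(votes):
        print(f"disagreement {votes}: one model may be off, or compromised")
    return verdict

print("verdict:", score_sample(rng.normal(1.5, 1.0, 8)))  # likely flagged
```

If an attacker subverts one model, its vote diverges from the other two, and the divergence itself becomes the alert, which is the resilience the single-master-algorithm designs give up.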
The other problem in the space is a lack of what is called explainability. With the most complex algorithms, it can be very difficult to understand why one piece of code was detected and identified as malicious and another wasn't. We are used to rules-based software that can be tuned to filter certain events and conditions. We are not used to autonomous computing. This lack of explainability can make it very hard to assess what's driving certain anomalies over others.
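A minimal sketch of that gap, assuming some made-up file features: the rules-based check can state exactly why it fired, while the model hands you a verdict first and makes you dig, via global feature importances here, for a “why” that still isn't tied to the individual detection:

```python
# Rules-based vs. model-based detection: the explainability gap.
# Feature names and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["entropy", "import_count", "packed", "net_calls",
            "registry_writes", "file_drops", "strings_len", "signed"]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (500, 8)), rng.normal(1.5, 1.0, (500, 8))])
y = np.array([0] * 500 + [1] * 500)
clf = RandomForestClassifier(random_state=2).fit(X, y)

sample = rng.normal(1.5, 1.0, 8)

# Rules-based: the "why" is the rule itself.
if sample[0] > 1.0 and sample[2] > 1.0:
    print("rule fired: high entropy AND packed")

# Model-based: the verdict comes first; the "why" takes extra work, and
# feature importances are global to the model, not per-detection.
print("model verdict:", clf.predict(sample.reshape(1, -1))[0])
for name, w in sorted(zip(FEATURES, clf.feature_importances_),
                      key=lambda t: -t[1])[:3]:
    print(f"  {name}: importance {w:.2f}")
```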
Perhaps when the underlying engines adopt a core AI technology platform like Kyndi, we will be able to better understand the reasons behind the actions and regain control over the AI systems on which we are so hungry to depend. Kyndi is an explainable artificial intelligence platform designed to help enterprises transform regulated business processes (soon to include cybersecurity) by offering auditable AI systems. Platforms like that will help cybersecurity analysts understand not only what is happening in their AI-enhanced computing environments but why it is happening.
In the meantime, I am sure that AI and machine learning will evolve and earn an important role in everyone's cyber defense strategy. The need has never been more urgent: we have proven quite remarkably that the $80 billion we have spent on over 650 cybersecurity products to date has not been effective in detecting cyber-attacks or preventing breaches.
We will leave the next problem, how to prevent our adversaries' AI systems from outsmarting our own, to the chapter that follows; for now, at least, there may be a glimmer of hope on the horizon.
(Maybe if there were more women in cybersecurity, we might be more successful in doing things like actually preventing breaches. You know, the whole female intuition thing? Just saying.)