Observations from Black Hat

The thing that jumped out immediately as I walked around the exhibition floor at this past week’s now-huge Black Hat cybersecurity conference in Las Vegas was the clear messaging that machine learning and artificial intelligence are the new saviors of cybersecurity.

Oh, and the other thing that can't be ignored is the amazing absence of women in the cybersecurity community.

I certainly understand the info-security industry’s hunger for algorithms, or any panacea, in this nightmare storm of incessant cyberattacks. It is even more understandable when you consider that we haven’t even begun to attach to the Internet all of the stupid devices we plan to connect over the next five years. Combine that with the near-total absence of newly trained cybersecurity analysts and the acute shortage of available talent in the existing marketplace, and we should be terrified.

The notion of being able to leverage machine learning and AI to help automate threat detection and response is like a narcotic. The promise of easing the load on existing analysts and the potential for real-time detection of badness that far outperforms anything we have seen before are understandably breathtaking.

But, and the caveat here is that this may simply be the cynical response of someone who has spent far too much time in the trenches of both cyber-warfare and business development, it feels a lot like many of these products are being rolled out to satisfy a market demand manufactured by the very companies doing the rolling out. If I were a customer starved for help and thus fully committed to the AI/ML siren song, I might also fall for the magic of one dazzling, dexterous hand performing feats of amazement while the other remains hidden behind the curtain.

(Did I mention the glaring absence of women?)

Many of the products being rolled out rely on “supervised learning,” which requires the vendors to select and label the data sets on which the algorithms are trained, teaching them to differentiate between code that contains or represents malware and code that doesn’t. The question the vendors universally stumbled over was, “Does your training data contain realistic anomalies, or is it so clean that if a cyber-attacker switched the labels, the algorithms might miss a real attack because they had learned to assume that anything labeled clean is clean?”

Another tricky question: “If the bad guys could replicate your data models and then strip out the labels, would you be able to detect the switch?”

“Uhm,” was the typical response, followed by “we are still in extensive beta testing mode, but so far we are hitting 99%.” Whatever that means.
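
To make that label-flipping concern concrete, here is a minimal sketch, using entirely synthetic feature vectors and scikit-learn, and standing in for no vendor’s actual pipeline, of how a supervised malware classifier quietly degrades when an attacker inverts a fraction of the training labels:

```python
# Minimal, hypothetical sketch of the label-flipping concern above; this is
# not any vendor's actual pipeline. A supervised classifier is trained on
# labeled "benign"/"malware" feature vectors, then retrained on copies of the
# data in which an attacker has flipped a fraction of the labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for static features extracted from binaries
# (byte histograms, imported-API counts, etc.); purely illustrative.
X = rng.normal(size=(4000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # 1 = "malware", 0 = "benign"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def accuracy_with_flipped_labels(flip_fraction):
    """Train on labels where `flip_fraction` of them have been inverted."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_poisoned)
    return clf.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    acc = accuracy_with_flipped_labels(frac)
    print(f"{int(frac * 100):>2}% of training labels flipped -> test accuracy {acc:.3f}")
```

The specific numbers don’t matter; the point is that a model trained on poisoned labels can look perfectly healthy in the lab and still miss the real thing in the field.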

(I think I could count the women there on one hand.)

The apparent danger with most of these products is an over-reliance on a single master algorithm driving the system; if that algorithm were compromised, any and all other signals would be rendered useless. As hard as it is for me to admit this, the one exception seems to be found in [gasp] Microsoft Windows Defender, whose threat-detection service uses a diverse set of algorithms with different training data sets and features. If one of those algorithms is hacked, the results from the others will highlight the anomaly in the first model.
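
Here is a minimal sketch of that diversity idea, assuming nothing about Defender’s actual architecture: several models trained with different algorithms on different slices of the feature space, where a verdict is trusted only when the models agree and any disagreement is flagged for an analyst.

```python
# Minimal sketch of the "diverse ensemble" idea described above; this is not
# Microsoft's actual Defender architecture. Each model uses a different
# learning algorithm and a different slice of the feature space, so a single
# compromised model surfaces as disagreement instead of deciding silently.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 30))                       # synthetic feature vectors
y = (X[:, 0] + X[:, 10] + X[:, 20] > 0).astype(int)   # synthetic ground truth

views = [slice(0, 10), slice(10, 20), slice(20, 30)]  # disjoint feature subsets
models = [RandomForestClassifier(n_estimators=50, random_state=1),
          LogisticRegression(max_iter=1000),
          GaussianNB()]
for model, view in zip(models, views):
    model.fit(X[:, view], y)

def ensemble_verdict(sample):
    """Return a verdict only when all models agree; otherwise flag for review."""
    votes = [int(m.predict(sample[v].reshape(1, -1))[0]) for m, v in zip(models, views)]
    if len(set(votes)) > 1:
        return "disagreement: flag for analyst review", votes
    return ("malware" if votes[0] == 1 else "benign"), votes

print(ensemble_verdict(X[0]))
```

The design choice is the interesting part: because no single model is allowed to be the oracle, an attacker has to fool several differently trained models at once rather than just one master algorithm.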

The other problem in the space is a lack of what is called explainability. With the most complex algorithms, it can be very difficult to understand why they detected and identified one piece of code as malicious and not another. We are used to rules-based software that can be tuned to filter certain events and conditions; we are not used to autonomous computing. This lack of explainability can make it very hard to assess why certain anomalies are flagged over others.
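
For contrast, here is a sketch of the kind of per-detection explanation that paragraph is asking for. The feature names are hypothetical and the model is a deliberately simple linear one; the point is only that each feature’s contribution to a verdict can be read off directly, which is exactly what the more complex black-box scorers do not offer.

```python
# Illustrative sketch of an "explainable" verdict (hypothetical feature names,
# synthetic data): a linear model whose per-feature contributions to a single
# detection can be listed, unlike an opaque anomaly score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
feature_names = ["entropy", "imports_count", "packed_sections",
                 "suspicious_api_calls", "signed_binary"]
X = rng.normal(size=(2000, len(feature_names)))
y = (X[:, 0] + 2 * X[:, 3] - X[:, 4] > 0).astype(int)  # synthetic ground truth

clf = LogisticRegression(max_iter=1000).fit(X, y)

sample = X[0]
contributions = clf.coef_[0] * sample  # each feature's pull on the log-odds
verdict = "malware" if clf.predict(sample.reshape(1, -1))[0] == 1 else "benign"

print(f"verdict: {verdict}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>20}: {c:+.2f}")
```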

Perhaps when the underlying engines adopt a core AI technology platform like Kyndi, we will be better able to understand the reasons behind their actions and regain control over the AI systems on which we are so hungry to depend. Kyndi is an explainable artificial intelligence platform designed to help enterprises transform regulated business processes (soon to include cybersecurity) by offering auditable AI systems. Platforms like that will help cybersecurity analysts understand not only what is happening in their AI-enhanced computing environments but why it is happening.

In the meantime, I am sure that AI and machine learning will evolve and earn an important role in everyone’s cyber-defense strategy. The need has never been more urgent: we have proven quite remarkably that the $80 billion we have spent to date on more than 650 cybersecurity products has not been effective at detecting cyber-attacks or preventing breaches.

We will leave the next problem, how we prevent our adversaries’ AI systems from outsmarting ours, to the chapter that follows, but for now, at least, there may be a glimmer of hope on the horizon.

(Maybe if there were more women in cybersecurity, we might be more successful in doing things like actually preventing breaches. You know, the whole female intuition thing? Just saying.)



