The Cybersecurity Opportunity in AI

As we have seen repeatedly throughout 2019 and over the past decade or so, a single data breach can result in the loss of billions of dollars in assets, revenue and shareholder value, along with severe reputational damage.

It can also shut down critical infrastructure such as electric grids and nuclear power plants, leak vast stores of classified government data and publicly expose enormous amounts of personally identifiable information (PII).

Taken to an extreme that is no longer hypothetical, these breaches could someday collapse entire economies, plunge what we now think of as political civility and order into chaos and anarchy, irrecoverably compromise national security, and enable the theft and manipulation of PII on a scale that destroys all trust in the underlying security of personal identity.

In almost all of these instances, the cause can be traced back to human error around cybersecurity.

Most CISOs, understandably, do not believe their fellow employees are capable of safeguarding the data they handle on a day-to-day basis. One of the main reasons for this apparent ineptitude is that most of the cybersecurity solutions our enterprise workers must use are difficult to manage. In order to stay productive, most employees develop well-intentioned workarounds that create brand-new vulnerabilities against which no defense has been identified or even imagined.

All of us work in highly pressurized, stressful environments, and most of us use multiple computing devices throughout the day, many of them mobile and small-screened. Our best intentions, and sometimes our most contrived workarounds, yield to the intersection of speed and malicious invention. More simply, malicious actors fully understand our vulnerabilities and exploit them to their advantage through increasingly sophisticated and artful social engineering.

Our inability to match our adversary’s speed and cunning creates an opportunity for artificial intelligence (AI) to assist in rescuing human frailty from its own DNA.

But this isn’t the AI that we see in the movies or that we read about in science fiction. Instead, this is the class of AI that IBM’s CEO Ginni Rometty speaks about when she says, “Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence."

This is the AI that provides a bridge across the chasm between productivity and security and can enable a kind of "invisible security" that grows and evolves against threats as they occur, potentially leveling a playing field that is today grossly tilted in favor of threat actors.

Modern threat vectors morph at machine speed, well beyond the reach of any human response. The speed of AI, however, can match that morphing pace, and with the proper training, machine learning algorithms can detect new threat embodiments before they gain a foothold.
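
How that might look in practice is sketched below, with a hedge: this is a minimal, illustrative example assuming scikit-learn and entirely invented connection features, not a production detection pipeline. The point is simply that an unsupervised model trained on what "normal" looks like can flag a never-before-seen pattern by its statistical strangeness rather than by a known signature.

```python
# Minimal sketch: unsupervised anomaly detection over per-connection features.
# All feature names, numbers and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features:
# [bytes sent, bytes received, session duration (s), distinct ports touched]
normal_traffic = rng.normal(loc=[500, 1500, 30, 2],
                            scale=[100, 300, 10, 1],
                            size=(5000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A new, never-labeled event resembling data exfiltration: huge outbound
# volume, a long session and many ports touched.
suspect_event = np.array([[250000, 400, 600, 45]])
print(detector.predict(suspect_event))        # -1 means "anomaly"
print(detector.score_samples(suspect_event))  # lower score means "stranger"
```

The value here is speed and generality: the same model scores every event in near real time and does not need to have seen a particular piece of malware before it can raise a flag.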

The challenge with current AI solutions is that they cannot function without human assistance. Whatever programming or tuning is needed for detection and blocking will therefore be based on our own experience and will, by definition, embed our cognitive biases and reasoning errors in the result.

Take “availability” bias, in which the ready availability of information about cybersecurity attack “trends” shapes which threat vectors we train our AI systems to watch for.

“Confirmation” bias may lead experienced security analysts to assume that whatever preceded past data breaches holds the keys to detecting future attempts. This bias becomes a weakness, of course, when analysts routinely investigate incidents in ways that only support their existing beliefs.

Fundamental “attribution” bias leads security analysts to conclude, most frequently, that PEBKAC (Problem Exists Between Keyboard and Chair) is in play in almost every security breach, which may result in overweighting AI training in that direction.
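
To make the point about bias concrete, here is a deliberately toy sketch (all data invented, scikit-learn assumed) of how “availability” bias in what analysts choose to label flows straight into the model: a classifier trained only on the attack family currently in the headlines learns to see only that family.

```python
# Toy illustration: availability bias in the training labels produces a
# detector that is blind to an unlabeled attack family. All data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

benign   = rng.normal([0, 0], 1.0, size=(500, 2))
phishing = rng.normal([5, 0], 1.0, size=(500, 2))  # the "trending" threat we bothered to label
insider  = rng.normal([0, 5], 1.0, size=(200, 2))  # a family that never made the training set

X_train = np.vstack([benign, phishing])
y_train = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = attack

clf = LogisticRegression().fit(X_train, y_train)

# The biased model confidently waves the unlabeled attack family through.
print("insider events flagged:", int(clf.predict(insider).sum()), "of", len(insider))
```

The remedy is less about better code than about more self-awareness regarding what goes into the training set in the first place.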

The objective in creating AI solutions to help fight polymorphic attack vectors should be to build harmony between the best characteristics of human behavior and the most effective characteristics of AI technology as it exists today, not as it might exist in the future.

In fact, human behavior and AI technology can compensate for one another's weaknesses. AI is obviously faster than we are and is incapable of error beyond what we impose through faulty rules and reasoning, while humans can manage the technology, limiting or expanding its capabilities as appropriate to the targeted tasks.
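
As a hedged sketch of what that division of labor could look like (the names and thresholds here are hypothetical, not a prescribed design), the machine scores every event at machine speed while a human retains ownership of the policy that decides what is blocked automatically and what is merely queued for review:

```python
# Hypothetical human-in-the-loop triage: the AI supplies the score, a human
# sets and adjusts the policy that governs the response.
from dataclasses import dataclass

@dataclass
class ResponsePolicy:
    auto_block_score: float = 0.95  # analyst-tunable: only very confident detections auto-block
    review_score: float = 0.60      # above this, route to the analyst queue

def triage(event_id: str, anomaly_score: float, policy: ResponsePolicy) -> str:
    """Route an AI-scored event according to the human-set policy."""
    if anomaly_score >= policy.auto_block_score:
        return f"{event_id}: blocked automatically (score {anomaly_score:.2f})"
    if anomaly_score >= policy.review_score:
        return f"{event_id}: queued for analyst review (score {anomaly_score:.2f})"
    return f"{event_id}: logged only (score {anomaly_score:.2f})"

policy = ResponsePolicy()
for eid, score in [("evt-001", 0.98), ("evt-002", 0.72), ("evt-003", 0.10)]:
    print(triage(eid, score, policy))
```

The machine never waits on us for the routine cases, and we never cede the judgment calls to the machine.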

The real and present opportunity for AI to help create improved cybersecurity profiles and a more defensible threat landscape depends on our ability to approach the application of “augmented intelligence” with a clear and optimized sense of purpose.

Though this quality is too rarely noted in human endeavor, higher intentions pursued objectively often produce the best outcomes.
