AI Based Defensive Systems Impact on Cybercriminal Strategy
Attendees of Cyber Salon, October 2019 - By Inteligenca Inc.

The good guys are working at a fever pitch to create pre-emptive adversarial attack models that uncover AI vulnerabilities. But threat actors are developing new attacks just as fast, and they have the resources (aka money) to build powerful cyber weapons. Who will win this race against time?

SACRAMENTO, CA - Some of California’s top security minds came together during National Cybersecurity Awareness Month to discuss the role of Artificial Intelligence (AI) in cybersecurity. Leading experts from both the private and public sectors joined our Inteligenca Cyber Salon to weigh the promise of AI against the concerns surrounding it. One thing the evening made clear: AI is controversial.

The group spent much of the evening discussing adversarial machine learning, including how AI neural networks can be tricked by intentionally modified external data. For example, an adversarial image of a cat still looks like a cat to human eyes, yet carefully crafted perturbations can trick a model into seeing it as something completely different. An attacker can distort inputs of all kinds, causing AI to misclassify them.
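To make the idea concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the best-known research techniques for crafting adversarial examples. The pretrained `model`, `image`, and `label` inputs are assumed placeholders; this is an illustration of the published technique, not any particular attacker's tool.

```python
# Minimal FGSM sketch. Assumes a pretrained PyTorch classifier, a
# batched image tensor of shape (1, C, H, W) with pixels in [0, 1],
# and the image's true label as a tensor of shape (1,).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge each pixel by at most `epsilon` in the direction that
    increases the model's loss: nearly invisible to a human, yet
    often enough to flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixels in valid range
```

The striking part is the budget: with epsilon around 0.03 on a 0-1 pixel scale, the perturbed cat is indistinguishable from the original to a human observer, which is exactly the failure mode the panel worried about.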

The benefits of AI can be huge. Right now, everything that civilization has to offer is a product of human intelligence, but it is hard to predict what we might achieve when human intelligence is magnified by the tools that AI may provide.

“Artificial Intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car),” said Malcolm Harkins, the Chief Security and Trust Officer at Cymatic. “However, the long-term goal of many researchers is to create general AI (or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, general AI would outperform humans at nearly every cognitive task.”

Perhaps the biggest takeaway from this discussion: we need to get ahead of this amazing technology before bad actors do. If they get there first, it is only a matter of time before they turn it against humanity in the most abusive and unethical ways possible.

“Given the popularity of AI in security solutions being offered and the broadness of the term, my core questions with any AI solution are: what exactly does it do, and how much should I ‘trust’ it?” said Kim Owen, the CISO at California Earthquake Authority. “No single security solution is a silver bullet, despite what salespeople may say, so like any other security tool, I need to know its capabilities as well as its limitations. AI-enabled anti-malware? There are well-established solutions with limited scope, so absolutely! Anything beyond that, and I need to understand the algorithms and how ‘it’ makes decisions. Another aspect that’s challenging to gauge is what biases are inherent. All AI solutions are created and trained (whether through supervised or unsupervised learning) by people, and everyone has biases. Automation plus the AI component only exacerbates these types of biases.”

We are facing possible futures of incalculable benefits and risks, but are the experts really doing everything possible to ensure the best outcome for us all? Absolutely not. We need government and corporations to invest significant money and resources in R&D to build more robust AI, in order to protect humanity against the adversarial use of artificial intelligence.

“I agree with the consensus in the technology community that AI will certainly have a huge impact on how the world works; however, I do not necessarily agree with where a lot of the attention and effort is going, which is, in essence, trying to ‘win the race,’” said Peter Liebert, former CISO of California and currently the CEO of Liebert Security. “From my perspective, who invents the technology first is not as important as who implements it first. To that end, the US government has a massive disadvantage, as our institutions are notoriously slow to adopt new technologies. We need to take concrete policy steps now to address this issue before the situation occurs, particularly if AI comes onto the technology scene as a ‘big splash’ rather than a ‘slow drip.’”

So, what are the most worrisome AI-based threats on the horizon? Matthew Rosenquist, the former Intel Cybersecurity Strategist and one of LinkedIn’s Top 10 Tech Voices, shared his three most concerning attack vectors:

1.   Automated ‘smart’ ingress – This is where AI-enhanced systems conduct the necessary research to orchestrate highly successful attacks. By analyzing vast amounts of target data, such a system determines the most efficient method, learns from failures, and continues in a relentless loop, pursuing the victim until successful (the abstract sketch following this list illustrates the core learn-and-retry loop). With the strength of AI, such attacks can be completely automated, prioritize exploitations with maximum efficiency, and scale across millions of targets simultaneously.

2.   Dynamic ability to remain stealthy, persistent, and effective – Once within the security boundary of a victim, attackers do not want to be detected or evicted. Many current compromises are noisy or find a static place to hide and are relatively easy to discover. AI allows for a dynamic response that adapts to undermine active detection techniques with various tactics such as morphing, camouflage, distractions, self-deletion, impersonation, and system interference.  

To remain unnoticed, AI systems will greatly reduce revealing actions and instead make small but highly effective changes to data, transactions, and other valuable bits, undermining the confidentiality, integrity, and availability of the system. Once burrowed in, collections of compromised systems can work together as part of a hive structure that actively regenerates and repels removal attempts. Future generations of AI-enabled cyber-attacks will have stronger persistence, longevity, and recoverability while doing more damage in covert ways.

3.   Automated customized Social Engineering attacks at scale – Through data aggregation, believable impersonation, and attack adaptation, AI-enabled methods could convincingly engage with victims and persuade them to trust and follow malicious instructions. AI allows for specific, personalized attacks against individuals at an unprecedented scale by leveraging the vast amount of unstructured data available from social media and other Internet sources. Fraudulent engagements could take the form of email, text messaging, voice, video, and even forged biometrics (e.g., deepfakes) to create a customized campaign that appears authentic and supremely trustworthy. These techniques will greatly elevate phishing attacks, malware infections, online harassment, manipulative misinformation campaigns, and online fraud. The most concerning aspect is the combination of scale and customization, which will allow specialized attacks targeting every person based upon data unique to them.
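The engine behind the first vector, stripped of any offensive payload, is just a loop that tries options, observes outcomes, and re-allocates effort toward whatever works. The sketch below shows that loop as a textbook epsilon-greedy multi-armed bandit; the strategy names and success rates are invented placeholders, and nothing here touches a real system.

```python
# Abstract epsilon-greedy loop: the generic "try, learn from failure,
# re-prioritize" pattern described above. All names and numbers are
# invented placeholders for illustration.
import random

strategies = {"A": 0.05, "B": 0.20, "C": 0.60}  # hidden success rates
counts = {s: 0 for s in strategies}
wins = {s: 0 for s in strategies}

def pick(epsilon=0.1):
    """Mostly exploit the best observed option, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(strategies))
    # Untried options default to 1.0 so each gets sampled early on.
    return max(strategies, key=lambda s: wins[s] / counts[s] if counts[s] else 1.0)

for _ in range(1000):
    s = pick()
    counts[s] += 1
    wins[s] += random.random() < strategies[s]  # simulated outcome

print({s: counts[s] for s in strategies})  # attempts pile up on "C"
```

After a thousand iterations the attempts concentrate on the highest-yield option. The worry Rosenquist raises is this same relentless prioritization running automatically across millions of targets at once.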

We know that cybercriminals are not yet advanced enough to use adversarial machine learning to the extent we fear. However, with more decisions based on prediction algorithms and fewer human interactions, we are increasing the risk of adversarial use of AI.

“Some of the most interesting material dealing with A.I. in cyber defense systems involves solutions focused on the automated discovery and countermeasures associated with zero-day exploits and vulnerabilities,” said Jason Elrod, the Chief Information Security Architect at Sutter Health, who shared his concerns about AI technology falling into the wrong hands. “Traditionally, ‘zero-days’ have been hard to come by, as they take advanced talent profiles, sophisticated tools, and large amounts of time to discover. One of the primary methods used to discover zero-days is called ‘fuzzing’ and involves the creation of unique tools designed for particular targets, then monitoring the effects of those tools and techniques in order to discover vulnerabilities. Fuzzing is not easy, but advancements in A.I. seem ready to change that. By combining A.I. and Machine Learning with ‘automated fuzzing,’ cybercriminals can turn what is traditionally a difficult and time-consuming effort into a common occurrence. This will eventually lead to a state where the discovery and exploitation of zero-day vulnerabilities happen in real time, with no way to effectively defend against it.”

For most organizations today, A.I. is primarily thought of as an assistive technology in the cybersecurity space: a force multiplier for other solutions. But if you are really paying attention and doing the research, you quickly conclude that it is just a matter of time before A.I. cyber defense solutions become an essential requirement for managing threats and vulnerabilities. Strategically, it is an arms race between threat actors and defenders. Organizations will need to keep parity with A.I. technology, or they will absolutely be at a disadvantage.
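Fuzzing itself is easy to illustrate in miniature. In the sketch below, the deliberately buggy target `parse_record` is an invented stand-in for real parsing code under test; the fuzzer blindly mutates a seed input and records any crash. The advancement Elrod describes amounts to replacing the blind `mutate` step with a model that learns which mutations reach new code paths.

```python
# Toy mutation-based fuzzer: flip random bytes in a seed input and
# record inputs that crash the target. `parse_record` is a made-up
# stand-in for real parsing code being tested.
import random

def parse_record(data: bytes) -> None:
    # Deliberately buggy example target: chokes on one specific byte.
    if len(data) > 3 and data[3] == 0xFF:
        raise ValueError("malformed length field")

def mutate(seed: bytes) -> bytes:
    out = bytearray(seed)
    for _ in range(random.randint(1, 4)):  # flip a few random bytes
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

seed = b"RECORD\x00\x01payload"
crashes = []
for _ in range(10_000):
    candidate = mutate(seed)
    try:
        parse_record(candidate)
    except Exception as exc:  # a crash flags a potential vulnerability
        crashes.append((candidate, exc))

print(f"found {len(crashes)} crashing inputs")
```

Real coverage-guided fuzzers such as AFL already refine this blind loop with instrumentation feedback; the leap the panel feared is when the mutation strategy itself learns.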

Ideally, the future of AI will bring a new era of innovation and productivity, where human ingenuity is enhanced by speed and precision. However, we need to ensure that this amazing technology continues to be developed by ethical people committed to keeping bias out of their programming.

In fact, we need to be sure that people aren’t phased out of the process altogether. There are clearly going to be changes to existing talent requirements, and job placement experts are already preparing for massive shifts: “As AI and Machine Learning tools improve and evolve, Tier 1 duties will be less in demand,” said Chad Daugherty, the Managing Partner of ZEEKTEK, a Northern California-based cybersecurity recruiting firm. “Performing higher-level analysis, development/scripting for tools integration, and doing heavy analysis and interpretation of the specific AI & ML tools related to Identification, Detection, Response, and Recovery will be in higher demand. I believe we will also see an increasing trend for more organizations to use MSPs and third-party vendors to manage their cybersecurity.”

From event sponsors like Zeektek, which has been weighing the impact of AI for years, to firms like the Chicago-based consultancy KFA, which recently expanded to Northern California and is hiring new cybersecurity employees and implementing security training programs, the Cyber Salon gave attendees a space to exchange ideas outside their regular circles of discussion.

KFA’s managing director Leslie Gazeley said about the evening: “The Cyber Salon gave me a rare opportunity to engage in friendly dialogue with IT experts who clearly know their trade. I found these folks to be knowledgeable and authentic (not just throwing around the popular buzzwords). Their passion for information & data security, and the safety of those whose information/data they protect, is truly inspiring.”

Without a doubt, Artificial Intelligence will transform the relationship between people and technology. It's important for us to share our knowledge, fears, and hopes for AI in order to keep this world safe for all upcoming generations, so they can thrive.

This article was first published by PenTest Magazine on November 9, 2019.

#cybersecurity #AI #nationalcybersecurityawarenessmonth #cybersalon #informationsecurity #cyberwarfare #cyberdefense #artificialintelligence

An endless race - this will be a burden for smaller companies unable to invest in what will likely be costly AI-based defensive solutions, which may already be obsolete by the time they are implemented.

Charles Wadsworth

Chief Restoration Officer

4 years ago

A battle between good and bad AI will not end well. The solution lies in flipping the asymmetry, to give the defender enduring superiority. See illusivenetworks.com for new ideas.
