AI & Cybersecurity: Closer Allies Than You Might Think
One of the most compelling topics in today’s IT discourse is the convergence of AI and cybersecurity. For many years, cybersecurity experts have viewed AI as a “frenemy.” AI (previously referred to as “machine learning”) has led to massive improvements in organizations’ ability to review and act on their most pressing security vulnerabilities. But AI has also been leveraged by cyberattackers to develop more sophisticated threats, and I expect that trend to accelerate over the next several years.
In observation of Cybersecurity Awareness Month, Rick Mounfield, Chartered Security Professional Director at Optimal Risk Group, and I recently discussed the impact of AI on cybersecurity. You’ll find a summary of our discussion below, and you can watch the full session here.
Key AI & Cybersecurity Statistics
Our discussion was framed around the following statistics:
- According to a HiddenLayer report, 98% of IT leaders consider AI models crucial to business success.
- A separate Cloud Security Alliance report (sponsored by Google Cloud) found that 55% of organizations plan to adopt GenAI solutions this year.
- According to the same Cloud Security Alliance report, 48% of professionals are confident about their organization’s ability to execute a strategy for leveraging AI in security. Flipping that statistic around, the remaining 52% of professionals aren’t confident about their ability to execute an “AI in Security” strategy.
- Turning our focus to cyber-threats, a recent ISC2 study revealed that 75% of respondents were moderately to extremely concerned that AI would be used for cyberattacks or other malicious activities.
- Finally, a recent Cisco study found that 27% of respondents said their organizations have banned the usage of GenAI applications.
Rick and I provided our perspectives on those statistics, then answered the questions that are outlined below. We strongly encourage you to watch the entire presentation, so you can see our interaction.
How Can AI Improve Business Productivity?
Our responses to this question included the following:
- Improves organizational efficiency.
- Enables companies to address repetitive tasks and automate them.
- Empowers organizations to focus on strategic work and improves decision-making. Rick gave an example of the UK police utilizing AI to re-open cold crime cases and process the massive volume of evidence associated with those cases.
- Prompts technological innovation: Rick has collaborated with start-ups in the UK that have utilized AI to develop new SaaS solutions.
- Can be a significant ally in keeping up with rising cyber-threat volume.
- Allows you to focus on the cyber-threats that are most likely to impact your company’s infrastructure.
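The last point above, focusing on the threats most likely to affect your infrastructure, is essentially risk-based prioritization. As a rough illustration (not the method of any specific product, and with made-up field names and weights), it can be sketched as a simple scoring pass over a vulnerability list:

```python
def prioritize(vulns):
    """Sort vulnerabilities by a naive risk score (likelihood x impact).

    Each vuln is a dict with illustrative fields: 'id', 'likelihood'
    (0-1, e.g. an exploit-prediction score) and 'impact' (1-10).
    """
    return sorted(vulns, key=lambda v: v["likelihood"] * v["impact"], reverse=True)

vulns = [
    {"id": "CVE-A", "likelihood": 0.9, "impact": 4},
    {"id": "CVE-B", "likelihood": 0.2, "impact": 10},
    {"id": "CVE-C", "likelihood": 0.7, "impact": 8},
]
# Highest-risk item first
print([v["id"] for v in prioritize(vulns)])
```

Real AI-assisted tooling would of course learn these scores from threat intelligence rather than hard-coding them, but the goal is the same: spend remediation effort where a breach is most likely.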
Are Cyberattackers Leveraging AI Technology?
Both of us agreed that the answer to this question is “100% Yes!”
Examples of cyberattackers leveraging AI technology include the following:
- Phishing emails that mimic the impersonated sender’s voice and tone, while being more targeted than ever before.
- The US Federal Bureau of Investigation (FBI) has warned about the impact of AI-based video and voice cloning on decision-making by coworkers and business partners.
- Proliferation of deep-fakes: Rick provided the example of a deep-fake audio recording in which Sir Keir Starmer, Prime Minister of the United Kingdom, allegedly berated a staffer. You can learn more about the deep-fake here.
Which Best Practices Encourage Responsible AI Usage by Your Users?
Rick framed this section of the discussion with a wise introductory perspective: “The threat is people, and the answer is people.”
Best practices to encourage responsible AI usage include the following:
- Users need to be trained to utilize AI ethically.
- Cybersecurity protection needs to be emphasized throughout the training process.
- Ethical AI guidelines need to be developed by the company.
- Companies need to have effective AI plans. In other words, organizations need to manage AI, rather than having “AI manage them.”
- Companies need to determine which AI solutions are sanctioned by the organization, in order to protect their most sensitive data.
- Users need to be regularly reminded that sensitive organizational data should never be included in public AI models.
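Reminders alone only go so far, so many organizations also add a technical guardrail in front of public AI models. Here is a minimal sketch of pre-submission prompt screening; the patterns are illustrative assumptions, and a real deployment would rely on a proper data-loss-prevention (DLP) service with organization-specific rules:

```python
import re

# Illustrative patterns only -- real rules would come from a DLP service.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api-key-like token": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
    "ssn-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

findings = screen_prompt("Summarize: contact jane.doe@example.com, SSN 123-45-6789")
if findings:
    print("Blocked; prompt appears to contain:", ", ".join(findings))
```

Even a coarse screen like this reinforces the training message at the moment of use, which is when users are most likely to slip.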
What are the Risks of a “Lock It Down” Approach to AI Usage?
Both of us agreed that locking down AI usage is not in the best interest of most organizations.
Risks include the following:
- Without direction, users will leverage whatever AI tools they want, creating “shadow IT” outside of the company’s visibility.
- Inclusion of sensitive data in a Knowledge Base could place a company’s Intellectual Property (IP) and cybersecurity preparedness at risk.
- User productivity is likely to decrease.
- The ability to personalize customer interactions, which is mission-critical in today’s business world, will decrease.
- IT teams won’t be able to keep up with the rising volume of rapidly-evolving cyber-threats.
- A “lock it down” approach discourages a culture of honesty, in which users are encouraged to report their mistakes.
- Human nature tells us that people will do what they want to do, so it’s best to have organizational visibility into their AI activities.
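That visibility into users’ AI activities often starts with something unglamorous: reviewing existing proxy or DNS logs for traffic to public GenAI services. A minimal sketch follows; the domain list and the space-delimited log format are assumptions for illustration (a real deployment would use a maintained CASB or threat-intel feed):

```python
from collections import Counter

# Hypothetical watchlist of public GenAI endpoints.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_genai_usage(proxy_log_lines):
    """Count requests per (user, domain) pair to known GenAI services.

    Assumes a simple space-delimited proxy log: '<user> <domain> <path>'.
    """
    usage = Counter()
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in GENAI_DOMAINS:
            usage[(parts[0], parts[1])] += 1
    return usage

log = [
    "alice chat.openai.com /v1/chat",
    "bob internal.example.com /wiki",
    "alice chat.openai.com /v1/chat",
]
print(summarize_genai_usage(log))
```

A summary like this tells you which tools people actually use, which is the starting point for deciding what to sanction rather than what to ban.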
How Do You Cultivate a Symbiotic Relationship Between AI Adoption & Business Risk?
This aspect is one of the toughest for organizations to manage.
Recommendations include the following:
- There needs to be human stewardship over AI to effectively manage business risk.
- IT teams need to involve their end-users and executive teams in planning for AI initiatives, so that everyone’s aligned on the potential risks.
- Companies should analyze which AI solutions are currently being used, and determine which ones have the most effective security protection for their users.
- You need to anticipate “gray rhino” risks: risks that are large, obvious, and headed in your direction. That’s because AI moves quickly, and user errors amplified by AI can have a nearly immediate impact on your business.
What’s the Future Impact of AI on Cybersecurity?
Our predictions included the following:
- AI will continue to increase productivity.
- AI will continue to improve cybersecurity issue detection and remediation.
- There will be an ongoing battle between the AI-based advancements of cyberattackers and those of the cybersecurity firms working to combat their threats.
- Additional regulation will be inevitable:
- The U.S. state of Colorado passed the Colorado Artificial Intelligence Act, which is focused on AI technology that’s involved in consequential decisions relating to education, employment, financial services, housing, health care or legal services.
- Similarly, the U.S. state of Utah enacted the Utah Artificial Intelligence Policy Act, which imposes disclosure requirements on entities that utilize generative AI tools with their customers, and limits the entities’ ability to “blame” generative AI for statements or acts that could constitute consumer protection violations.
- Conversely, Gavin Newsom, Governor of the U.S. state of California, vetoed a proposed measure that would have made tech companies legally liable for harm caused by AI models. The bill mandated that tech companies enable a “kill switch” for AI technology, in the event that the systems were misused or went rogue.
- We can anticipate additional AI legislation around the world, in the next several years.
- Rather than the “good guys” being a step behind cyberattackers, AI could help us to better predict potential vulnerabilities and take more proactive action.
Learn More
You can learn more by watching our session replay, and by including your perspectives on this topic in the comments section below.