The Role and Impact of AI in the Future of Cyber Threat Intelligence and 3 Things to Investigate
Geoff Hancock, CISO (CISSP, CISA, CEH, CRISC)
I help business and technology executives enhance their leadership, master cyber operations, and bridge cybersecurity with business strategy.
AI is changing cyber threat intelligence at an unprecedented pace, but it's not a silver bullet.
If we don't use it right, we'll end up with bigger problems than we started with.
In almost 10 years of working in LLM/AI and cybersecurity across industries, I've seen three major AI-related cybersecurity challenges rise to the surface every single time.
Challenge #1: AI is a Double-Edged Sword
A financial services client of mine learned this the hard way. They had AI-driven threat detection in place, but hackers used AI-generated deepfake audio to impersonate an executive, tricking an employee into wiring money. The breach wasn’t due to weak technology but to misplaced trust in AI as a complete solution.
The Fix: Balance AI with Human Expertise
AI can detect patterns and anomalies at scale, but it lacks intuition. Balancing automation with skilled human analysts who can think critically and detect AI-powered deception is key.
Challenge #2: AI Creates Too Much Noise
One of AI’s most significant promises is helping security teams cut through the noise. But too often, AI tools flood teams with false positives, leading to alert fatigue and real threats slipping through the cracks.
I worked with a healthcare company that deployed an AI-driven security operations center (SOC). They thought it would reduce workload—but instead, the AI flagged everything as a threat. Analysts became so overwhelmed that they started ignoring alerts, and an actual attack went undetected.
The Fix: Tune AI Models Properly
AI is only as good as its training. If it’s producing too much noise, it’s not helping. Organizations must refine their AI systems to provide meaningful, actionable intelligence.
Challenge #3: Ethical & Regulatory Uncertainty
AI in cybersecurity isn’t just a technical challenge—it’s a legal and ethical minefield. Companies that fail to address these issues proactively risk lawsuits, regulatory fines, and reputational damage.
A retail company I worked with found this out the hard way. They used AI for fraud detection, but the model disproportionately flagged specific demographics, leading to accusations of bias. The fallout was legal and reputational—something they could have avoided with better oversight.
The Fix: Stay Ahead of Regulations
AI regulations are evolving fast, and organizations that don’t keep up will be in trouble. The best way to avoid ethical and legal pitfalls is to be proactive rather than reactive.
Security isn’t just about stopping hackers—it’s about protecting trust. And if your AI isn’t ethical, it isn’t secure.
AI in Cybersecurity: The Path Forward
Here are three things to consider:
1. How can organizations determine the right balance between AI and human expertise in cybersecurity?
Finding the right balance between AI and human expertise depends on an organization’s size, risk profile, and cybersecurity maturity. However, a good rule of thumb is to let AI handle tasks that require speed and scale, while humans focus on complex decision-making and strategic oversight.
Here’s a practical example: a company using AI-powered threat intelligence should let AI scan network traffic for anomalies, but a human should investigate flagged incidents before action is taken. This prevents over-reliance on AI while maximizing efficiency.
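That division of labor can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the alert fields, score range, and thresholds are assumptions, not any particular vendor's API): the model scores anomalies, but nothing is blocked automatically—every meaningful alert waits for an analyst.

```python
from dataclasses import dataclass

# Hypothetical threshold -- the real value depends on your model and risk appetite.
HUMAN_REVIEW_ABOVE = 0.30   # alerts at or above this score go to an analyst

@dataclass
class Alert:
    source_ip: str
    anomaly_score: float    # 0.0-1.0, produced by the detection model

def triage(alerts):
    """Route AI-flagged alerts: never act automatically, always queue for a human."""
    review_queue, logged = [], []
    for alert in alerts:
        if alert.anomaly_score >= HUMAN_REVIEW_ABOVE:
            review_queue.append(alert)   # analyst investigates before any action
        else:
            logged.append(alert)         # kept for trend analysis, no action taken
    return review_queue, logged

alerts = [Alert("10.0.0.5", 0.92), Alert("10.0.0.9", 0.12)]
queue, logged = triage(alerts)
print(len(queue), len(logged))  # 1 alert for the analyst, 1 merely logged
```

The design choice worth noting: the AI's job ends at prioritization. The decision to act stays with a person, which is exactly the balance described above.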
2. What specific steps should companies take to fine-tune AI models effectively?
Fine-tuning AI models requires continuous monitoring and refinement to ensure they provide useful, actionable intelligence instead of excessive false positives.
The key is to treat tuning as a continuous loop: establish a baseline of normal activity, have analysts label which alerts were real, feed that feedback back into the model, and track alert precision over time.
By treating tuning as an ongoing process, organizations can prevent AI from becoming a liability while ensuring it provides real value.
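The feedback loop above can be made concrete with a simple sketch. This is an illustrative example, not a production tuning algorithm—the function names, the target precision, and the step size are all assumptions: when analysts confirm few of the flagged alerts as real threats, the alert threshold rises so the system fires fewer, higher-confidence alerts.

```python
def precision(confirmed_true, total_flagged):
    """Fraction of flagged alerts that analysts confirmed as real threats."""
    return confirmed_true / total_flagged if total_flagged else 1.0

def adjust_threshold(threshold, confirmed_true, total_flagged,
                     target_precision=0.5, step=0.05):
    """Raise the alert threshold while analysts report mostly false positives."""
    if precision(confirmed_true, total_flagged) < target_precision:
        threshold = min(0.95, threshold + step)   # fewer, higher-confidence alerts
    else:
        threshold = max(0.05, threshold - step)   # cast a slightly wider net
    return round(threshold, 2)

# Week 1: 500 alerts fired, analysts confirmed only 20 -- drowning in noise.
new_threshold = adjust_threshold(0.50, confirmed_true=20, total_flagged=500)
print(new_threshold)  # 0.55 -- threshold raised to cut false positives
```

Run weekly against real analyst feedback, a loop like this keeps the model's output aligned with what the team can actually act on, which is the difference between noise and intelligence.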
3. What upcoming AI regulations should businesses be preparing for, and how can they stay compliant?
AI regulations are evolving rapidly, and businesses need to stay ahead of compliance to avoid legal and reputational risks. Global laws vary, but frameworks such as the EU AI Act and the NIST AI Risk Management Framework signal where enforcement expectations are heading. Staying compliant means building oversight in now: document how your AI models make decisions, audit them regularly for bias, and assign clear accountability for AI-driven actions.
Regulations will continue evolving, but organizations that take a proactive approach now will be in a much stronger position when stricter laws take effect.