The Role and Impact of AI in the Future of Cyber Threat Intelligence and 3 Things to Investigate

AI is changing cyber threat intelligence at an unprecedented pace, but it's not a silver bullet.

If we don't use it right, we'll end up with bigger problems than the ones we started with.

In almost 10 years of working in LLM/AI and cybersecurity across industries, I've seen three major AI-related cybersecurity challenges rise to the surface every single time.


Challenge #1: AI is a Double-Edged Sword

A financial services client of mine learned this the hard way. They had AI-driven threat detection in place, but hackers used AI-generated deepfake audio to impersonate an executive, tricking an employee into wiring money. The breach wasn’t due to weak technology but to misplaced trust in AI as a complete solution.

The Fix: Balance AI with Human Expertise

AI can detect patterns and anomalies at scale, but it lacks intuition. Balancing automation with skilled human analysts who can think critically and detect AI-powered deception is key.

  • Train teams to recognize AI-generated threats (deepfakes, AI-written phishing emails, etc.).
  • Use AI as an enhancement to human expertise, not a replacement.
  • Keep humans in the loop for critical decisions—never rely on AI alone to determine risk (see the sketch below).
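
For example, the human-in-the-loop routing from the last bullet can be expressed as a small policy. This is a minimal Python sketch with hypothetical field names and thresholds, not a reference implementation:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str           # e.g. "email-gateway", "voice-verification"
    ai_confidence: float  # model's confidence that the event is malicious (0-1)
    impact: str           # "low", "medium", or "high" business impact

def route_alert(alert: Alert) -> str:
    """Decide who acts on an AI-flagged alert.

    Low-impact, high-confidence detections can be auto-contained; anything
    touching money movement or executive identity goes to a human analyst,
    no matter how confident the model is.
    """
    if alert.impact == "high":
        return "analyst-review"       # a human verifies before any action
    if alert.ai_confidence >= 0.95 and alert.impact == "low":
        return "auto-contain"         # safe to automate at scale
    return "analyst-triage-queue"     # everything else still gets human eyes

# A suspected deepfake voice request to wire funds always reaches an analyst:
print(route_alert(Alert(source="voice-verification", ai_confidence=0.70, impact="high")))
```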


Challenge #2: AI Creates Too Much Noise

One of AI’s most significant promises is helping security teams cut through the noise. But too often, AI tools flood teams with false positives, leading to alert fatigue and real threats slipping through the cracks.

I worked with a healthcare company that deployed an AI-driven security operations center (SOC). They thought it would reduce workload—but instead, the AI flagged everything as a threat. Analysts became so overwhelmed that they started ignoring alerts, and an actual attack went undetected.

The Fix: Tune AI Models Properly

AI is only as good as its training. If it’s producing too much noise, it’s not helping. Organizations must refine their AI systems to provide meaningful, actionable intelligence.

  • Regularly fine-tune AI models to minimize false positives.
  • Use feedback loops—train the AI with real-world outcomes to improve accuracy (see the sketch after this list).
  • Prioritize alerts based on context, not just raw anomaly detection.
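
A feedback loop like the one in the second bullet doesn't have to be elaborate. The sketch below assumes a hypothetical alert store and a simple scikit-learn classifier; the feature vectors and class name are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class FeedbackLoop:
    """Analysts label closed alerts; the labels become training data."""

    def __init__(self):
        self.features, self.verdicts = [], []   # analyst-labelled history
        self.model = LogisticRegression()

    def record_verdict(self, alert_features: list[float], is_real_threat: bool):
        self.features.append(alert_features)
        self.verdicts.append(int(is_real_threat))

    def retrain(self):
        """Periodically refit the model on real-world outcomes."""
        if len(set(self.verdicts)) < 2:
            return  # need both benign and malicious examples before fitting
        self.model.fit(np.array(self.features), np.array(self.verdicts))

    def score(self, alert_features: list[float]) -> float:
        return float(self.model.predict_proba([alert_features])[0, 1])

loop = FeedbackLoop()
loop.record_verdict([0.2, 1.0, 3.0], is_real_threat=False)  # labelled false positive
loop.record_verdict([0.9, 4.0, 7.0], is_real_threat=True)   # confirmed threat
loop.retrain()
print(round(loop.score([0.8, 3.5, 6.0]), 2))
```

The important part is the habit, not the algorithm: every analyst verdict flows back into the model so the false-positive rate drops over time.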


Challenge #3: Ethical & Regulatory Uncertainty

AI in cybersecurity isn’t just a technical challenge—it’s a legal and ethical minefield. Companies that fail to address these issues proactively risk lawsuits, regulatory fines, and reputational damage.

A retail company I worked with found this out the hard way. They used AI for fraud detection, but the model disproportionately flagged specific demographics, leading to accusations of bias. The fallout was legal and reputational—something they could have avoided with better oversight.

The Fix: Stay Ahead of Regulations

AI regulations are evolving fast, and organizations that don’t keep up will be in trouble. The best way to avoid ethical and legal pitfalls is to be proactive rather than reactive.

  • Implement bias audits—regularly check AI models for unintended discrimination (a simple audit sketch follows this list).
  • Stay informed on global AI and cybersecurity regulations (GDPR, NIST AI Risk Management, etc.).
  • Build transparency into AI decisions—document how models work and why they reach the conclusions they do.
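
For the bias-audit bullet, even a lightweight check of flag rates per group is a useful starting point. The function, group labels, and tolerance below are illustrative assumptions, not a complete fairness methodology:

```python
from collections import defaultdict

def audit_flag_rates(decisions, tolerance=0.05):
    """decisions: iterable of (group, was_flagged) pairs from the fraud model.
    Returns per-group flag rates, the largest gap, and whether it exceeds tolerance."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, was_flagged in decisions:
        total[group] += 1
        flagged[group] += int(was_flagged)
    rates = {g: flagged[g] / total[g] for g in total}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > tolerance

decisions = [("group_a", True), ("group_a", False), ("group_a", False),
             ("group_b", True), ("group_b", True), ("group_b", False)]
rates, gap, needs_review = audit_flag_rates(decisions)
print(rates, round(gap, 2), needs_review)  # a large gap means the model needs investigation
```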

Security isn't just about stopping hackers—it's about protecting trust. And if your AI isn't ethical, it isn't secure.

AI in Cybersecurity: The Path Forward

  • AI should complement human expertise, not replace it.
  • AI must be tuned correctly—too much noise is as bad as no AI.
  • AI must be managed ethically and legally—regulatory compliance isn’t optional.


Here are three things for you to consider:

1. How can organizations determine the right balance between AI and human expertise in cybersecurity?

Finding the right balance between AI and human expertise depends on an organization’s size, risk profile, and cybersecurity maturity. However, a good rule of thumb is to let AI handle tasks that require speed and scale, while humans focus on complex decision-making and strategic oversight.

Here’s a practical approach:

  • Automate routine tasks – Use AI for log analysis, anomaly detection, and alert prioritization. This frees up human analysts for deeper investigations.
  • Human oversight for critical decisions – AI can flag potential threats, but humans should verify and respond to high-risk alerts, such as suspected phishing attacks or fraud.
  • Train AI models with human expertise – Security teams should continuously provide feedback to improve AI accuracy, ensuring it aligns with real-world threats.

Example: A company using AI-powered threat intelligence should let AI scan network traffic for anomalies, but a human should investigate flagged incidents before action is taken. This prevents over-reliance on AI while maximizing efficiency.
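
A minimal sketch of that example, assuming scikit-learn's IsolationForest as the anomaly detector and made-up flow features: the model only queues anomalies for an analyst; it never blocks anything on its own.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per connection: [bytes_out_mb, duration_s, dst_port_rarity]
flows = np.array([
    [0.1, 2.0, 0.01], [0.2, 3.1, 0.02], [0.1, 1.8, 0.01],
    [0.3, 2.5, 0.03], [45.0, 600.0, 0.98],   # large transfer to a rare port
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(flows)
labels = detector.predict(flows)             # -1 = anomaly, 1 = normal

review_queue = [i for i, label in enumerate(labels) if label == -1]
print("Flows queued for analyst review:", review_queue)   # AI flags, a human decides
```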


2. What specific steps should companies take to fine-tune AI models effectively?

Fine-tuning AI models requires continuous monitoring and refinement to ensure they provide useful, actionable intelligence instead of excessive false positives.

Key steps for optimizing AI models:

  1. Regularly retrain AI with fresh data – Cyber threats evolve constantly. AI should be updated with new attack patterns, malware signatures, and real-world security incidents.
  2. Use a feedback loop – Security analysts should label false positives and false negatives to improve the AI’s accuracy over time. This process helps AI distinguish between real threats and harmless anomalies.
  3. Adjust thresholds dynamically – Instead of setting static rules, organizations should fine-tune detection thresholds based on historical data and threat context (see the sketch after this list).
  4. Deploy AI in layers – AI should work alongside existing security tools (SIEM, EDR, etc.), not in isolation. Integrating AI-driven insights with human-led investigation helps avoid unnecessary alerts.
  5. Test AI decisions regularly – Red teaming exercises can expose weaknesses in AI models. If AI consistently flags non-threats, it’s a sign that adjustments are needed.
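
To make step 3 concrete: a detection threshold can be derived from recent score history instead of a static rule. The sketch below assumes you already have anomaly scores from an existing detector; the target alert rate is an illustrative parameter:

```python
import numpy as np

def dynamic_threshold(recent_scores, target_alert_rate=0.01):
    """Set the alert cutoff from recent score history rather than a fixed rule:
    alert on roughly the top 1% of scores seen over the lookback window."""
    return float(np.quantile(recent_scores, 1.0 - target_alert_rate))

# Hypothetical anomaly scores from the past week of traffic
history = np.random.default_rng(0).normal(loc=0.2, scale=0.1, size=10_000)
threshold = dynamic_threshold(history, target_alert_rate=0.01)

new_score = 0.55
print(f"threshold={threshold:.2f}", "alert" if new_score > threshold else "suppress")
```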

By following these steps, organizations can prevent AI from becoming a liability while ensuring it provides real value.


3. What upcoming AI regulations should businesses be preparing for, and how can they stay compliant?

AI regulations are evolving rapidly, and businesses need to stay ahead of compliance to avoid legal and reputational risks. While global laws vary, here are some key regulations and trends to watch:

  • The EU AI Act – Expected to set strict rules on AI transparency, bias mitigation, and risk assessments. Organizations using AI for cybersecurity should ensure compliance, especially if they operate in Europe.
  • U.S. AI Executive Order (2023) – Calls for AI safety, security, and risk management standards, influencing cybersecurity frameworks like NIST’s AI Risk Management Framework.
  • GDPR & AI – The EU’s General Data Protection Regulation (GDPR) applies to AI models handling personal data. Companies must ensure AI-driven threat detection doesn’t violate privacy laws.
  • China’s AI Regulations – Includes strict content moderation and security risk assessments for AI applications, affecting multinational companies.

How to stay compliant:

  1. Conduct AI risk assessments – Regularly review AI models for bias, security risks, and compliance with data protection laws.
  2. Maintain transparency – Document how AI makes decisions and ensure explainability, especially for regulatory audits (a simple decision-log sketch follows this list).
  3. Follow industry frameworks – Adopting NIST’s AI Risk Management Framework or ISO/IEC 42001 AI governance standard can help companies align with global best practices.
  4. Engage legal and compliance teams early – AI governance should be a cross-functional effort, not just an IT concern.
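
As a sketch of the transparency point (item 2), every AI-driven decision can be appended to a simple audit log that captures the model version, the inputs considered, the score versus the threshold, and the resulting action. The file name and fields below are assumptions for illustration:

```python
import datetime
import json

def log_ai_decision(model_version, input_summary, score, threshold, action,
                    path="ai_decisions.jsonl"):
    """Append one auditable record per AI-driven decision."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,
        "score": score,
        "threshold": threshold,
        "action": action,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision(
    model_version="fraud-detector-2024.06",
    input_summary={"transaction_amount": 1250.0, "country_mismatch": True},
    score=0.91, threshold=0.85, action="held-for-review",
)
```

A plain append-only log like this is often enough to answer an auditor's first question: what did the model see, and why did it act?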

Regulations will continue evolving, but organizations that take a proactive approach now will be in a much stronger position when stricter laws take effect.
