2025 Cybersecurity and AI Predictions
Jason Lau, CISO
ISACA Board of Directors, Advisory Boards, CISO, CSO30, Adj Prof, Forbes Tech Council, World Economic Forum, Ex-Microsoft Cybersecurity Advisor; CISSP, CGEIT, CRISC, CISM, CISA, CDPSE, CEH, FIP, CIPP/E, CIPM, CIPT, HCISPP
[5-10 Minute Read]
The cybersecurity and AI landscape continues to evolve at a breathtaking pace, and with it, the associated risks. Cybersecurity Ventures projects that cybercrime will cost the world $10.5 trillion annually by 2025, up from $3 trillion in 2015. Compounding the challenge is a cybersecurity workforce gap of nearly 4.8 million professionals, as reported by ISC2, while ISACA's end-of-2024 State of Cybersecurity Report shows that nearly half of those surveyed report no involvement in the development, onboarding, or implementation of AI solutions.
This raises a critical question: will AI help close this gap or inadvertently amplify the cybersecurity challenges ahead?
Building on my predictions from 2024 (many of which will continue to be risks this year), I have identified a selection of prominent threats for 2025, focusing on operational security risks and the evolving challenges posed by AI. While many noteworthy threats are inevitably omitted, these predictions aim to highlight what I feel are the most pressing concerns shaping the cybersecurity and AI landscape.
Disclaimer:
1) All predictions and perspectives are my own.
2) Added some flavour with AI-generated images I personally made on Midjourney.
3) Added a touch of original AI-generated inspiration in the form of a quote based on the text of each prediction, generated by different LLMs but using the same underlying persona, merging a handful of philosophical leaders of our time for fun :)
Let's see how the quotes compare in grasping the essence of each prediction.
1. Are We Ready for CrowdStrike 2.0?
Reflecting on the most impactful incident of 2024, there was considerable debate about whether it was a technical failure or a security incident. Regardless, one critical takeaway is the precarious reliance many companies, and even nations, have on single vendors or systems. This dependence heightens the risk of a cascading global denial-of-service event triggered by a single vulnerability. Managing resilience is far from simple; those working on the front lines understand the immense practical and financial challenges involved. Is the solution to invest heavily in complex backup systems and the ability to switch to alternative vendors at a moment's notice, or should we shift focus towards identifying, reacting to, and resolving issues faster? Without trying to be too controversial, perhaps agility in some situations, being able to adapt and fix swiftly, is a more practical and sustainable approach than over-engineering complex redundancy.
"As the future grows more uncertain, resilience lies in agility—adapt swiftly and avoid over-investing in costly backup plans, which are often flawed. The path ahead is a river, not a roadmap." - ChatGPT v4o
PREDICTION: Another large-scale event, similar to what we experienced in 2024, is almost certain to happen. While it may not be CrowdStrike next time, the incident is likely to stem from another security vendor's vulnerability. Hackers have likely learned from the CrowdStrike disruption and the domino effect it caused, and they know that these tools often need deep and broad access to an organization's network and end-user devices. Expect significantly longer downtime and more challenging patches in 2025.
2. The Silent Threat of AI Browser Plugins
AI plugins, while enhancing productivity, often carry hidden risks by bypassing traditional security controls. These vulnerabilities arise when plugins appear to perform their intended functions but also execute covert actions in the background. For instance, in the crypto industry, fake wallet plugins have been used to scam users by capturing sensitive data during digital wallet connections or through clipboard monitoring. With the rise of AI agents, even benign-looking plugins for spellchecking, grammar correction, or generative AI writing may inadvertently expose confidential information or create a gateway for malware. Attackers can leverage these plugins to gain unauthorized access or covertly extract information over time.
Organizations must adopt proactive measures, including rigorous vetting of plugins similar to comprehensive vendor risk assessments (VRAs). From an operational perspective, a stronger defense involves enforcing corporate-managed browsers, blocking all plugins by default, and approving only verified plugins through a controlled whitelist. Additionally, organizations should exercise caution with open-source plugins.
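To make the default-deny approach concrete, here is a minimal sketch, assuming a Chrome-managed fleet, that generates an enterprise policy blocking all extensions and allowlisting only vetted ones. The extension ID and output path are placeholders for illustration; in practice these policies are pushed through your MDM or Group Policy tooling rather than written by hand.

```python
import json

# Placeholder IDs for extensions your security team has vetted and approved.
APPROVED_EXTENSIONS = [
    "aapbdbdomjkkjkaonfhkkikfgjllcleb",  # example ID only; substitute your own
]

# Chrome enterprise policy: deny every extension by default,
# then allow explicit exceptions from the vetted list.
policy = {
    "ExtensionInstallBlocklist": ["*"],
    "ExtensionInstallAllowlist": APPROVED_EXTENSIONS,
}

# Write the policy file; on Linux, managed Chrome reads JSON policies from
# /etc/opt/chrome/policies/managed/ (adjust for your platform or MDM).
with open("extension_policy.json", "w") as f:
    json.dump(policy, f, indent=2)

print(json.dumps(policy, indent=2))
```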
"In our quest for innovation, let us remain vigilant; for every tool that enhances our capabilities may also harbor unseen risks. By embracing rigorous scrutiny and safeguarding our digital realms, we ensure that progress does not come at the cost of our security or integrity." - Claude 3.5 Sonnet
PREDICTION: At the time of writing, it was announced that around 16 Chrome extensions had been compromised, exposing over 600,000 users to potential risks. This is just the beginning, and I expect it to get exponentially worse in 2025-2026, mainly stemming from the growth of AI plugins. Do you truly have full control of browser plugin risks in your organization? If not, you had better get started.
3. Agentic AI Risks: Rogue Robots
The growth of Agentic AI, systems capable of autonomous decision-making, presents significant risks as adoption scales in 2025. Companies and staff are becoming eager to deploy agentic-AI bots to streamline workflows and execute tasks at scale, but the potential for these systems to go rogue is a looming threat. Adversarial attacks and misaligned optimization can turn these bots into liabilities. For example, attackers could manipulate reinforcement learning algorithms to issue unsafe instructions or hijack feedback loops, exploiting workflows for harmful purposes. In one scenario, an AI managing industrial machinery could be manipulated to overload systems or halt operations entirely, creating safety hazards and operational shutdowns. We are still at the very early stages of this, and companies need rigorous code reviews, regular pen-testing, and routine audits to ensure the integrity of these systems; if not, these vulnerabilities could cascade and cause significant business disruption. The International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST), through its AI Risk Management Framework, offer good frameworks to follow, as does ISACA with its AI audit toolkits; expect more content in 2025.
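To illustrate one practical control, here is a minimal, hypothetical sketch of a guardrail pattern: a validation layer that checks every action an agent proposes against an explicit allowlist and safety bounds before execution. The action names and limits are invented for this example; real deployments would add logging, human-in-the-loop approval, and the audits mentioned above.

```python
from dataclasses import dataclass

# Hypothetical action vocabulary and safety limits; tailor to your workflows.
ALLOWED_ACTIONS = {"set_throttle", "pause_line"}
MAX_THROTTLE = 0.8  # never let the agent push machinery past 80% capacity

@dataclass
class AgentAction:
    name: str
    value: float = 0.0

def validate(action: AgentAction) -> AgentAction:
    """Reject anything outside the allowlist or safety bounds."""
    if action.name not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action.name}' is not allowlisted")
    if action.name == "set_throttle" and not 0.0 <= action.value <= MAX_THROTTLE:
        raise ValueError(f"Throttle {action.value} exceeds safety bound {MAX_THROTTLE}")
    return action

def execute(action: AgentAction) -> None:
    validate(action)  # the guardrail sits between the agent and the real world
    print(f"Executing {action.name}({action.value})")

execute(AgentAction("set_throttle", 0.5))      # within bounds: allowed
try:
    execute(AgentAction("set_throttle", 1.5))  # unsafe proposal: blocked
except ValueError as err:
    print("Blocked:", err)
```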
"In the pursuit of progress, let us remain vigilant: adapt wisely, act ethically, and ensure our creations serve humanity, lest we become victims of our own innovations." - Grok-2 (xAI)
PREDICTION: Rogue Agentic AI incidents will dominate headlines in 2025, alongside more and more use cases demonstrating the efficiency gains of properly implemented agentic-AI workflows. However, expect a few major headlines where this has gone very wrong and completely rogue. Let's hope mechanical robots will not misinterpret instructions and self-rationalize the need to injure humans.
4. The AI Hardware Chip War
The mainstream discourse around AI risks often overlooks the foundational importance of hardware, particularly AI chips. These chips are integral to running advanced AI algorithms, but they come with their own set of vulnerabilities and geopolitical risks. Sanctions and supply chain restrictions can impact access to high-performance chips, with adversarial nations leveraging counterfeit or compromised components. In theory, security risks also arise from on-chip controls, where attackers could exploit design flaws to gain unauthorized access or alter computation outcomes.
Recent insights from the Federal News Network reveal how AI chips are increasingly becoming attack vectors due to inadequate firmware protections, and how the general lack of standardization in securing AI-specific hardware leaves critical gaps in security practices. Adding to these concerns, the STAIR Journal highlighted the risks of on-chip AI hardware controls, where backdoor implementations could enable unauthorized remote access, posing severe threats to operational integrity and data security.
"To build a future of intelligent machines, we must first fortify the silent foundations—AI chips—for in their strength lies our security, and in their vulnerability, our greatest risk. Let us harmonize innovation with vigilance, for the path to progress is paved with both creation and protection." - DeepSeek v3
PREDICTION: The hardware chip war will escalate in 2025, driving nations and organizations to find alternative and inventive ways to stay competitive with the tools they have at hand. We are already seeing this with DeepSeek challenging the big players using chips and systems that are a fraction of the cost and performance of the leading hardware.
5. Digital Deception: Beyond Deepfakes
Digital deception is evolving rapidly, far surpassing traditional deepfakes. Generative AI tools expose vulnerabilities as attackers manipulate systems to create convincing but harmful outputs. For example, AI could be exploited to generate false medical advice or fraudulent business communications, blurring the line between real and fake content. Invisible text and cloaking techniques hidden in web content further complicate detection, distorting search results and adding to the challenge for security teams.
Be careful where some vendors (and perhaps your own internal tech teams) are simply bolting public large language models (LLMs) onto your systems through APIs, prioritizing speed-to-market over robust testing and private instance setups. Sensitive data may inadvertently flow into training pipelines or be logged in third-party LLM systems, leaving it potentially exposed. Don't be deceived into assuming all checks and balances have been done.
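One mitigation is to scrub sensitive data before a prompt ever leaves your perimeter. Below is a minimal sketch, assuming a simple regex-based approach; the patterns are illustrative only, and a production system would use a dedicated DLP or named-entity-recognition layer instead.

```python
import re

# Illustrative patterns only; real deployments need a proper DLP/NER pipeline.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(prompt: str) -> str:
    """Replace obvious identifiers before the prompt is sent to an external LLM API."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
print(redact(raw))  # Customer [EMAIL] paid with card [CARD_NUMBER].
```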
Meanwhile, advances in text-to-video technology and high-quality deepfakes are making it increasingly difficult for security and compliance teams to differentiate genuine content from manipulated media during Know Your Customer (KYC) checks. While 2024 saw these tools used mostly for humor on platforms like Instagram and X, 2025 will bring significant breakthroughs in video-deepfakes, escalating risks for targeted scams, reputational attacks and fake news.
"In a world where authenticity is under siege, cultivate clarity and act with integrity; for it is through sincere actions and truthful expressions that we safeguard our reality from the shadows of deceit." - Sonar Huge (Based on Llama 3.1 405B)
PREDICTION: The rise of AI-powered digital deception will fuel misinformation, fraud, and scams in 2025, reaching further into our day-to-day lives. I encourage all of you to agree challenge-response secrets with your loved ones, known only to you, so you can truly verify the identity of the person you are talking to.
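For the curious, this is the same challenge-response pattern used in authentication protocols: the shared secret is never revealed, only proof of knowing it. A minimal sketch of the idea, using an HMAC over a fresh random challenge (the secret value here is, of course, a placeholder):

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"our-family-codeword"  # agreed in person, never sent in the clear

def challenge() -> bytes:
    return secrets.token_bytes(16)  # fresh random nonce for every verification

def respond(secret: bytes, nonce: bytes) -> str:
    return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

def verify(secret: bytes, nonce: bytes, response: str) -> bool:
    return hmac.compare_digest(respond(secret, nonce), response)

nonce = challenge()
resp = respond(SHARED_SECRET, nonce)       # caller proves knowledge of the secret
print(verify(SHARED_SECRET, nonce, resp))  # True; an impostor cannot produce this
```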
6. AI Regulation: The Next Compliance Challenge
The European Union's AI Act is set to transform global regulations, much like the General Data Protection Regulation (GDPR) did in 2018. While GDPR focused on data privacy, the AI Act addresses the broader challenge of governing AI systems, categorizing them by risk levels and imposing strict requirements on high-risk applications, including transparency, documentation, and human oversight.
What makes the AI Act particularly impactful is its global reach. Businesses interacting with the EU market must align their AI practices with these rules. South Korea, with its AI Basic Act, is already following suit, echoing the EU's emphasis on transparency, accountability, and ethical AI use. This marks the start of a global shift toward unified AI regulations. The consequences of poorly governed AI go beyond fines, potentially including systemic failures, discriminatory outcomes, and reputational harm.
"To navigate the evolving landscape of AI, let us cultivate a spirit of responsibility and clarity; for in the harmony of regulation and innovation lies the foundation of trust and progress." - OpenAI o1
PREDICTION: Businesses will face considerable challenges navigating the complexity of the AI Act, much like the early struggles with GDPR. Key issues such as AI ethics, bias mitigation, and accountability will remain ambiguous, creating operational hurdles for legal, compliance, and privacy teams as they attempt to translate regulatory requirements into technical controls. Compounding this is the rapid pace of AI adoption, which will leave many organizations grappling to balance speed with compliance.
7. Signal in the Noise: No More Secrets?
Hackers are increasingly targeting both synthetic data and machine learning models, exposing vulnerabilities that compromise privacy and intellectual property. Synthetic data, often heralded as a privacy-preserving alternative to real data, can inadvertently reveal underlying patterns or biases if poorly implemented. For example, adversaries might reverse-engineer synthetic datasets to infer sensitive information or inject malicious biases during creation. In parallel, attackers are building surrogate models by querying proprietary AI systems, extracting sensitive training data or mimicking the original model's behavior. Research is already under way on how monitoring the characteristics of multiple streams of pseudo-anonymised data (and perhaps even anonymised data) could allow AI to reconstruct the source PII, with examples like patient re-identification through medical chest X-ray data.
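To illustrate the surrogate-model attack in its simplest form, the toy sketch below treats a "proprietary" classifier as a black box, harvests its predictions on probe inputs, and trains a local clone on the answers. It uses scikit-learn and synthetic data purely for illustration; real attacks differ mainly in scale, stealth, and query budget.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a proprietary model the attacker can query but not inspect.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

# Attacker: generate probe inputs and harvest the victim's predictions...
probes = np.random.RandomState(1).normal(size=(5000, 10))
stolen_labels = victim.predict(probes)

# ...then fit a surrogate that mimics the victim's decision boundary.
surrogate = DecisionTreeClassifier(max_depth=8).fit(probes, stolen_labels)

agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"Surrogate agrees with the victim on {agreement:.1%} of inputs")
```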
"Embrace change with discernment, for in the chaos lies the opportunity to reveal truth and safeguard integrity." - Claude 3.5 Haiku
PREDICTION: Expect 2025 to be the year where AI is further used to uncover hidden data by observing the characteristics of a dataset or system. While this may seem vague and far-fetched, there is already talk of it in IEEE's current edition, headlined "The Race to Save Submarine Stealth in an Age of AI Surveillance". AI's ability to find the signal in the noise could greatly accelerate the uncovering of secrets.
Conclusion: The Path Ahead for 2025
2025 promises to be a transformative yet challenging year, with AI and cybersecurity set to dominate the landscape. Whether through innovative applications or the natural progression toward Artificial General Intelligence (AGI), 2025 will be marked by both groundbreaking advancements and significant risks. Siloed datasets will increasingly converge, uncovering new truths without the need for breaking encryption, from tracing transaction flows through crypto tumblers/mixers to breakthroughs in healthcare. Imagine identifying early, subtle patterns in seemingly unrelated medical symptoms, providing critical clues for early disease detection. Yet, on the flip side, this same convergence of data will empower hackers to aggregate years of harvested breach datasets, along with content from the Dark Web, into highly detailed company profiles for exploitation.
As AI and cybersecurity evolve at an unprecedented pace, the need to experiment, learn, and adapt has never been greater. Understanding these technologies hands-on is essential to identifying both opportunities and risks. To conclude, I’ll borrow the words of Thomas Huxley, a passionate advocate for Darwin’s Theory of Evolution and scientific literacy:
"Try to learn something about everything and everything about something." — Thomas Huxley
In 2025, this advice couldn't be more relevant. The "learn everything" part should be all around AI. Dive into AI, understand its potential, and arm yourself with the knowledge and hands-on skills to stay ahead of its rapid evolution, or be left behind.
Best wishes for the new year!
Stay Safe. Stay Secure. Always Verify.
Jason
Disclaimer: The opinions and insights expressed here are solely my own and do not reflect the views of any affiliated organisations.
Professor Jason Lau, CISO, is the Chief Information Security Officer at Crypto.com, holds a seat on the global ISACA Board of Directors, serves as Vice Chair of its Innovation and Technology Committee, and contributes to the Forbes Technology Council, CSO Security Council, and World Economic Forum.
With over 24 years of global experience in cybersecurity and data privacy, Jason strives to demystify the complexities of cybersecurity, and explore the intersection of cybersecurity and artificial intelligence.
Subscribe to Jason's Newsletter to follow emerging industry updates.