Is your AI program built to last, or will it get "popped"?

Is your AI program built to last, or will it get "popped"?

The 'inevitable' AI Bubble is predicted to "Pop"


During my training workshop on Building your AI Strategy and Program with Cybersecurity for Executives and Leaders at the 2024 OWASP Lonestar Application Security Conference (LASCON), the response to my question, “Is AI all hype, or Is there hope?” was mixed.

Baidu’s CEO, Mr. Li, at the Harvard Business Review's Future of Business conference, speculated what he calls the 'inevitable' AI bubble will burst, washing out 99% of companies with fake innovations or products that don't have a market fit and only 1% will survive creating a lot of value. Is your company going to be one of them that creates value or will it pop? We have witnessed tech bubbles burst before, leaving behind slag that looks like a popped balloon in the corner after a kid’s birthday party - deflated and disappointing.

So, if you ask me, “What’s the solution?” I’d say, “Let us stop blowing bubbles in the first place!” I would add to Mr. Li's list of fake innovations or fake products - insecure AI - which will also cause the bubble to burst!

Learning to Swim After You’ve f-AI-len into the Pool? Risky Business!


As artificial intelligence (AI) spreads through everything from financial models to content creation, there’s a vital (but often overlooked) question: Is it secure? When security is an afterthought, it is akin to putting brakes on a car after it’s rolled off the lot. In fact, waiting to “bolt it on” later is a bit like learning to swim after you’ve fallen in the pool. It’s not just risky; it’s downright terrifying!

In my book, 7 Qualities of Highly Secure Software, I explore how security architecture should be baked in from the start, not applied with duct tape after the fact. Here, we’ll explore why security architecture is essential for AI, lest a hacker can “pop” your AI advantage with a shell, made possible due to insecure architecture allowing for excessive agency risks.

Hey, HAL 9000 and Skynet: Security Foundation First!


Imagine you’re building a skyscraper. The foundation isn’t just poured after the building is up - it’s central to the entire design. Similarly, security architecture is the foundation that supports secure and resilient AI systems. Executives might ask, “But why worry about security in AI specifically?” The simple answer is this: when AI applications interact with sensitive data, make decisions, or respond autonomously, they become magnets for risk.

AI that isn’t secure is like HAL 9000 from 2001: A Space Odyssey - except instead of just ignoring your commands, insecure AI could expose your data, fail regulatory standards, or worse, become a security risk itself. And, if we want to keep our organizations out of a “Skynet” scenario, we better have a solid security design from the start.

Alluring and Tempting: New Kids on the Block


These days, you’ll hear terms like “Agentic AI” and “Retrieval-Augmented Generation” (RAG) thrown around. They’re trendy but come with their own security considerations.

Agentic AI refers to systems designed to act autonomously and independently - a step toward creating “thinking” systems. It’s intelligent and adaptive, but left unguarded, it could mimic the “forbidden fruit” temptation: exposing or making risky decisions that leave an enterprise vulnerable. Strong governance, regular validation, and clear limits are necessary so that agentic AI operates within a controlled sandbox.

Then, there’s RAG architecture, where systems retrieve real-time data to generate responses or predictions. This architecture is like Sherlock Holmes consulting his archives before giving you an answer. It’s brilliant for keeping data fresh but also highly susceptible to attacks if security isn’t robust at every step. Imagine RAG as a librarian retrieving books - you want to ensure the information retrieved is reliable, accurate, and, most importantly, trustworthy.

From The Matrix to Moats and MoAIr: The Secure Foundation


One of my most memorable events was visiting the Vatican museums in Rome with my family. We were awestruck with the grandeur and beauty of Michaelangelo’s masterpiece frescos on the Sistine Chapel’s ceiling. Just as Michelangelo’s Sistine Chapel shows, a remarkable architectural feat requires both a strong foundation and sturdy walls, for without them, the exquisite beauty of the ceiling cannot be enjoyed. Likewise, an AI program without a strong foundation of security design principles may look impressive, but it will be exposed and vulnerable. Or, take J.R.R. Tolkien’s The Lord of the Rings as a lesson: an unfettered AI can be potent, but only when it’s wielded with wisdom and caution, lest it become a “One Ring to Rule Them All” situation.

The most secure AI systems will implement security principles in their design. Some of these principles are discussed below.

Least Privilege (Keeping AIgent Smith in Check)

This means agents or models only access what’s absolutely necessary. Imagine if Agent Smith from The Matrix could jump into bodies only with explicit permission. It will keep him in check - just like the Least Privilege principle will keep AI from accessing more systems or data than it needs. Implement role-based or resource-based access control across your AI models and data so that each AI agent only accesses necessary resources. Unless absolutely necessary, AI models should be designed to access only generalized, anonymized, or tokenized data to limit data exposure.

Separation of Duties (AI Buddy System)

No single component should have all the control. Think of it like a buddy system. One agent makes the decision; another reviews it. That’s why in well-architected secure AI, one function handles data, another verifies requests, and yet another validates outputs. This layered approach creates solid checks and balances. This also reduces the risk of a single-component failure. Additionally, for tasks such as modifying model hyperparameters or deploying updates, implementing approval workflows can protect against model poisoning or AI supply chain risks.

Fail-Secure (Batman’s Backup Plan, Just in Case)

If something goes wrong, the AI should “fail” in a secure state. It’s the equivalent of Batman’s “If I go rogue, here’s the plan” protocol. AI can go haywire, resulting in misogynistic chatbots, poisoned LLMs, and even AI sleeper agents; fail-secure ensure the damage is contained and mitigated. Program your RAG-based systems and AI models to abort processing and revert to a secure state when (not if) errors or inconsistencies are detected. Centralized logging and auditing to capture and monitor key events come in handy to detect anomalous behaviors of AI systems, which should invoke fail-secure controls immediately.

Defense in Depth (MoAIts, DrAIwbridges, AIrchers…)

Every layer - from exploratory data analysis, data preparation, data access, model development, model deployment, and MLOps - must have its own security controls. It’s like a castle bolstered with multiple defenses - moats (perimeter controls like firewalls and access controls), drawbridges (secure channels), walls (network segmentation), well-positioned archers (IDS and IPS), and keep (secure repositories), and more, so that when one is breached, attackers aren’t guaranteed success.

In my book, “The Official (ISC)2 Guide to the CSSLP CBK”, which helps professionals get their Certified Secure Software Lifecycle Professional (CSSLP) certification, I discuss other security principles such as economy of mechanisms, complete mediation, open design, least common mechanisms, and psychological acceptability. All these should be built into your AI program to securely architecture AI systems that won’t pop like a bubble at the slightest technological changes or threats in this emerging landscape.

As sagaciously penned, “The wise builders build their house with a deep, strong foundation laid on rock.” (Luke 6:47-49). Think of your security architecture as that strong and rock-solid foundation, without which we will be left with a house that will not be strong enough to withstand the assailing torrents of a cyberattack - one that will fall and fail us.

The finAI_ word – Avoid Bubble Trouble


Let us not be like the yellow tang fish from Finding Nemo, mesmerized by every shiny “AI bubble” that pops up. Remember, those bubbles can turn green fast, polluting the tank and making it hazardous for everyone involved. Just as the fish clung to the treasure chest, shouting, “My bubbles!” we, too, risk becoming enamored with fleeting AI hype instead of securing our systems for the long haul.


Our focus should be on architecting reliable, recoverable, and resilient AI systems that assure trust and have needed security controls to protect us against AI risks. A secure architecture is more than a foundation; it is the bedrock that ensures our AI systems won’t crumble under pressure or leave us gasping for air when the next tech bubble inevitably bursts. So, let us architect our AI applications and systems with secure design principles so that our AI systems and applications will garner us a sustainable competitive advantage and not be a fragile bubble that can get "popped" anytime.


PS:

If you liked this article and found it helpful, please comment and let me know what you liked (or did not like) about it. What other topics would you like me to cover?

NOTE: I covered only at a high level some of these essential elements of Security Architecture as they apply to AI. If you need additional information or help, please reach out via LinkedIn Connection or DM and let me know how I can help.

#SecureAIArchitecture #SecurityArchitecture #AISecurity #MLSecurity #SecuringAI #AICyber #HackingAI

Works Cited


“Only 1% of Companies Will Thrive after “Inevitable” AI Bubble | Robin Li at HBR’s Future of Business.” YouTube, 17 Oct. 2024, www.youtube.com/watch?v=mtbBXwoOkDk.

Paul, Mano. The 7 Qualities of Highly Secure Software. CRC Press, 2012.

---. Official (ISC)2 Guide to the CSSLP CBK. Boca Raton, Fl, Crc Press/Taylor & Francis Group, 2014.

Kubrick, Stanley, and Arthur C Clarke. “2001: A Space Odyssey.” IMDb, 12 May 1968, www.imdb.com/title/tt0062622.

Cameron, James, et al. “The Terminator.” IMDb, 26 Oct. 1984, www.imdb.com/title/tt0088247.

Tolkien, J. R. R., et al. “The Lord of the Rings: The Fellowship of the Ring.” IMDb, 19 Dec. 2001, www.imdb.com/title/tt0120737.

Reeves, Keanu, et al. “The Matrix.” IMDb, 31 Mar. 1999, www.imdb.com/title/tt0133093.

“OWASP Machine Learning Security Top Ten 2023 | ML10:2023 Model Poisoning | OWASP Foundation.” Owasp.org, owasp.org/www-project-machine-learning-security-top-10/docs/ML10_2023-Model_Poisoning.

Roose, Kevin. “The Year Chatbots Were Tamed.” The New York Times, 14 Feb. 2024, www.nytimes.com/2024/02/14/technology/chatbots-sydney-tamed.html.

“PoisonGPT: How to Poison LLM Supply Chainon Hugging Face.” Mithril Security Blog, 9 July 2023, blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/.

Hubinger, Evan, et al. “Sleeper Agents: Training Deceptive LLMs That Persist through Safety Training.” ArXiv.org, 17 Jan. 2024, arxiv.org/abs/2401.05566.

“Bible Gateway Passage: Luke 6:47-49 - King James Version.” Bible Gateway, 2015, www.biblegateway.com/passage/?search=Luke%206%3A47-49&version=KJV.

Bubbles, My. “My Bubbles! Finding Nemo.” YouTube, 3 Jan. 2011, www.youtube.com/watch?v=s7IYR_rELyE.


Aaron Severance, MBA, CCSP, CCSK

US Security and Resiliency Practice Leader - Security is no longer just keeping the bad guys out… Zero Trust!

4 个月

Thanks for keeping the spotlight on the need for secure foundations, regardless of the emerging tech toy. I think foundational Ai/ML will have staying power, that’s already shown, but Gen AI needs to prove itself in robust business applications outside of chatbots and content creation. For those running any form of AI adoption committee, I try to remind them, once you define your foundational use case, start with the data and build from there. Too many models are being unleashed to gobble up data without regard toward or enforcement of existing data lifecycle and security policies. These models become discoverable and exploitable data points. That should be where the security foundation starts. Thanks for trying to keep us on our toes!

要查看或添加评论,请登录

Mano Paul, MBA, CISSP, CSSLP的更多文章

社区洞察

其他会员也浏览了