Is the Force (F) of your AI impact proportional to the mass (m) of your Security Controls and the acceleration (a) of your Innovation? (F = ma)
Mano Paul, MBA, CISSP, CSSLP
CEO, CTO, Technical Fellow, Cybersecurity Author (CSSLP and The 7 Qualities of Highly Secure Software) with 25+ years of Exec. Mgmt., IT & Cybersecurity Management; Other: Shark Researcher, Pastor
Introduction
One of the qualities of highly secure software that I write about in my book, The 7 Qualities of Highly Secure Software, is for security to be balanced - striking the right balance between risk and reward, functionality and assurance, and exposure and the controls that mitigate threats. In today’s AI world, I would add to that list balancing AI’s impact with the speed of innovation and the robustness of security.
Leveraging AI to solve our business problems or automate manual operations is like upgrading from a horse-drawn carriage to an autonomous self-driving automobile. One could, in fact, align with the skeptics who popularized the phrase ‘Get a horse!’, arguing that a horse-drawn carriage was safer than a horseless carriage (an early term for the automobile). AI can streamline our business processes, improve and reinforce human decision-making, and optimize the expectations and experiences of our customers and stakeholders. But many of us get behind the wheel of a brand-new car and start driving without ever reading the manual. Driving without understanding how the car can break down is like implementing AI solutions without robust cybersecurity measures - it may run fine today, but it invites downtime and disaster later.
From neural networks, agentic or predefined rules-based AI applications, and Retrieval-Augmented Generation (RAG) to privacy-enabling Federated Learning, Secure Multi-Party Computation (SMPC), and ML algorithmic bias, the technical architecture of AI is contextually unique, intricate, and sophisticated. However, just as a complex engine can fail without proper maintenance, so too can AI systems fall prey to vulnerabilities if security isn’t woven into their very design.
Remember the story of the Three Little Pigs? If only the first two pigs had been like their hard-working and intelligent sibling, they would have fortified their homes with strong, ‘weighty’ cybersecurity bricks; their houses would still be standing, and the big bad wolf would never have gotten in.
Security Strategies: Buckle Up!
We need robust strategies to secure AI - strategies that can withstand emerging and unprecedented threats against AI data, algorithms, and models. Leaders must recognize the importance of promoting a security-first culture as they implement a comprehensive defense-in-depth strategy across the organization.
Effective security strategies are like seatbelts and airbags - they can save your business during a cyber crash, aka a hack. This requires both strategic vision and tactical execution: training our teams to spot vulnerabilities like a hawk eyeing a mouse and to fix them as soon as possible. AI security strategies should include establishing a foundation of Governance to manage risks and compliance, securing the AI (product) development lifecycle, and educating and empowering your team to earn and maintain the trust that companies run on.
Governance, Risk, and Compliance: The GPS and Controls Foundation of AI
Unfortunately, anyone who has had to demonstrate regulatory compliance, especially as a member of the security team dealing with regulators, may sometimes feel like an FBI agent (say, Samuel L. Jackson) transporting a witness (say, Sean Jones) on a plane fraught with slithery reptilian danger. This is often the case because Governance, Risk, and Compliance (GRC) requirements are a checklist for demonstrating due diligence - attesting to the existence of controls, not necessarily their effectiveness. However, if we were to take the perspective of Bruce, the great white shark from “Finding Nemo,” and think of auditors and lawyers as friends who can help us make our AI applications more secure, the narrative would change.
In this fast-paced environment, GRC acts as our GPS, guiding us through the twists and turns of AI development. Governance and compliance requirements are like speed limits, which help keep us safe. And, by the way, a speed limit is meant to be the zenith of your speed, while many of us treat it as the nadir - the starting point - hoping to get away with it if pulled over (not admitting to anything here). Without these safety control boundaries, you can drive a car - or let the car drive you - at a speed that endangers you and others. GRC helps us understand the boundaries so we don’t get ourselves into danger, treating a roundabout like the Bundesautobahn.
When assessing risks, we need to identify and address the specific threats and vulnerabilities pertinent to AI. By implementing controls that can adapt to changing environments, we can proactively address emerging threats with adeptness and agility. Risk assessments help us identify the gaps in our fences that would let bears get to our honey (datasets, models, and AI code).
Securing the AI (Product) Development Lifecycle
The AI lifecycle primarily includes four phases - business alignment, data engineering, model engineering, and AI operations - and each phase requires security controls to address its particular threats.
- Business Alignment: misalignment with business goals can lead to wasted resources on irrelevant AI projects. To address this risk, companies should conduct periodic stakeholder reviews and establish confidentiality, integrity, and availability success criteria.
- Data Engineering: threats like data poisoning can arise from a lack of access control or a disgruntled insider. Robust data validation, cryptographic protection of sensitive data, and auditing of data access can help mitigate this threat (see the first sketch after this list).
- Model Engineering: as the model is architected and developed, a lack of proper input validation and inadequate testing for adversarial inputs can result in input manipulation or model evasion attacks. Threat modeling the AI architecture and application is very useful for understanding trust boundaries and chinks in the armor during the data and model engineering phases.
- AI Operations: threats like model drift can lead to decreased accuracy over time. Secure ML Operations (MLOps) - model replacement, or retraining and redeployment with appropriate versioning - can mitigate the threats that arise during this phase (see the second sketch after this list).
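To make the Data Engineering controls concrete, here is a minimal Python sketch of what robust data validation and cryptographic integrity checking might look like before a dataset is allowed into training. It is illustrative only - the column names, schema, trusted hash, and range check are hypothetical placeholders, not a prescription.

import hashlib
import pandas as pd

# Hypothetical schema and approved-dataset hash; record the real hash when the dataset is vetted.
EXPECTED_COLUMNS = {"age": "int64", "income": "float64", "label": "int64"}
TRUSTED_SHA256 = "replace-with-hash-recorded-when-the-dataset-was-approved"

def sha256_of_file(path: str) -> str:
    # Compute the SHA-256 digest of a file in streaming fashion.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_training_data(path: str) -> pd.DataFrame:
    # 1. Integrity: has the file changed since it was approved? (tampering/poisoning check)
    if sha256_of_file(path) != TRUSTED_SHA256:
        raise ValueError("Dataset hash mismatch - possible tampering or poisoning.")
    df = pd.read_csv(path)
    # 2. Schema: do the columns and types match what the model expects?
    for col, dtype in EXPECTED_COLUMNS.items():
        if col not in df.columns or str(df[col].dtype) != dtype:
            raise ValueError(f"Schema violation on column '{col}'.")
    # 3. Range checks: reject obviously out-of-distribution records before training.
    if not df["age"].between(0, 120).all():
        raise ValueError("Out-of-range values in 'age' - investigate before training.")
    return df

Pair checks like these with access controls and audit logging on the data store itself, so a disgruntled insider cannot silently swap out the approved dataset.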
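Similarly, for the AI Operations phase, a lightweight statistical monitor can flag drift before accuracy quietly degrades. Below is a hedged Python sketch using the Population Stability Index (PSI); the 0.2 alert threshold is a common rule of thumb rather than a standard, and the data is synthetic for demonstration.

import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    # PSI between a baseline (training-time) sample and a recent (production) sample of one feature.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so recent values outside the baseline range still land in a bin.
    edges[0] = min(edges[0], recent.min())
    edges[-1] = max(edges[-1], recent.max())
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # A small epsilon avoids division by zero and log(0) in empty bins.
    eps = 1e-6
    base_pct = np.clip(base_pct, eps, None)
    recent_pct = np.clip(recent_pct, eps, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Stand-ins for training-time scores and live production traffic.
baseline_scores = np.random.normal(0.0, 1.0, 10_000)
live_scores = np.random.normal(0.4, 1.2, 2_000)
psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # rule-of-thumb threshold for significant drift
    print(f"PSI={psi:.3f}: drift detected - schedule retraining and a version bump.")

When the monitor fires, the MLOps pipeline - not a human scrambling at 2 a.m. - should kick off retraining, evaluation, and a versioned redeployment.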
As any seasoned parent knows, sometimes a little precaution goes a long way—ask anyone who has had to remove a Lego block stuck in a vacuum cleaner.
Collaboration and People Development: Empowering your Team
Security is not a solo endeavor; it requires collaboration. By nurturing a culture of collaborative teamwork and communication, we can develop our people and ensure everyone - from the C-suite to the coder, from the boardroom to the builder - understands their role in securing AI. The Security team’s message to the rest of the business should be in the spirit of “Toy Story”: “You’ve got a friend in security!”
The success of any project, like that of any game, is directly proportional to the team’s focus on the mission to win and the cohesiveness of its gameplay.
I go to watch my son, Ittai, play his middle-school football game every week. As we reflect after each game, the pattern that invariably emerges is that when each player knows what is expected of them and delivers, the team wins. Additionally, when a player is unwell, Ittai has to play on both the offensive line and the defensive line. This means he has to learn the plays well in advance and adapt to the game when called upon. In like manner, our teams must be equipped to think like hackers and act like defenders - knowing how somebody can exploit AI threats and the mitigating controls that can protect us against them. AI teams should know both offense and defense.
Investing in team development by educating your team about AI security - how the hacker plays offense and how you can play defense - is just as important as your investment in business and/or model development.
Security-first Culture: A Solid Foundation, Not One of Sand
Cultivating a balanced security-first culture is a paramount step that will help us adapt and thrive in this dynamic AI technological landscape. Just as a wise man does not build his house upon a foundation of sand (Matthew 7:24-27), innovation without security is like a house built on sand - no matter how beautiful it may look, it won’t withstand adversarial hacker storms. What we need is not pretty AI systems but protected AI systems.
The finAI word
In the progressive world of AI innovation, securing our AI implementations is not a nice-to-have but essential - much like wearing a seatbelt in a speeding car, especially if it is self-driving. Just as Newton’s second law of motion (F = ma) highlights the relationship between force, mass, and acceleration, we must recognize that the force of your AI impact is directly proportional to the mass of robust security controls and the acceleration of your innovation. Without that security mass, our AI initiatives risk spiraling out of control, akin to driving a high-speed car without proper safety measures. By incorporating security controls by default into each phase of the AI product development lifecycle, we can sustain the speed of innovation while mitigating threats and managing risks. Let us embrace our role as guardians of both innovation and security, steering and accelerating our companies toward success and maximizing our impact with a sustainable advantage.
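To make the analogy explicit - purely illustrative, not a law of physics - the thesis of this article can be written as:

\[
  F = m\,a
  \quad\Longrightarrow\quad
  F_{\text{AI impact}} \;\propto\; m_{\text{security controls}} \times a_{\text{innovation}}
\]

In other words, with little security mass, even tremendous innovation acceleration produces little sustainable force of impact.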
Note: The cover image of this article maps most threats and controls by phase. I would welcome your feedback and thoughts. If you have suggestions on how I can make it more comprehensive, please comment.
PS:
If you liked this article and found it helpful, please comment and let me know what you liked (or did not like) about it. What other topics would you like me to cover?
NOTE: I covered some of these essential elements of Secure AI Lifecycle management only at a high level. If you need additional information or help, please reach out via LinkedIn Connection or DM and let me know how I can help.
#SecureAILifecycle #SecureAIDevelopment #AIStrategy #AISecurity #MLSecurity #SecuringAI #AICyber #HackingAI #AISecurityStrategy
Want to learn more? At the OWASP LASCON conference on Oct 23, 2024, I will deliver a 1-day workshop on Building your AI Strategy with Cybersecurity for Executives and Leaders. You can register here before the training sells out.
Works Cited
Cwienk, Jeanette. “Germany’s Autobahn — Finally Time for a Speed Limit? – DW – 04/25/2024.” Dw.com, 25 Apr. 2024, www.dw.com/en/german-highways-fast-cars-speeding-paradise-safety-accidents-fossil-fuels/a-68911999.
Ellis, David R., et al. “Snakes on a Plane.” IMDb, 18 Aug. 2006, www.imdb.com/title/tt0417148/.
“Google Security Whitepaper | Documentation.” Google Cloud, cloud.google.com/docs/security/overview/whitepaper.
“How the Unique Culture of Security at AWS Makes a Difference | Amazon Web Services.” Amazon Web Services, 17 Apr. 2024, aws.amazon.com/blogs/security/how-the-unique-culture-of-security-at-aws-makes-a-difference/.
“Toy Story.” IMDb, 22 Nov. 1995, www.imdb.com/title/tt0114709/.
“Introduction to Apple Platform Security.” Apple Support, Apple, support.apple.com/guide/security/intro-to-apple-platform-security-seccd5016d31/web.
“Keeping Bears out of the Honey Jar with Electric Fencing.” Gallagher.com, Gallagher Animal Management, 1 Jan. 2022, am.gallagher.com/en-US/Solutions/Case-Study-Listings/Keeping-Bears-Out-of-the-Honey-Jar-with-Electric-Fencing.
“Matthew 7:24-27 KJV.” Bible Gateway, www.biblegateway.com/passage/?search=Matthew%207%3A24-27&version=KJV.
Paul, Mano. The 7 Qualities of Highly Secure Software. CRC Press, 2012.
“Secure Future Initiative | Microsoft.” Microsoft.com, Microsoft, 2023, www.microsoft.com/en-us/trust-center/security/secure-future-initiative.
“Security Overview.” Docs.oracle.com, Oracle, docs.oracle.com/en-us/iaas/Content/Security/Concepts/security_overview.htm.
Steel, Flora Annie. “The Three Little Pigs.” Americanliterature.com, 2019, americanliterature.com/childrens-stories/the-three-little-pigs.
Winton, Alexander. “Get a Horse! America’s Skepticism toward the First Automobiles.” The Saturday Evening Post, 9 Jan. 2017, www.saturdayeveningpost.com/2017/01/get-horse-americas-skepticism-toward-first-automobiles/.