AI finds Vulnerabilities - But what about AI's own?
Mano Paul, MBA, CISSP, CSSLP
CEO, CTO, Technical Fellow, Cybersecurity Author (CSSLP and The 7 Qualities of Highly Secure Software) with 25+ years of Exec. Mgmt., IT & Cybersecurity Management; Other: Shark Researcher, Pastor
We are living in "InterestAIng" times
We are living in "interestAIng" times, in a world where technology evolves, arguably unbridled, faster than a Shinkansen on full throttle. The confluence of artificial intelligence (AI) and cybersecurity presents us with a delightful cocktail of remarkable opportunities and significant challenges - shaken, not stirred, of course!
On Election Day 2024, in addition to the news channels tracking the US elections, noteworthy articles for the cybersavvy, tech-informed community included Forbes’ report headlined “Google Claims World First as AI Finds 0-Day Security Vulnerability” and CIO.com's publication “Meta offers Llama AI to US government for National Security”. Google’s AI-powered security vulnerability research framework, Big Sleep, snagged the limelight by identifying a 0-day vulnerability in the open-source database SQLite. Meta making its Llama AI available to government agencies and private-sector partners to facilitate applications in logistics planning, cybersecurity, and threat assessment marks a trend wherein AI is increasingly being used in security applications.
I remember the days when, as security researchers, we leveraged fuzzing - no, not the kind that makes you feel warm and fuzzy inside, but a method that feeds random or malformed data to a program to trigger errors - to detect code vulnerabilities. This would take a considerable amount of time, even with scripting. The Project Zero team at Google writes that their Big Sleep agentic LLM application automates the execution of several security tools that a manual pentester would use to discover buffer overflows and other 0-day vulnerabilities. While it is unclear whether any ML-based adversarial perturbation technique was also used to detect the SQLite vulnerability, it is noteworthy that the time to detect 0-day vulnerabilities can be significantly reduced with AI/ML. It is like having an AI knight in shining armor on your team that could elevate security practices by identifying vulnerabilities in software before they even hit the shelves.
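For readers who have never fuzzed anything, here is a minimal sketch of the idea in Python. The deliberately fragile parse_record function is a hypothetical stand-in for whatever code you are testing, and real fuzzers (AFL, libFuzzer, and friends) are far more sophisticated, using coverage feedback rather than purely random inputs:

```python
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical, deliberately fragile parser standing in for the real target."""
    header, _, body = data.partition(b":")
    return {"type": header.decode("ascii"), "length": int(body)}

def naive_fuzz(target, iterations=1_000, max_len=64, seed=0):
    """Feed random byte strings to the target and collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:  # a real fuzzer would triage by crash signature
            crashes.append((data, repr(exc)))
    return crashes

if __name__ == "__main__":
    print(f"{len(naive_fuzz(parse_record))} crashing inputs found")
```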
It is no surprise that AI can be used to improve security. But as we pop the champagne, celebrating these technological trends that make our world a little safer, we must not forget to don our cautionary hats and ask: what about securing AI itself? The prudent approach is clear: "The prudent see danger and take refuge, but the unwise go on and suffer for it" (Proverbs 22:3).
AI can help bolster our security posture, but what about securing AI itself?
AI Black boxes - Should not leave you "fuzzy"
AI models can often be described as “Black Boxes,” something Google’s CEO, Mr. Pichai, himself discussed in his 60 Minutes interview a little over a year ago. He shared how the emergent properties of AI algorithms - the properties that let them reason, plan, and perform tasks they were never explicitly trained or tasked to do - are not fully understood or known: a Black Box. Apropos, within the AI context, a Black Box doesn’t refer to the mysterious item you find in your attic, nor is it akin to the Digital Flight Data Recorder in aviation - it signifies the opacity of these AI systems. Developers can train AI models to perform specific predictive or generative tasks, but what goes on inside that shiny black box is often an enigmatic riddle. I guess there is some truth to why the processing layers in a neural network are called hidden (pun intended).
AI's mysteries today can be tomorrow's misfortunes, if emergent properties remain an enigma!
The creative and emergent properties of AI algorithms that make the autonomous discovery of a 0-day security vulnerability possible are certainly appealing, but with such excitement should also come a sense of concern. When stakes are high, especially in the realm of cybersecurity, clarity becomes non-negotiable. We cannot afford to have a “fuzzy” understanding of how AI operates. We must remain cognizant of AI’s limitations, and vigilant about them, while enjoying its innovative potential. Striking this balance is pivotal for the responsible use of AI, especially in security contexts.
Making AI Black boxes clear with Security Controls
To address the opacity issues in AI systems, it is important to design, develop, and deploy several administrative, process, and technical controls. Administrative controls include establishing AI governance policies, regular reviews and audits of models, and user education and training. Process controls include model documentation, threat modeling of AI systems, writing secure AI code, and secure MLOps, which spans secure deployments, continuous monitoring, and incident management. Technical controls include model interpretability tools, Explainable AI (XAI) frameworks, adversarial testing, differential privacy, and auditing of AI systems.
Taking the mystery out of the AI Black box involves designing, developing, and deploying administrative, process, and technical controls.
Administrative Controls
AI Model Governance Policy: Establish a governance framework for AI systems that defines standards for development, deployment, and monitoring, emphasizing transparency and security. Governance can include roles, responsibilities, and guidelines for managing AI throughout its lifecycle.
Regular Audits and Model Reviews: Schedule periodic audits of AI models by independent experts. Audits should assess compliance with transparency, security, and ethical standards and identify unintended consequences or biases.
Training for Stakeholders: Educate developers, data scientists, and end-users about black-box issues, ethical considerations, and secure usage practices. This can increase awareness of potential pitfalls and promote responsible use of AI.
Process Controls
Model Documentation (Datasheets for Datasets and Model Cards): Document datasets and models to clarify their development, intended use, and limitations. Model cards, for example, outline performance metrics, fairness considerations, and intended use cases, which help users interpret AI model capabilities responsibly.
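To make this concrete, here is an illustrative (not prescriptive) sketch of a model card captured as a small Python structure. The field names are my own assumptions, loosely following common model-card templates, and real documentation would carry far more detail:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Illustrative model card; fields loosely follow common model-card templates."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_use: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    fairness_considerations: str = ""
    known_limitations: str = ""

card = ModelCard(
    model_name="loan-approval-classifier",           # hypothetical example model
    version="1.2.0",
    intended_use="Pre-screening of consumer loan applications with human review",
    out_of_scope_use="Fully automated denial decisions",
    training_data="Internal applications, 2019-2023, PII removed",
    evaluation_metrics={"auc": 0.87, "false_positive_rate": 0.06},
    fairness_considerations="Evaluated for disparate impact across age and gender groups",
    known_limitations="Not validated for small-business lending",
)
print(json.dumps(asdict(card), indent=2))  # publish alongside the model artifact
```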
Bias and Fairness Assessments: Fairness and bias testing should be included as a mandatory step in AI model development. These evaluations should be ongoing and iterative, covering everything from data preprocessing to model deployment, to help ensure that AI decisions are fair and unbiased.
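A minimal, from-scratch example of one such check - the statistical parity difference between an unprivileged and a privileged group - could look like the following; the data and the 0.1 review threshold are purely illustrative:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in favorable-outcome rates: unprivileged (group == 0) minus privileged (group == 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Toy example: binary predictions and a binary protected attribute
preds  = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

spd = statistical_parity_difference(preds, groups)
# A common (illustrative) rule of thumb flags |SPD| > 0.1 for review
print(f"SPD = {spd:+.2f}", "-> review" if abs(spd) > 0.1 else "-> within tolerance")
```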
Continuous Monitoring and Incident Response: Real-time monitoring for anomalies in AI decisions should be set up, along with a plan for incident response. This allows for the quick identification, investigation, and resolution of unexpected behaviors.
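As a sketch of what "real-time monitoring for anomalies" can mean in practice, the snippet below compares the distribution of live prediction scores against a baseline window using a two-sample Kolmogorov-Smirnov test; the synthetic data and alert threshold are illustrative assumptions, and a real pipeline would run this on a schedule and route alerts into incident response:

```python
import numpy as np
from scipy.stats import ks_2samp

def score_drift_alert(baseline_scores, live_scores, p_threshold=0.01):
    """Compare live prediction scores to a baseline window; a small p-value suggests drift worth investigating."""
    statistic, p_value = ks_2samp(baseline_scores, live_scores)
    return {"ks_statistic": statistic, "p_value": p_value, "alert": p_value < p_threshold}

rng = np.random.default_rng(7)
baseline = rng.beta(2, 5, size=5_000)   # scores captured at deployment time
live     = rng.beta(2, 3, size=5_000)   # today's scores, drifted upward
print(score_drift_alert(baseline, live))
```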
Technical Controls
Model Interpretability Tools: Use tools like LIME (Local Interpretable Model-Agnostic Explanations) or SHAP (SHapley Additive exPlanations) to understand feature importance and model decisions. These tools help uncover patterns and logic behind predictions, making AI behavior more explainable.
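As a hedged sketch (the exact API can vary across shap versions), here is roughly how SHAP values might be computed and visualized for a simple tree-based model on a public dataset:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train a simple tree-ensemble model to explain
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)        # Shapley values for tree ensembles
shap_values = explainer.shap_values(X_test)  # per-feature contribution for each prediction

shap.summary_plot(shap_values, X_test)       # global view of which features drive predictions
```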
XAI Frameworks: When I say XAI, I am not talking about Elon Musk's Colossus AI training supercomputer cluster (which I think is cool). Instead, I refer to eXplainable AI (XAI) frameworks that help users interpret model behavior and limit unintentional bias. Frameworks like IBM’s AI Fairness 360 or Google’s What-If Tool, for example, offer insights into model biases and prediction rationale; a sketch follows below.
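For illustration, here is a rough sketch of computing the same kind of group-fairness metrics with AI Fairness 360. It is based on my recollection of the library's dataset and metric classes, so treat the exact signatures as assumptions to verify against the current documentation:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy dataframe: model decisions plus a binary protected attribute
df = pd.DataFrame({
    "approved": [1, 0, 1, 0, 0, 1, 1, 1, 1, 0],
    "sex":      [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```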
Adversarial Testing and Robustness Checks: Conduct robustness tests, such as adversarial testing, to identify how susceptible black-box models are to adversarial attacks. Techniques like FGSM (Fast Gradient Sign Method) can help gauge model resilience.
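Here is a minimal FGSM sketch in PyTorch (the TensorFlow tutorial cited below implements the same idea in TensorFlow). The toy model, random data, and epsilon value are illustrative assumptions, not a production robustness harness:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: nudge the input in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy robustness check on a small (untrained) classifier
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(128, 20)
y = torch.randint(0, 2, (128,))

x_adv = fgsm_perturb(model, loss_fn, x, y)
clean_acc = (model(x).argmax(dim=1) == y).float().mean()
adv_acc   = (model(x_adv).argmax(dim=1) == y).float().mean()
print(f"clean accuracy: {clean_acc:.2%}, adversarial accuracy: {adv_acc:.2%}")
```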
Differential Privacy: Use differential privacy to limit the leakage of sensitive data. Adding calibrated noise ensures that the model or the statistics it releases do not inadvertently reveal private information about any individual, which is particularly important when models are trained on sensitive data.
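As a simple illustration of the underlying idea, the Laplace mechanism below adds calibrated noise to a single aggregate query; protecting a model during training typically relies on DP-SGD via libraries such as Opacus or TensorFlow Privacy, which is more involved than this sketch:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Add Laplace noise scaled to sensitivity/epsilon so the released value is epsilon-differentially private."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Release a count of users with a given attribute; any one user changes the count by at most 1
true_count = 4213
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true: {true_count}, released: {private_count:.1f}")
```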
Auditing and Logging: Implement comprehensive logging of AI models’ inputs, outputs, and decision-making processes. This allows for a clear trail that helps us understand how specific decisions are made, enhancing accountability and facilitating debugging.
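A minimal sketch of structured prediction logging might look like the following; the record fields and helper names are my own choices, and hashing the raw features is one way to keep the audit trail itself from leaking sensitive inputs:

```python
import hashlib
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("model_audit")

def log_prediction(model_name, model_version, features, prediction, explanation=None):
    """Emit one structured audit record per prediction; hash the raw features instead of storing them."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model": model_name,
        "version": model_version,
        "input_hash": hashlib.sha256(json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "prediction": prediction,
        "explanation": explanation,  # e.g., top SHAP features, if available
    }
    audit_log.info(json.dumps(record))
    return record

log_prediction("loan-approval-classifier", "1.2.0",
               {"income": 72000, "tenure_months": 18}, prediction="approve",
               explanation={"top_features": ["income", "tenure_months"]})
```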
The finAI_ word
In our quest to build an AI-powered world, let’s not forget to play defense while running for a touchdown! We need a strong, multi-faceted cybersecurity program to understand and govern AI as we navigate this promising yet not-fully-understood landscape. So, let’s assemble as cyber-mighty Avengers - from boardroom strategists to basement builders - to team up for a world where AI is not only innovatively powerful but also secure. By incorporating these security controls, organizations can enhance transparency, reduce the risks linked to Black Box AI systems, and cultivate a culture of accountability and security.
AI can be leveraged for security, and it has the potential to transform how we identify and address security vulnerabilities. But if we fail to understand its internal workings, our business goals could become a Sisyphean undertaking, not to mention the potentially catastrophic consequences.
While AI is great for Security, let us not forget Securing AI, lest our business goals become Sisyphean and potentially catastrophic!
After all, the endgame isn’t just harnessing AI to find the bugs but making sure AI itself doesn’t become the bug - and that is no BigSleeping matter!
PS:
If you liked this article and found it helpful, please comment and let me know what you liked (or did not like) about it. What other topics would you like me to cover?
NOTE: If you need additional information or help, please reach out via LinkedIn Connection or DM and let me know how I can help.
#AISecurity #MLSecurity #SecuringAI #AICyber #HackingAI
Works Cited
“Adversarial Example Using FGSM | TensorFlow Core.” TensorFlow, www.tensorflow.org/tutorials/generative/adversarial_fgsm.
Allen, Thelma. “How to Deploy Machine Learning with Differential Privacy.” NIST, 17 Dec. 2021, www.nist.gov/blogs/cybersecurity-insights/how-deploy-machine-learning-differential-privacy.
“Google Cloud Model Cards.” Modelcards.withgoogle.com, modelcards.withgoogle.com/about.
Google Project Zero. “Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models.” Project Zero, 20 June 2024, googleprojectzero.blogspot.com/2024/06/project-naptime.html.
IBM. “AI Fairness 360.” Aif360.Res.ibm.com, aif360.res.ibm.com/.
Lundberg, Scott. “An Introduction to Explainable AI with Shapley Values — SHAP Latest Documentation.” Readthedocs.io, 2018, shap.readthedocs.io/en/latest/index.html.
Robinson, Sara, and James Wexler. “Introducing the What-If Tool for Cloud AI Platform Models.” Google Cloud Blog, 18 July 2019, cloud.google.com/blog/products/ai-machine-learning/introducing-the-what-if-tool-for-cloud-ai-platform-models.
Sharma, Abhishek. “Decrypting Your Machine Learning Model Using LIME.” Medium, Towards Data Science, 4 Nov. 2018, towardsdatascience.com/decrypting-your-machine-learning-model-using-lime-5adc035109b5.
The AI Revolution: Google’s Developers on the Future of Artificial Intelligence. 60 Minutes, 16 Apr. 2023, www.youtube.com/watch?v=880TBXMuzmk.
Thomas, Prasanth Aby. “Meta Offers Llama AI to US Government for National Security.” CIO, 5 Nov. 2024, www.cio.com/article/3599448/meta-offers-llama-ai-to-us-government-for-national-security.html.
“What Is Differential Privacy? - IEEE Digital Privacy.” Digitalprivacy.ieee.org, IEEE, digitalprivacy.ieee.org/publications/topics/what-is-differential-privacy.
Winder, Davey. “Google Claims World First as AI Finds 0-Day Security Vulnerability.” Forbes, 5 Nov. 2024, www.forbes.com/sites/daveywinder/2024/11/05/google-claims-world-first-as-ai-finds-0-day-security-vulnerability/.