Is your AI FAT? (and yes, I mean FAT!)
Mano Paul, MBA, CISSP, CSSLP
CEO, CTO, Technical Fellow, Cybersecurity Author (CSSLP and The 7 Qualities of Highly Secure Software) with 24+ years of Executive, IT & Cybersecurity Management; Other: Shark Researcher, Pastor
Welcome to the world where machines are smart, algorithms are our new overlords, and... we’re worried about how FAT our AI is. Yes, you heard me right. I’m talking about FAT AI, where FAT stands for Fair, Accountable, and Transparent—not the kind that ate too many potato chips during quarantine (although one could argue that some of the data we feed into our AI models is like potato chips).
FAT AI stands for Fair, Accountable, and Transparent AI
But why should your AI be FAT? Because if you don’t make your AI FAT, it might just make some incredibly bad decisions on your behalf—like confusing a banana for a toaster[i] (talk about psychedelic perturbation), mistaking a turtle for a rifle[ii], misclassifying a malignant tumor as benign[iii] (or, even worse, the other way around), or a self-driving car treating a stop sign as a speed-limit sign[iv].
Now, a hacker can intentionally perform an adversarial attack, introducing noise and tampering with the input to trick the AI system into mispredicting. Case in point: by introducing some synthesized noise (a noise-vector perturbation), lo and behold, my shark-classifying AI system took a Tiger shark that it had originally classified correctly with 99.91% confidence and reclassified it as a Great White shark with 100% confidence. But even without intentional adversarial manipulation, an AI system must be FAT.
Even without intentional adversarial manipulation (or somebody messing with shark pictures), your AI system must be FAT.
Have I lost you, or are you still tracking with me? If you are still with me: what was the confidence in the original prediction that the Tiger shark was a Great White, and after my adversarial manipulation, what is the confidence level that the shark is a Tiger shark? If you figured that out, put it in the comments.
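For the curious (and the brave), here is a minimal sketch of the kind of perturbation attack described above, using the classic Fast Gradient Sign Method (FGSM). To be clear, this is not my actual shark model: the pretrained ResNet, the class id, and the epsilon value are all stand-ins for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A hypothetical stand-in for my shark classifier: any pretrained image model will do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.02):
    """Craft an adversarial image by nudging every pixel slightly in the
    direction that most increases the loss for the correct label (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                      # add a batch dimension
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # The "noise vector": the sign of the gradient, scaled by a tiny epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage sketch: tiger_shark_img would be a 3x224x224 tensor scaled to [0, 1],
# and 3 is a made-up class id for this illustration.
# noisy = fgsm_perturb(tiger_shark_img, true_label=3)
# print(model(noisy.unsqueeze(0)).softmax(dim=1).max())    # confidence in the (now wrong) class
```

The perturbation is tiny enough that you and I still see a Tiger shark; the model, however, sees whatever the gradient pushed it toward.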
Fairness: AI That Doesn’t Play Favorites
Let’s start with Fairness. Imagine you’re training an AI to help hire people for your company. Now, if you train it on resumes from a “party culture” tech firm (I’m looking at you, ‘Beer Pong Fridays’ crew), guess what happens? Your AI could start rejecting candidates based on gender and miss out on the most qualified candidates for the role[v], just because they didn’t play beer pong in college.
Fair AI means we need diversity in the datasets used for training. This ensures that the AI isn’t making decisions based on bias inadvertently introduced by the data it trains on. Want to avoid AI hiring only people who list “Fortnite” under special skills? (And oh, BTW, gamers do make some fantastic cybersecurity and technology professionals[vi] – sorry, I digress). Feed the training algorithm data that includes resumes from people of different genders, races, and backgrounds. It’s like teaching your kid to eat vegetables and not just pizza. (Though let’s face it, pizza is amazing.)
Diversity in the datasets used for training Machine Learning models is one way to ensure that your AI is Fair
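If you want to see what that looks like in practice, here is a minimal, hedged sketch of a pre-training sanity check; the column names, the toy data, and the 20% threshold are all made up for illustration and not taken from any real hiring system.

```python
import pandas as pd

# Hypothetical resume training data; the columns and values are illustrative only.
resumes = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "M", "F"],
    "hired":  [1,   1,   0,   0,   1,   0,   1,   0],
})

# Representation: what share of the training data does each group make up?
representation = resumes["gender"].value_counts(normalize=True)
print(representation)

# Label balance: what fraction of each group carries the positive ("hired") label?
positive_rate = resumes.groupby("gender")["hired"].mean()
print(positive_rate)

# A crude red flag: if one group's positive rate is far below another's,
# the model will likely learn that imbalance as if it were "signal".
if positive_rate.max() - positive_rate.min() > 0.2:  # illustrative threshold
    print("Warning: training labels look skewed across groups; re-sample or re-weight.")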
Diversity of data in the training data set can also help address overfitting or underfitting, since the model would need to account for outliers during training, but I will defer that to another article.
We are counseled to be fair and impartial (James 2:1), so why is it okay to accept an AI system that is unfair and biased?
Accountability: The AI Fingerprint
Now, let’s talk Accountability. Accountability means your AI doesn’t get to shrug its metaphorical shoulders and blame someone else when things go wrong. In the case of the stop sign misclassification, who is to blame? Whodunit? Who is at fault? Guess what: the manufacturer cannot say, “Well, the AI did it.”
A practical way to accomplish accountability is with saliency maps, which is data-scientist jargon for what the layman calls heatmaps. No, not the kind that shows you how sweaty you got at the gym. In AI, heatmaps can highlight which parts of the data influenced the AI’s decision. For example, if an AI rejects a loan application, a heatmap could show whether the decision was based on income, credit score, gender, marital status, education, or something else.
Accountability of AI can be accomplished using Saliency Maps, aka Heatmaps
With heatmaps, you can pinpoint what’s driving your AI’s decisions and say, "Aha! That’s why it thought a benign tumor was a malignant one!" [vii] Being able to explain decisions after they’re made? That’s accountability, folks.
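To make that concrete, here is a minimal sketch of a vanilla gradient saliency check on a hypothetical (and untrained) loan-scoring model; the feature names, the tiny architecture, and the applicant values are all invented for illustration.

```python
import torch
import torch.nn as nn

# A hypothetical, untrained loan-approval model; the feature names are illustrative only.
features = ["income", "credit_score", "age", "existing_debt"]
model = nn.Sequential(nn.Linear(len(features), 16), nn.ReLU(), nn.Linear(16, 1))

def saliency(model, applicant):
    """Vanilla gradient saliency: how sensitive is the model's score
    to a small change in each input feature?"""
    x = applicant.clone().detach().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.abs()  # larger magnitude = more influence on this particular decision

# In practice the inputs would be normalized; raw values are used here only for readability.
applicant = torch.tensor([52_000.0, 640.0, 29.0, 18_000.0])
for name, weight in zip(features, saliency(model, applicant)):
    print(f"{name:>14}: {weight.item():.6f}")
```

In an image model, the same idea produces the pixel-level heatmaps described above; here it simply ranks which features drove one applicant’s score.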
I will be posting an article on how saliency maps can be used to create responsible AI systems as part of this newsletter series, so subscribe if you have not already done so and stay tuned. Disclaimer: It may have something to do with Sharks.
Transparency: No Black Boxes, Please!
Finally, there’s Transparency. If your AI is a mysterious black box that spits out decisions with no explanation, you’ve got a problem. You wouldn’t take advice from a guy in a dark alley whispering, “Trust me, buy these stocks.” (Or would you? In which case, we have bigger problems, and I have questions - we need to talk!)
Transparent AI means clear, explainable models. You need to be able to crack open that AI brain and see why it made the decisions it did. When a self-driving car decides to turn left into a tree (because the AI smartly summoned it or was inebriated), you should be able to trace the thought process—or, in this case, the algorithmic nightmare—that led to it.
If your AI is transparent, you can correct mistakes, improve accuracy, and avoid those awkward moments when the AI recommends you hire a loon instead of a duck. What looks like a duck is not always a duck[viii]. You don’t want your AI systems to be a black box.
Transparent AI means clear, explainable models that are not black boxes.
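For a taste of what a “glass box” looks like, here is a minimal sketch using a shallow decision tree, one of the most naturally explainable model families; the toy loan data and thresholds are made up for illustration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy, made-up loan data: [income in $k, credit score]; labels: 1 = approve, 0 = decline.
X = [[30, 580], [85, 720], [45, 690], [120, 800], [25, 540], [60, 650]]
y = [0, 1, 1, 1, 0, 1]

# A shallow decision tree can be read back as plain if/then rules.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["income_k", "credit_score"]))
# The printed rules show exactly which thresholds led to approve vs. decline,
# which is the opposite of a black box.
```

Deep models won’t ever be this readable, but the same expectation applies: you should be able to trace why a decision was made, whether through inherently interpretable models or post-hoc explanation tools.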
The Tofu on the Vegan Cheesecake of AI (The Ethics)
Now, let’s talk about ethics. Because no FAT AI is complete without a sprinkle of morality. We need AI systems that don’t just work but work ethically. That’s where AI Ethics and Governance Frameworks come in, like the icing on the cake—or, for my health-conscious folks, the tofu on the vegan cheesecake.
Governance frameworks, such as ISO 42001[ix], the AI Risk Management Framework (RMF) by NIST[x] or IEEE[xi], or the EU AI Act[xii], help provide the requirements that need to be built into your AI systems to keep your AI behaving responsibly. Think of these frameworks like a babysitter who makes sure your AI doesn’t sneak out at night to make a quick buck predicting stock market crashes. These standards ensure your AI isn’t just following the law, but also being a good digital citizen—respecting privacy, avoiding bias, and not accidentally sending your grandma’s recipe emails to the Pentagon.
AI Governance Frameworks are like babysitters ("good" babysitters) that make sure your AI is behaving ethically and is FAT.
Responsible AI == FAT AI
Let’s tie this all together with a real-world example. Imagine you’re designing an AI for facial recognition. To ensure Fairness, you use a dataset with diverse faces (different skin tones, ages, and genders), so your AI doesn’t think everyone who falls outside a particular age group or skin tone is an alien.
Next, you implement Accountability by using heatmaps that highlight which facial features are influencing decisions. If the AI seems to always focus on people’s noses, you can adjust your model to be a little less... nosey.
Finally, for Transparency, you make sure the algorithm explains itself. Why did it recognize Person A but not Person B? Maybe Person B was wearing sunglasses. Or maybe the AI just has a thing for perfect cheekbones. Either way, you can look under the hood and figure out what went wrong—or right.
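If you want to see what such an audit might look like in code, here is a minimal, hedged sketch comparing recognition accuracy across two hypothetical demographic groups; the counts and the 5% gap threshold are invented for illustration.

```python
# Made-up evaluation results: correct identity matches per demographic group.
results = {
    "group_a": {"correct": 460, "total": 500},
    "group_b": {"correct": 401, "total": 500},
}

rates = {group: r["correct"] / r["total"] for group, r in results.items()}
for group, acc in rates.items():
    print(f"{group}: {acc:.1%} recognition accuracy")

# A large accuracy gap between groups is a fairness red flag worth investigating
# long before this model gets anywhere near production.
gap = max(rates.values()) - min(rates.values())
if gap > 0.05:  # illustrative threshold, not an industry standard
    print(f"Accuracy gap of {gap:.1%} across groups; revisit the training data.")
```

Simple as it is, this kind of per-group scorecard is often the first place an unfair model gets caught.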
Wrapping Up
So, in conclusion, if you want your AI program to succeed—and avoid any unfortunate hallucinogenic "Jaws" incidents—make sure it’s FAT. Fair, Accountable, and Transparent. And don’t forget the ethical frameworks that ensure your AI doesn’t run wild like a toddler on a sugar high.
The bottom line: when your AI is FAT, your company can be said to have LEAN AI (Legitimately Ethical And Neutral – yes, I just coined this term), unbiased and less prone to regulatory headaches. Just remember: a FAT AI is a healthy AI. And when it comes to AI, you really want to avoid any non-FAT diets.
When your AI is FAT, it is in fact LEAN.
One Final Thing - Requesting your Input
I request the following from you
Note: When I started this article, the term xAI was being used for Explainable AI, which requires AI systems to be FAT (even DARPA would say so[xiii]). But with Elon Musk renaming Twitter to X and launching his artificial intelligence company, xAI[xiv], a more generic and yet apt term has now surfaced: Responsible AI. I tend to use xAI and RAI interchangeably, but Elon, please stop buying companies and renaming them in ways that force us to rewrite our articles. The important thing is that we all need to be Responsible, and our AI needs to be FAT.
Disclaimer: No sharks were hurt in the making of this article.
#SecuringAI #AICyber #AISecurity #AIStrategy #AIEthics #HackingAI
Want to learn more? You can, at my 1-day workshop at OWASP LASCON 2024, Building Your AI Strategy with Cybersecurity for Executives and Leaders, on Oct 23, 2024.
I have converted the shark-misclassifying proof of concept referenced above into a lab, along with several other exercises, for attendees of my 1-day Securing AI practical workshop training, "Building Your Own AI Cybersecurity Strategy: A Comprehensive & Practical Guide for Business Executives and Security Leaders". It will be delivered at the OWASP LASCON conference on October 23rd, 2024, and you can register here. The outline of what we will be learning follows.
Works Cited
i BBC News. “Psychedelic Toasters Fool Image Recognition Tech.” BBC News, 3 Jan. 2018, www.bbc.com/news/technology-42554735.
ii “Why Did My Classifier Just Mistake a Turtle for a Rifle?” MIT News | Massachusetts Institute of Technology, news.mit.edu/2019/why-did-my-classifier-mistake-turtle-for-rifle-computer-vision-0731.
iii Evans, Harriet, and David Snead. “Understanding the Errors Made by Artificial Intelligence Algorithms in Histopathology in Terms of Patient Impact.” Npj Digital Medicine, vol. 7, no. 1, Apr. 2024, pp. 1–6, https://doi.org/10.1038/s41746-024-01093-w.
iv Eykholt, Kevin, et al. Robust Physical-World Attacks on Deep Learning Visual Classification. arxiv.org/pdf/1707.08945. Accessed 28 Apr. 2024.
v Lytton, Charlotte. “AI Hiring Tools May Be Filtering out the Best Job Applicants.” BBC.com, 16 Feb. 2024, www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination.
vi Guan, Lilia. “Gaming Creates a New Breed of Cybersecurity Talent.” CIO, 9 May 2019, www.cio.com/article/213808/gaming-creates-a-new-breed-of-cybersecurity-talent.html.
vii Cerekci, Esma, et al. “Quantitative Evaluation of Saliency Based Explainable Artificial Intelligence (XAI) Methods in Deep Learning Based Mammogram Analysis.” European Journal of Radiology, vol. 173, Elsevier, 2024, https://doi.org/10.1016/j.ejrad.2024.111356.
viii “Waterfowl Basics: Get Your Ducks and Coots and Grebes in a Row.” Southern Wisconsin Bird Alliance, 26 Oct. 2020, swibirds.org/blog/2020/10/25/waterfowl-basics-get-your-ducks-and-coots-and-grebes-in-a-row.
ix “ISO/IEC DIS 42001.” ISO, Dec. 2023, www.iso.org/standard/81230.html.
x NIST. “AI Risk Management Framework.” NIST, 12 July 2021, www.nist.gov/itl/ai-risk-management-framework.
xi “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” IEEE Computational Intelligence Society Resource Center (CIS), 2017, resourcecenter.cis.ieee.org/government/usa/cisgovph0010.
xii European Parliament. “EU AI Act: First Regulation on Artificial Intelligence.” European Parliament, 8 June 2023, www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
xiii Turek, Matt. “Explainable Artificial Intelligence.” Darpa.mil, 2018, www.darpa.mil/program/explainable-artificial-intelligence.
xiv “How Memphis Became a Battleground over Elon Musk’s XAI Supercomputer.” NPR, 11 Sept. 2024, www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution.