Persistent Issues for AI

Over the past several decades, I have worked with various incarnations of AI: the AI Lab at IBM T. J. Watson Research Center (1982-86), Advanced Technology Labs (1990-93), NYNEX Science & Technology (1993-98), Pitney Bowes Advanced Concepts & Technology (2005-07), Bank of America Security Research & Innovation (2014-2020), and a stint at Applied Research Associates' Neya Robotics division (2024).

With each generation of advancement, I have seen the same issues come up over and over again, so I decided to list my top six attributes to consider when taking an AI approach to business.

(1) Embracing AI Means Understanding the Dirty State of Data.

Look beyond the AI to the solution providers themselves. When enterprises are sold security solutions, the marketing promises that the solution will detect anomalies and/or drift from a baseline. The reality is that implementing such a solution on a dirty network creates a dirty baseline: not only does the deployment ignore existing issues, those existing issues are now considered part of the baseline.
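
To make this concrete, here is a minimal Python sketch (hypothetical traffic counts, toy two-sigma detector) of how pre-existing malicious activity gets absorbed into the "normal" baseline at deployment time:

```python
# A minimal sketch of why baselining a dirty network bakes in existing issues.
# All numbers and thresholds are hypothetical, for illustration only.
import statistics

# Hourly outbound-connection counts captured at deployment time.
# The ~900-value spikes are an attacker's existing beaconing traffic,
# silently absorbed into the "normal" baseline.
observed = [100, 105, 98, 900, 102, 97, 910, 101, 99, 905]

mean = statistics.mean(observed)
stdev = statistics.stdev(observed)
threshold = mean + 2 * stdev  # classic "two sigma" anomaly cutoff

def is_anomalous(value: float) -> bool:
    return value > threshold

print(f"baseline mean={mean:.0f}, threshold={threshold:.0f}")
# The attacker's ongoing beaconing now looks normal:
print("905 connections flagged?", is_anomalous(905))  # False - inside the dirty baseline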

(2) AI Modeling Transparency is Crucial to Understand Bias.

Transparency needs to be aligned to a motivational outcome. Everyone mentions transparency without understanding the challenges and implications of achieving it. An experiment by Lakkaraju on AI-based decision modeling showed that one cannot remove bias from the decision process, even when the biased inputs have been removed from the AI training data (https://dl.acm.org/doi/10.1145/3097983.3098066). In other words, we can never fully get rid of bias; it is part and parcel of every decision we make. What we can do is understand and embrace how AI surfaces the bias in our society through data modeling.
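
As a toy illustration of the underlying mechanism (a sketch of proxy bias in general, not a reproduction of the Lakkaraju experiment), consider a model trained with the protected attribute removed; a correlated proxy feature carries the bias straight through:

```python
# A minimal sketch of proxy bias: dropping the protected attribute does not
# drop the bias, because a correlated feature ("zip_code" here, a hypothetical
# proxy) still carries it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                # protected attribute (0/1)
zip_code = group ^ (rng.random(n) < 0.1)     # proxy: matches group ~90% of the time
income = rng.normal(50 + 10 * group, 5, n)   # historical disparity baked into the data
label = (income + rng.normal(0, 5, n) > 55).astype(int)  # biased historical outcomes

# Train WITHOUT the protected attribute - only the proxy and income remain.
X = np.column_stack([zip_code, income])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Approval rates still differ sharply by the (unseen) protected group:
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")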

(3) Entitlement Restrictions for AI are Crucial to Data Privacy.

AI systems need to know the role of the person asking the AI for data and return results appropriate to that participant's entitlements. There is a concept known as "differential privacy" (DP) which performs entitlement cleansing at certain stages. If the cleansing is done at the prompt level, the outcome is the safest but not very useful; if it is done at the tail end of the process, the outcomes are very detailed but it becomes very difficult to hide the parts which should be restricted. Two examples where over-sharing of private data is being mitigated (a sketch of the underlying pattern follows these examples):

  • Knostic.ai is a startup looking to create DP management consoles that show administrators a matrix of outcome elements against the roles of the inquirers, letting them allow or block certain elements; those decisions are fed back into the training data.
  • Brian Cincera, CISO at Pfizer, had his team implement a homegrown orchestration UX using LLMs in combination with human-first use cases, offering employees a front end for securely accessing the most appropriate internal use-case models. It applies retrieval-augmented generation (RAG) based on entitlements. Pfizer also has an "AI governance center of excellence" for policies and support, and operates on three rules for using AI: focus on the mission, use the right method, own the output. These rules are applied across four key elements in their deployments: a people-first approach, know the inputs, protect the models, validate the outputs.
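
Here is a minimal sketch of the general pattern behind entitlement-based RAG (hypothetical documents and roles; not Knostic's or Pfizer's actual implementation). Restricted documents are filtered at retrieval time, so they never reach the model, a middle ground between prompt-level and output-level cleansing:

```python
# A minimal sketch of entitlement-aware retrieval. Filtering happens before
# generation, so restricted content never enters the model's context.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed_roles: set[str]  # hypothetical entitlement metadata on each document

CORPUS = [
    Doc("Q3 revenue guidance draft", {"finance", "executive"}),
    Doc("Cafeteria menu for next week", {"finance", "executive", "employee"}),
    Doc("Pending reorganization plan", {"executive"}),
]

def retrieve(query: str, role: str, corpus: list[Doc]) -> list[Doc]:
    """Return only documents the requester's role is entitled to see.

    A real system would rank by embedding similarity; a naive keyword
    match stands in for the retriever here.
    """
    entitled = [d for d in corpus if role in d.allowed_roles]
    return [d for d in entitled if any(w in d.text.lower() for w in query.lower().split())]

# The same question yields different context depending on who asks:
print([d.text for d in retrieve("revenue guidance", "employee", CORPUS)])  # []
print([d.text for d in retrieve("revenue guidance", "finance", CORPUS)])   # [the draft]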

(4) AI Explainability in Financial Services is Crucial to Regulatory Compliance.

Nobody has even considered how the existing SEC rules affect GenAI processing of financial instruments. When the SEC proposed more transparency in securities lending back in 2021 (Exchange Act Rule 10c-1) in the wake of the mortgage crisis, I doubt they understood the enormity of what we are now seeing with AI hallucination. Beyond bad financial decisions, imagine these same AI algorithms deciding the best medical treatment for your health issues. As patients, we should demand transparency into how our medical decisions are derived.
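
As a sketch of what per-decision explainability could look like in practice, imagine every automated lending decision shipping with ordered, machine-readable reason codes that a regulator or patient could audit (the features and weights below are hypothetical):

```python
# A minimal sketch of "reason codes" for an automated decision: every
# approval/denial carries an ordered, auditable explanation. The model here
# is a toy linear scorer with hypothetical weights.
FEATURE_WEIGHTS = {"debt_to_income": -4.0, "credit_history_years": 0.8, "late_payments": -2.5}

def decide_with_reasons(applicant: dict[str, float], cutoff: float = 0.0):
    contributions = {f: w * applicant[f] for f, w in FEATURE_WEIGHTS.items()}
    score = sum(contributions.values())
    # Rank features by how strongly they pushed the decision, so the
    # explanation accompanies the outcome instead of being reconstructed later.
    reasons = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score >= cutoff, score, reasons

approved, score, reasons = decide_with_reasons(
    {"debt_to_income": 0.4, "credit_history_years": 7, "late_payments": 1}
)
print(f"approved={approved} score={score:.2f}")
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")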

(5) AI Verification in Threat Intelligence is Crucial to Offensive Security Operations.

Imagine our threat intelligence analysis derived from AI-fabricated data points, resulting in misinterpretation, misdirection, and misattribution. This is exactly why AI-based decision-making needs to be vetted by human experts: let the AI give an array of alternative outcomes, and let the human determine the best path forward.
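
In practice, that workflow can be as simple as the sketch below (hypothetical hypotheses and evidence): the AI proposes alternatives with supporting indicators, and nothing proceeds without an analyst's explicit selection:

```python
# A minimal sketch of human-in-the-loop vetting for AI-derived threat intel.
# The model proposes alternative attributions with supporting indicators; an
# analyst must pick one before any action is taken. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    attribution: str
    confidence: float
    indicators: list[str]  # evidence the analyst can independently verify

def analyst_review(hypotheses: list[Hypothesis]) -> Hypothesis:
    """Present AI-generated alternatives; a human makes the final call."""
    for i, h in enumerate(hypotheses):
        print(f"[{i}] {h.attribution} (model confidence {h.confidence:.0%})")
        for ind in h.indicators:
            print(f"      evidence: {ind}")
    choice = int(input("Select hypothesis to act on (or -1 to reject all): "))
    if choice < 0:
        raise RuntimeError("All AI hypotheses rejected - escalate for manual analysis")
    return hypotheses[choice]

candidates = [
    Hypothesis("APT-style intrusion", 0.62, ["C2 beacon pattern", "custom loader hash"]),
    Hypothesis("Commodity ransomware", 0.31, ["known RaaS toolkit strings"]),
]
chosen = analyst_review(candidates)
print(f"Proceeding with: {chosen.attribution}")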

(6) AI Consumers Need to Adopt a "Verify Then Trust" Mentality.

The indiscriminate over-adoption of GenAI by enterprises for every conceivable problem – relevant to the training data or not – should be tempered by continuous skepticism. My mantra for GenAI outcomes is to move from the legacy security tenet of "Trust but Verify" to a new perspective of "Verify THEN Trust" – an ode to the Volkswagen marketing slogan "Sign then Drive". Basically, assume your network is already compromised. (We could go down the Zero-Trust efficacy rabbit hole here too.)
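
A "Verify THEN Trust" gate can be expressed directly in code. The sketch below (call_llm is a hypothetical stand-in for whatever model you actually use) refuses to consume GenAI output until structural, schema, and plausibility checks pass:

```python
# A minimal sketch of a "Verify THEN Trust" gate around GenAI output: nothing
# is consumed until independent checks pass. The model call and the specific
# checks are hypothetical stand-ins.
import json

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return '{"cve": "CVE-2021-44228", "severity": "critical"}'

def verify(raw: str) -> dict:
    """Run verification BEFORE the output is trusted or acted upon."""
    data = json.loads(raw)                    # 1. structural check: valid JSON
    if set(data) != {"cve", "severity"}:      # 2. schema check: expected fields only
        raise ValueError(f"unexpected fields: {set(data)}")
    if not data["cve"].startswith("CVE-"):    # 3. plausibility check on identifiers
        raise ValueError(f"malformed CVE id: {data['cve']}")
    # 4. In production: confirm the CVE exists in an authoritative feed (e.g. NVD)
    #    before letting it drive any downstream decision.
    return data

trusted = verify(call_llm("Summarize the log4j vulnerability as JSON"))
print("verified, now trusted:", trusted)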
