Reflecting on AI Bias
I have been having the same discussion with my friend James Blom for more than a decade: the need to do AI and Machine Learning better. What does better mean? It means protecting the provenance of the data, the integrity of the AI system, and the attribution of the participants, and, importantly, systematically reducing the inherent bias embedded in these systems.
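Systematic reduction starts with systematic measurement. As an illustration only, here is a minimal Python sketch of one common bias signal, the demographic parity gap; the column names, toy data, and 0.10 tolerance are my own assumptions for this example, not values from any project mentioned in this post.

```python
# Minimal sketch: measuring one bias signal (demographic parity gap) in a
# model's decisions. Column names and the 0.10 threshold are illustrative
# assumptions, not values from any project mentioned in this post.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Return the spread between the highest and lowest positive-decision
    rates across groups; 0.0 means every group is approved at the same rate."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Toy data: loan decisions (1 = approved) broken out by a protected attribute.
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance; real thresholds are a policy decision
    print("gap exceeds tolerance -- review training data and features")
```

A metric like this is only a starting point; it tells you a disparity exists, not why, but making it part of the build pipeline turns "reduce bias" from an aspiration into something you can track release over release.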
These concerns cross vertical market boundaries. Twelve years ago, I was designing and building systems to hunt for terrorists, foreign and domestic, using social media; inherent bias in the models was a deep concern. A few years after that, our smallish company beat out IBM to conduct the AI modeling for fraud detection and efficacy for the Affordable Care Act, where inherent bias was again a touchstone. A few years ago, I expressed my frustration with the bias embedded in autonomous vehicle AI systems in Ethical AI - Addressing Embedded Bias. Finance is another area: working with Big Thinker, we applied AI to automated investment engines and hit the same issue.
Through all of this, Jim and I have continued to wonder how to fully address it. Why? As companies apply AI to more and more verticals, whether automotive, security, identity, finance, or healthcare, the ability to do good or harm increases. My colleagues at Bootstrap Labs held a robust panel discussion on this very topic: Applied AI Conference 2017 - Cybersecurity: AI, Friend or Foe? Further, the concern goes to the value of an enterprise that deploys AI. Is the value as high if the provenance of the AI data is suspect? Is the value as high if the data is found to be biased, with no plan to address that? How do you protect the privacy of the user data used to create the highly personal experiences that AI promises for every digital transformation? These are core board-level governance and valuation issues for enterprises that embrace AI.
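Questions like these are easier to answer at the board level when provenance is captured at ingest rather than reconstructed after the fact. Here is a hedged sketch of what that minimum could look like; the record fields and helper name are my own illustration, not any standard or any system named above.

```python
# Minimal sketch: capturing dataset provenance at ingest so that "where did
# this training data come from?" has an auditable answer. The record fields
# are illustrative; a real program would align them with its governance policy.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(path: str, source: str, consent_basis: str) -> dict:
    """Fingerprint a dataset file and bundle it with origin metadata."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "dataset_sha256": sha256.hexdigest(),     # detects silent edits later
        "source": source,                         # who supplied the data
        "consent_basis": consent_basis,           # why we may use it
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: log the record alongside the model artifacts for later audit.
record = provenance_record("train.csv", source="partner-feed-7",
                           consent_basis="opt-in, telehealth enrollment")
print(json.dumps(record, indent=2))
```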
Writing here is one way to share the concern. After a year of pandemic, the explosion of telemedicine is evident, and here AI and Machine Learning come to the fore again. I’m working with a company, “I Will, Till I’m Well”, that seeks to provide telehealth services to underserved communities. When applying AI in this instance, how do you do so without inherent bias and still fully serve the community? How do you gain the trust and permission to apply AI technology in a way that does not discount the community or do it harm?
I remain unsure whether AI is a friend or foe, no matter the market vertical. I am certain that creating governance around AI will help, and that removing bias is one way to make AI more of a friend. I’m hopeful that by writing, creating IP, and creating products, I can assist in this process. It’s important: doing AI well creates value, and doing it poorly can destroy value. AI and Machine Learning are here to stay in our lives, so let’s do them better. I’ve been at this for over a decade. I’m in.