Gauging the Liabilities of Artificial Intelligence Within Your Organization
Jules Polonetsky
CEO @ Future of Privacy Forum | Advancing Responsible Data Practices
Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly that reason. But AI/ML can also amplify an organization’s exposure to vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.
Many businesses are incorporating ever more machine-learning-based models into their operations, both on the back end and in consumer-facing contexts. Companies that use these systems but did not develop them still assume responsibility for managing, overseeing, and controlling these learning models, often without extensive internal resources to meet the technical demands involved.
General-purpose toolkits for this challenge are not yet broadly available. To help fill that gap while more technical support is developed, we have created a checklist of questions for carrying out sufficient oversight of these systems. The questions in the attached checklist – “Ten Questions on AI Risk” – are meant to serve as an initial guide to gauging these risks, both during the build phase of AI/ML projects and beyond.
While there is no “one size fits all” answer for how to manage and monitor AI systems, these questions should provide a guide for companies using such models, allowing them to tailor the questions and frame the answers in contexts specific to their own products, services, and internal operations. We hope to build on this start and offer additional, more detailed resources for such organizations in the future.
1. How many AI/ML models does your company deploy (including third-party models or those that serve as inputs into other models)? A sketch of a simple inventory that can answer questions 1–3 appears after this list.
2. What types of outputs or recommendations does each model make, and where is documentation about these models stored?
3. How many people or organizations does each model potentially impact?
4. How are your organization’s models audited for security or privacy vulnerabilities?
5. Incidents, such as attacks or failures of AI/ML models, can cause substantial harm. Does your company have response plans in place to address AI/ML incidents?
6. Does your company audit models for AI/ML-related liabilities before a model is deployed? Are different audit processes applied for different types of models?
7. Does your company monitor models for AI/ML-related liabilities during deployment? Are different monitoring processes applied for different types of models?
8. Have you quantified sociological bias in your company’s AI/ML training data and model predictions? Is your company aware of how each model affects different demographic customer segments? A sketch of one such measurement also appears after this list.
9. Several organizations have published detailed standards or best practices for “trustworthy AI.” Does your company utilize any of these resources when implementing AI/ML? If so, which ones?
10. Have any independent third parties or other external experts (legal, security, or others) been involved in your company’s procedures to address the known liabilities of AI/ML?
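
A simple, structured inventory is often the quickest way to make questions 1 through 3 answerable on demand. The Python sketch below is a minimal illustration under assumed names: the ModelRecord fields, sample entries, and vendor are hypothetical, not a schema prescribed by the checklist.

# Minimal, hypothetical inventory record for answering questions 1-3.
# Field names and sample entries are illustrative assumptions only; real
# inventories typically track more (data lineage, owners, review dates).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelRecord:
    name: str
    purpose: str                  # what output or recommendation it makes
    docs_location: str            # where its documentation is stored
    vendor: Optional[str] = None  # None for models built in-house
    feeds_into: List[str] = field(default_factory=list)  # downstream models
    est_people_impacted: int = 0  # rough scale of who the model touches

inventory = [
    ModelRecord(
        name="churn-predictor-v3",
        purpose="Flags accounts likely to cancel, for retention outreach",
        docs_location="wiki/ml/churn-predictor",
        feeds_into=["offer-ranker-v1"],
        est_people_impacted=250_000,
    ),
    ModelRecord(
        name="resume-screener",
        purpose="Ranks inbound job applications",
        docs_location="wiki/hr/resume-screener",
        vendor="ExampleVendor Inc.",  # hypothetical third-party supplier
        est_people_impacted=40_000,
    ),
]

# Questions 1-3 then reduce to simple queries over the inventory:
print(len(inventory))                                 # Q1: how many models?
print([m.name for m in inventory if m.vendor])        # Q1: third-party models
print({m.name: m.docs_location for m in inventory})   # Q2: documentation
print(sum(m.est_people_impacted for m in inventory))  # Q3: scale of impact

Even a record this small gives later audit and monitoring findings a named place to attach.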
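
Question 8 asks for quantification, and one common starting point is comparing a model’s positive-outcome rates across demographic segments. The Python sketch below is illustrative only: the column names and data are hypothetical, and the 0.8 cutoff reflects the familiar “four-fifths” rule of thumb rather than any legal determination.

# A minimal sketch of one common bias measurement: the adverse-impact
# ratio (each group's positive-outcome rate divided by the rate of the
# most-favored group). Column names and data are hypothetical.
import pandas as pd

def adverse_impact_ratios(df, group_col="demographic_segment",
                          outcome_col="model_approved"):
    """Selection rate of each group relative to the highest-rate group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical scored data: one row per person the model evaluated.
scored = pd.DataFrame({
    "demographic_segment": ["A", "A", "A", "B", "B", "B", "B"],
    "model_approved":      [1,   1,   0,   1,   0,   0,   0],
})

ratios = adverse_impact_ratios(scored)
print(ratios)               # A: 1.00, B: 0.375 for this toy data
# Groups falling below ~0.8 are often flagged for closer review.
print(ratios[ratios < 0.8])

Selection-rate ratios are only one of many fairness metrics; which is appropriate depends on how the model is used. The point is that question 8 calls for a number, not just a policy statement.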
About the 10 Questions
These questions were adapted from letters sent by Sens. Cory Booker and Ron Wyden to the heads of major healthcare companies in December 2019. The letters arose in response to research indicating that a widely used healthcare algorithm was discriminatory. Questions of this kind signal increased regulatory oversight of AI/ML in the short to medium term. Note that both senators sponsored the Algorithmic Accountability Act of 2019, introduced in each chamber of Congress, which would delegate increased powers to the FTC to regulate AI/ML.
The above was posted at fpf.org by FPF Senior Counsel Brenda Leong and prepared by her and our friends at bnh.ai, a boutique law firm specializing in AI and analytics. For further information about how to manage the risks of AI/ML, please reach out to [email protected].