AI Risk Management in Practice: Algorithmic Audits

Edition 2: Algorithmic Audits

As I sit in a little coffee shop by one of the covered bridges over the Gale River in the White Mountains of New Hampshire, sipping my dark-roast Peruvian blend, I wanted to share the second edition of this AI Risk newsletter. Today's topic: algorithmic audits!


AI and machine learning are entering every aspect of our lives. Marketing, autonomous driving, personalization, computer vision, finance, wearables, and travel have all benefited from the advances in AI over the last decade. As more AI applications are deployed in enterprises, concerns are growing about potential "AI accidents" and the misuse of AI. With increased complexity, some are questioning whether the models actually work! As the debate about fairness, bias, and privacy grows, there is increased attention to understanding how models work and whether they are thoroughly tested and designed to address potential issues.

The area of "Responsible AI" is fast emerging and becoming an important aspect of the adoption of machine learning and AI products in the enterprise. As regulators introduce new legislation (for example, New York City will require bias audits for hiring technology) and the fear of being on the wrong side of the law grows (Meta, previously Facebook, recently settled with the DOJ over allegedly discriminatory housing advertising), companies are beginning to incorporate algorithmic auditing to ensure compliance and to mitigate potential risks in algorithms. So what is an algorithmic audit?

Well, it depends on who you ask! Some of the things I have heard:

  • Formal ethics reviews, model validation exercises, and independent audits
  • Ensuring that the adoption of AI is transparent and has gone through formal validation phases
  • Checks and balances to ensure that the AI system deployed is fair and transparent
  • Audits to ensure systems are secure, auditable, reproducible, explainable, and compliant with the law

So what is it? It is all of the above; each of these in isolation doesn't cover all the considerations of a comprehensive algorithmic audit.
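To make one of these checks concrete, here is a minimal sketch of a fairness test an auditor might run against a model's decisions: the "four-fifths rule" (disparate impact ratio). The group labels and decision data below are entirely made up for illustration; a real audit would use the system's actual outputs and legally relevant protected groups.

```python
# Illustrative fairness check: disparate impact ratio on model decisions.
# All data below is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'hire' or 'approve') decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common red flag (the four-fifths rule)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decisions: 1 = selected, 0 = rejected
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.7
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # ≈ 0.43 -> below 0.8, flag it
```

A single metric like this is only one slice of a fairness review, of course; a real audit would look at multiple metrics, error rates per group, and the context in which the system is used.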

With limited regulatory guidance, it is a loaded question subject to interpretation depending on whom you ask! The legal folks focus on legal and compliance issues; the high-tech software industry focuses on AI-Ops, ML-Ops, DevOps, etc.; the security camp focuses on security issues; the privacy folks focus on data leakage, privacy rights, etc.; the ethics camp focuses on AI-ethics issues; the fairness camp talks about fairness; the data scientists focus on model metrics and validation; the explainability folks focus on explainability/interpretability issues with models; and the monitoring folks talk about drift, and so on! It is exactly like the parable of the blind men asked to describe an elephant!
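Even the "drift" the monitoring folks talk about can be made concrete with a few lines of code. Here is a minimal sketch of one common drift metric, the Population Stability Index (PSI), comparing a baseline feature distribution to a current one. The binning scheme and thresholds are illustrative assumptions, not a standard from any particular tool.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a current one.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins for constant data

    def frac(sample, i):
        # Fraction of the sample falling in bin i; last bin includes the max.
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(left <= x < right or (i == bins - 1 and x == hi) for x in sample)
        return max(n / len(sample), 1e-6)  # floor to avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [x / 10 for x in range(100)]          # hypothetical training-time feature values
current = [x / 10 + 5.0 for x in range(100)]     # same feature, shifted in production
print(f"PSI: {psi(baseline, current):.3f}")      # well above 0.25 -> major drift
```

A monitoring-focused audit would check that metrics like this are computed continuously on production data, with alerting thresholds, rather than only at deployment time.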


Source: https://en.wikipedia.org/wiki/Blind_men_and_an_elephant

As someone who has developed many large-scale analytical systems and validated multiple analytical models, I acknowledge it is a hard question without a simple answer. I am glad the discussion is surfacing all the issues pertinent to ensuring analytical models are properly vetted, but I am also seeing a growing demand for "just-enough" algorithmic auditing.

Let me elaborate. In an organization, you don't have unlimited budgets. The industry wants to push products and features out! Agile methodologies mean your product is never done, which means testing is never done either! You build, deploy, address issues, and add features iteratively. This has been the trend in building data-driven analytical systems too, but that approach is the reason there are so many "unsafe" and "discriminatory" products out there.

When issues are found, the response is to "fix" them and, if the stakes are high, get the product "certified" to mitigate the negative consequences: landing on the watch list of regulators, customers migrating to other products, or bad press. Unfortunately, half-baked algorithmic audits are becoming a means of achieving this. Budgets for audits are usually fixed, and companies try to do "just enough" to not rock the boat. There is growing interest in pre-packaged templates and fillable forms that lower costs and create the illusion that "all is good," and that's something we should all watch out for! This so-called "audit washing" isn't going to address the real risks, and a one-size-fits-all solution isn't possible! So how should we go about solving this dilemma?

Well, that’s a discussion for Edition 3 of the newsletter! :) Let’s talk about it some more tomorrow!

Event of the week:

Maya Murad from IBM will be speaking on the topic “A framework for involving people in enabling the responsible use of algorithmic decision-making systems” at the QuantUniversity Machine Learning Summer School 2022. I will be moderating the session. Join in at 12:00 pm ET if you want to be part of the discussion. The event is free!

Register at: https://us02web.zoom.us/webinar/register/WN_QuMh-wJjRpCdRFVzpTjK_A

Keep on learning!

Want to learn more formally? Join the AI Risk Management Certificate program developed in partnership with PRMIA -> https://lnkd.in/eVEhyNSQ

Many of these topics will be elaborated on in the AI Risk Management Book published by Wiley. Check for updates here -> https://lnkd.in/gAcUPf_m

Subscribe to this newsletter and share it with your network -> https://www.dhirubhai.net/newsletters/ai-risk-management-newsletter-6951868127286636544/

I am constantly learning too :) Please share your feedback and reach out if you have any interesting product news, updates or requests so we can add it to our pipeline.

Sri Krishnamurthy?

QuantUniversity

#machinelearning #airiskmgt #ai
