Beyond data: responsible AI
Vacslav Glukhov
Co-founder | Advisor | Consultant | Mentor | Engineer | Scientist | AI Research, Engineering, Quantitative Research, Algorithmic Trading, Analytics, Management Consulting
Amid all the excitement set in motion by the success of large language models, the news that Microsoft laid off the ethics and society team within its artificial intelligence organization went almost unnoticed.
In 2020 the team employed 30 engineers, designers, and even philosophers. Its area of responsibility and its place relative to Microsoft's Office of Responsible AI are unclear. Sarcastic commentary on the news is likely misplaced: a big, successful organization can afford multiple initiatives with seemingly overlapping remits -- in good times. But good times do not last forever.
Having spent a not insignificant amount of my time working with talented colleagues and partners on something that can be called Responsible AI at my last organization, I have a particularly soft spot for the topic. So I spent a couple of hours trying to discern the similarities and differences in approaches to Responsible AI at four technology companies: Microsoft, Alphabet/Google, Meta/Facebook, and OpenAI. The comparison table, to the best of my knowledge and interpretation of the publicly available documents, is below.
A few comments, all my opinions.
Star Glossary (second column)
Further miscellaneous notes
m Microsoft: safety is understood as explicit harm avoidance, or as quantification and communication of harm when residual or probable harms remain
mm Microsoft’s Office of Responsible AI
a Alphabet encourages “human-centric design approach”
aa Alphabet considers safety and reliability (i.e., the product works as designed) a joint requirement; this approach is reasonable if safety is understood as "reliability under malicious threat", but generally the concept of reliability is broader than that -- e.g., the product must also work as designed under foreseeable adverse conditions
aaa Alphabet’s principles of interpretability are not yet fully developed: no degrees of interpretability, transparency, or explainability are recognized
aaaa Alphabet has a very elaborate ethical review/approval process for AI-centric proposals
aaaaa Alphabet also has an ongoing process of risk and performance review and mitigation, but no explicit collaborative-adversarial partners working with product teams
f Meta recognizes the concepts of transparency, explainability, and interpretability; they are subjects of ongoing research, with no specifics given: “work in this area is in its infancy”
ff Meta does not seem to have a high-level policy- and process-defining model development body?
fff Meta’s Responsible AI (RAI) team provides guidance, but no explicit oversight is mentioned?
Product Management. Design Strategy.
1y Thank you so much for writing this article! Third-party models definitely seem to be something the tech giants didn't consider seriously, but a majority of smaller orgs and startups are (or will be) trying to use them. Do you have any insights in that direction, apart from practices like OpenRAIL licensing and Model Cards from Hugging Face?
Co-founder | Advisor | Consultant | Mentor | Engineer | Scientist | AI Research, Engineering, Quantitative Research, Algorithmic Trading, Analytics, Management Consulting
1y Links

Microsoft:
Office of Responsible AI: https://www.microsoft.com/en-us/ai/our-approach
Responsible AI resources: https://www.microsoft.com/en-us/ai/responsible-ai-resources
Learning: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai

Alphabet:
Guidance, best practices: https://ai.google/responsibilities/responsible-ai-practices/
Review/approval process for AI-centric proposals: https://ai.google/responsibilities/review-process/
Principles document: https://ai.google/static/documents/ai-principles-2022-progress-update.pdf

Meta:
Pillars: https://ai.facebook.com/blog/facebooks-five-pillars-of-responsible-ai/ and references therein
Head of EMEA at OptimX Markets
1y Excellent piece, Slava; a huge responsibility should weigh on the leaders in this space.