Beyond data: responsible AI
Isabella (1848-1849) is a painting by John Everett Millais, a fragment illustrating an irresponsible behaviour of one of Isabella's brothers

Beyond data: responsible AI

In all this excitement set in motion by the success of large language models, the news that Microsoft laid off the ethics and society team within the artificial intelligence organization went almost unnoticed.

In 2020 the team employed 30 engineers, designers and even philosophers. Its area of responsibility and the place relative to the Microsoft's Office of Responsible AI is unclear. The sarcastic commentary on the news is likely going to be misplaced: a big, successful organization can afford multiple initiatives with seemingly overlapping remits -- in good times. But good times do not last forever.

Having spent not an insignificant amount of my time working with my talented colleagues and partners on something that can be called Responsible AI in my last organization, I have a particularly soft spot for the topic. So I spent a couple of hours trying to discern the similarities and differences in approaches to Responsible AI in four technology companies: Microsoft, Alphabet/Google, Meta/Facebook, and Open AI. The comparison table, to the best of my knowledge and interpretation of the publicly available documents, is below.

A few comments, all my opinions.

  • Items I consider important - Issues recognized and Governance - are further broken down, again, according to my perspectives. Your perspective can differ
  • All companies truly excel in some areas but look less mature in other areas
  • Some issues such as broad understanding of robustness, or different levels of explainability are barely recognized or covered very lightly
  • Ethical AI is much broader than just fairness and inclusiveness; the ethical reductionism seems common, though
  • Alphabet/Google seems to have the most meticulous process of pre-development review and approval focusing, as I understand, on ethical issues predominantly or exclusively. The company also seems to have the ongoing risk and performance monitoring process.
  • My perception is that among the three big players Meta has the lightest touch approach to transparency and accountability
  • Microsoft is most explicit in stating that accountability for the AI product lies with the human/team/organization
  • No company seems to use an adversarial approach to the prevention and mitigation of AI-related risks
  • My perception is that processes and policies within the "big three" companies create substantial friction in the organizational gears
  • No wonder that Open AI, openly concerned predominantly with safety and alignment, is the most nimble company which bakes amazing products with incredible speed
  • I am therefore curious how the big three companies in the table below will approach the integration of third-party AI models fully developed outside their Responsible AI frameworks; issues related to the use of third-party models are either muted or not recognized as far as a cursory glance at their documents show

No alt text provided for this image

Star Glossary (second column)

  • Alignment: affinity with human intentions and (unspecified) human values?
  • Ethical heuristics: explicit ability to be driven by an ethical system (e.g. utilitarian, deontic, value)??
  • Worldview, beliefs: explicit ability to be driven (in the absence of data), or informed (along with data) by a particular coherent theory of how the world works
  • Robustness: explicit ability to recognize uncommon, possibly unforeseeable, situations and to behave in a controlled (e.g. aligned, ethical) way when they occur, as different from Reliability below
  • Reliability: ability to act as designed under foreseeable adverse conditions
  • Transparency, explainability, interpretability: ability of the accountable person or organisation to?

  1. explain and defend the model development process (weak transparency), or?
  2. explain, interpret and defend its general behavioural patterns (global transparency, explainability, interpretability), or?
  3. fully explain its behaviour in each case (strong or local transparency, explainability, interpretability)?

  • Blue team/Red team: aka Devil’s Advocate,? an organisational process by which complex decisions such as a particular AI model design, development, and deployment, are challenged by a permanent or an ad hoc team: you cannot grade your own homework

Further miscellaneous notes

m Microsoft: safety is understood as explicit harm avoidance or harm quantification+communication when the residual/probable harms remain

mm Microsoft’s Office of Responsible AI

a Alphabet encourages “human-centric design approach”

aa Alphabet considers safety and reliability (i.e. the product works as designed) a joint requirement; this approach is reasonable if safety is understood as “reliability under malicious threat”, but generally the concept of reliability is broader than that - e.g. the product must work as designed under foreseeable adverse conditions

aaa ?Alphabet’s principles of interpretability are not yet fully capable: no degrees of interpretability, transparency, explainability are recognized.

aaaa Alphabet has a very elaborate ethical review/approval process for AI-centric proposals

aaaaa Alphabet also has an on-going process of risk, performance review and mitigation, but no explicit collaborative-adversarial partners working with product teams

f Meta’s recognizes the concepts of transparency, explainability and interpretability, they are the subjects of ongoing research, no specifics given: “work in this area is in its infancy”

ff Meta does not seem to have a high level policy- and process-defining model development body?

fff Meta’s Responsible AI (RAI) team, guidance is provided, but no explicit oversight mentioned?

微软 , Alphabet Inc. , Meta , OpenAI , #ai #machinelearning #responsibleai #fairness #robustness #organizationalexcellence #transparency #explainability #safety

Prathamesh Patalay

Product Management. Design Strategy.

1 年

Thank you so much for writing this article! 3rd party models definitely seem to be something the Tech giants didn't consider seriously, but a majority of smaller orgs & startups will/are trying to use. Do you have any insights in that direction, apart from practices like OpenRAIL Licensing & Model Cards from Hugging Face ?

Vacslav Glukhov

Co-founder | Advisor | Consultant | Mentor | Engineer | Scientist | AI Research, Engineering, Quantitative Research, Algorithmic Trading, Analytics, Management Consulting

1 年
Peter McStay

Head of EMEA at OptimX Markets

1 年

Excellent piece Slava, a huge responsibility should weigh on the leaders in this space.

要查看或添加评论,请登录

社区洞察

其他会员也浏览了