Moving AI ethics beyond guidelines

This article first appeared as an op-ed in The Straits Times, 16 December 2020

The recent row over a leading researcher's departure from Google raises questions about the role and remit of AI ethics teams, and about the inadequacy of existing frameworks for dealing with moral complexities.

Lim Sun Sun and Jeffrey Chan Kok Hui

The departure of AI researcher Timnit Gebru from Google under controversial circumstances has raised discomfiting questions about the company's stance on AI ethics. It has also revealed the challenges of practising AI ethics on the front lines of this field.

Dr Gebru, who was co-lead of Google's Ethical AI team, had earned widespread acclaim for her earlier work showing that AI facial recognition was less accurate at identifying women and people of colour, and could thereby perpetuate discrimination if left unchecked.

Her alleged dismissal from Google was apparently triggered by her latest research paper, which questioned the inherent biases of the large models that underpin language-processing algorithms.

She also highlighted the staggering environmental costs of training such models, given the considerable computing power and electricity involved. She cited previous research which found that training one language model generated as much carbon dioxide as the lifetime output of five average American cars.
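
For context, the underlying arithmetic (assuming the research cited is Strubell, Ganesh and McCallum's 2019 study, the common source for this comparison) runs roughly as follows: training one large language model with neural architecture search was estimated to emit about 626,000 lb of carbon dioxide equivalent, while an average American car, fuel included, emits about 126,000 lb over its lifetime. That gives 626,000 ÷ 126,000 ≈ 5 cars.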

Dr Gebru asserted that she was pressured by higher-ups in the company to retract the paper from a forthcoming research conference, or to remove the names of the Google employees from it.

In response, CEO Sundar Pichai stated in a company-wide memo that Google should seek to improve the processes that led to her dismissal. He framed the episode as a failure to protect the rights of a black, female minority employee, but did not address the issue of her research being censored.

Thousands of Google employees and individuals from other organisations have since endorsed an open letter expressing support for Dr Gebru, and the public pressure continues to mount.

DEALING WITH DILEMMAS

This unfortunate episode, which is far from resolved, holds interesting lessons for AI ethics. That a technology behemoth such as Google even has an AI ethics team is noteworthy in and of itself. It underscores how society's intensifying deployment of AI has unleashed an expanding litany of ethical dilemmas around automation, datafication and surveillance that technology companies must grapple with.

While it is taken for granted that large companies must have finance, legal, marketing and human resources departments, our technologising world now also requires companies to hire ethics teams to provide guidance on issues of moral responsibility and civic duty.

But this raises the question of what the roles and remits of such ethics teams should be. If ethics concerns the morally good life, one that ought to be reflected in our AI milieu, then the crucial task of defining the organisational role, discretion and safety net of professional ethicists like Dr Gebru remains outstanding.

With the far-reaching impact AI has on our everyday lives, AI ethics teams bear the colossal burden of ensuring that this technology is safe and fair. AI-powered algorithms increasingly make high-stakes decisions with potentially serious consequences for lives and society, from meting out legal penalties to determining who qualifies for a loan or lands a job.

While AI technologies present clear benefits, they can nevertheless bring about different harms. These include not only the direct harms of malicious adversarial attacks and disinformation, but also the indirect harms that arise when organisations and societies fail to check data biases or subtle discrimination in their use of AI tools.

AI ethics teams like Dr Gebru's must therefore weigh the benefits and harms introduced by AI processes, not only to flag immediate implications for their company but also to caution against long-term repercussions for humanity at large.

In practice, therefore, if such ethics teams are to be more than a token of the company's corporate social responsibility, are they to serve as the proverbial conscience of the organisation and rein it in when it wades into ethical grey areas?

Or is their job to educate colleagues on the potential ethical pitfalls they could land in, thereby imbuing their engineers and designers with an instinctive appreciation of their ethical burdens? Or perhaps their key function is to develop ethical guidelines for the organisation as it forges groundbreaking innovations without ethical precedent, and then to clarify and settle the ethical conflicts that may result?

INADEQUATE MODELS

There is in fact no lack of AI ethics guidelines or model frameworks today. In an important study evaluating AI ethics guidelines, Dr Thilo Hagendorff of the University of Tuebingen counted at least 22 major ethics guidelines worldwide. And this number is surely set to rise, given the Cyberspace Administration of China's recent guidelines on data collection and Singapore's evolving AI Ethics and Governance Body of Knowledge framework.

However, criticisms of such ethics guidelines also abound. They range from the guidelines' ineffectiveness owing to inadequate enforcement, to the neglect of feminist ethical principles of care and of ecological concerns in their development. Furthermore, stating clear ethical principles and values upfront does not always result in unambiguously ethical outcomes.

Consider an example from the European Commission's influential 'Ethics Guidelines for Trustworthy AI', published in 2019. Four ethical principles undergird these guidelines: "Respect for human autonomy", "Prevention of harm", "Fairness" and "Explicability".

Nevertheless, preventing harm may sometimes require violating human autonomy, for instance when predictive policing aims to reduce crime through constant surveillance that impinges on individual privacy and freedom.

These guidelines inform AI developers neither how to translate ethical principles into mathematical functions, nor how to make the most ethical trade-offs between competing principles in their models, as the sketch below illustrates.
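
To make this gap concrete, here is a minimal, hypothetical Python sketch of what "translating an ethical principle into a mathematical function" can look like: fairness is formalised as demographic parity and traded off against accuracy via a single weight. The metric, the function names and the weighting scheme are illustrative assumptions, not a prescription found in any of the guidelines discussed. Note how the contested ethical judgement survives intact as one unexplained parameter, fairness_weight.

```python
# Hypothetical sketch: one way a developer might formalise "fairness"
# and trade it off against accuracy. All names and choices here are
# illustrative assumptions, not any guideline's prescribed method.
import numpy as np


def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """One common formalisation of fairness: the absolute difference in
    positive-prediction rates between two demographic groups (0 and 1)."""
    rate_a = predictions[groups == 0].mean()
    rate_b = predictions[groups == 1].mean()
    return abs(rate_a - rate_b)


def combined_score(accuracy: float, predictions: np.ndarray,
                   groups: np.ndarray, fairness_weight: float) -> float:
    """A weighted trade-off between accuracy and fairness. Guidelines say
    nothing about how to choose fairness_weight -- that choice is precisely
    the ethical judgement left to developers."""
    return accuracy - fairness_weight * demographic_parity_gap(predictions, groups)


# Toy illustration: the same model is scored differently depending on the weight.
rng = np.random.default_rng(0)
preds = (rng.random(100) > 0.5).astype(float)  # e.g. binary loan approvals
groups = rng.integers(0, 2, size=100)          # two demographic groups
for w in (0.0, 1.0, 10.0):
    print(f"weight={w}: score={combined_score(0.9, preds, groups, w):.3f}")
```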

In other words, these guidelines cannot settle conflicts between ethical principles and values when they clash; only individuals and organisations willing to embody the guidelines, and to transform them into actionable thoughts and deeds, can do that.

These are tasks that AI ethics teams cannot undertake alone, especially if they are not accorded some modicum of protection and security when surfacing inconvenient truths and imposing constraints that no conscionable organisation should violate.

BUILDING ETHICAL SCAFFOLDS UPSTREAM

Building robust ethical scaffolds upstream is another urgent endeavour. Principally, we must ensure that our next generation of technology professionals is fully cognisant of the moral complexities of their work.

They must learn to appreciate how their apps, code, programmes, software and structures can have large social impacts beyond their immediate technological applications. They must also learn how to integrate and amplify principles of beneficence, fairness, justice and transparency in their designs.

At the Singapore University of Technology and Design, we train our students to navigate the rich but chequered terrain of ethics. At the end of their first year, all undergraduates take a mandatory course on ethics as part of the Professional Practice Programme.

This course serves as a primer for more advanced Humanities, Arts and Social Sciences electives on AI ethics from such diverse disciplines as anthropology, design theory, history and philosophy. 

The aim is to continuously and progressively buttress students’ familiarity with and understanding of ethics, so that they can be ready to take on the complex moral challenges presented by AI practices in their professional lives.

CORPORATE ACCOUNTABILITY

We must also complement such educational interventions by moving decisively beyond AI ethics guidelines towards regulations that hold technology companies accountable to concrete ethical standards.

For example, under Singapore's Resource Sustainability Act, which introduced the Extended Producer Responsibility approach, manufacturers of electrical and electronic goods are now legally obligated to collect and treat the e-waste their products generate at end-of-life. Similarly, technology companies should be subject to regulations governing carbon footprint thresholds for the computing processes that power AI-driven solutions.

The salutary discourse around the promise of AI must be grounded in a recognition of the possible harms it can wreak. While ethics teams and guidelines are steps in the right direction, they risk being trampled upon in the race for technological domination.

Lim Sun Sun is professor of communication and technology and head of humanities, arts and social sciences. Jeffrey Chan Kok Hui is assistant professor of design theory and ethics. They are both faculty members at the Singapore University of Technology and Design.

Rajesh Sreenivasan

Head, Technology Media and Telecoms Law Practice, Rajah & Tann Singapore LLP, Director & Co-Founder of Rajah & Tann Technologies Pte Ltd and Rajah & Tann Cybersecurity Pte Ltd & Board Member, Mediacorp Pte Ltd.

3y

Thanks for this super important and timely piece. The task of actualising these guidelines has begun in many organizations. There are significant pain points that I have seen: 1. Organizations struggling to appoint the right persons and departments to address these ethics issues, usually landing on compliance teams who then instinctively try to create a checklist. (Sigh!) 2. Ethical standards implementation works only if the scientists recognize that what is needed is a perennial state of consciousness of ethical behavior; one-off training alone will not suffice. Training via well-designed simulations, followed by constant affirmation through appropriate compliments for good behavior and admonition of poor ethics, is the only way. 3. Articulating the guidelines in the form of ethics SOPs is not appropriate. A better approach is to align existing SOPs with ethical standards and deal with ethics compliance via internal and/or external spot audits. The challenge I've seen here is scoping the auditor's mandate. Lots of work to be done. Thanks for raising awareness that guidelines are a good start but not the endpoint.

Reynold D'Silva

Chief Executive Officer

3y

A very timely article, Dr. Sun Sun Lim. The mandatory course on ethics at SUTD stands in stark contrast to the notorious Persuasive Technology Lab at Stanford, which seeded many potentially addictive technological features, some of which have been turbo-charged by AI. In addition to well-designed and evolving regulations, we will need a new generation of technologists trained to have a clear moral and ethical compass like Dr. Gebru, as well as independent watchdogs within organizations or industry bodies, to ensure that technology is deployed in a way that enhances and protects people's well-being. Ethics guidelines, frameworks and case studies are necessary to show the way, but not sufficient to lead to action.
