When ethics end and lip service begins

In recent times, apprehension over biased AI systems has dominated the media and the minds of tech innovators. It turns out that simply developing an AI model and letting it do its work is not acceptable. Debates continue over who is responsible for the potential flaws in AI systems. Who should be held accountable when a self-driving car has an accident? Who is to blame when algorithms discriminate against a certain group of people? Such questions are important, not for pointing the finger at culpable parties, but for creating a shared sense of responsibility among innovators, owners, and users of AI technology.

A key issue that industry and society need to confront is how we can address the philosophical concept of ethics in inanimate, unthinking algorithms, especially when there is no one-size-fits-all definition of ethics. AI is amoral, not immoral – though the unintended consequence can feel like the latter.

It is hard to discern whether tech companies are genuinely dedicated to creating responsible AI or merely paying lip service to avoid formal regulation. Recent public disclosures point to some bad actors: Google’s reputation came under scrutiny after it dismissed its lead AI ethics researcher, Timnit Gebru, following a paper she co-authored on the risks of large language models. Another tech giant, nowadays calling itself Meta (which many ethics specialists lambaste for its questionable business operations), has also been criticised over its Responsible AI team, which has been called “superficial and toothless”.

Many experts in the AI field doubt that the AI ethics movement will bear any fruit in actual practice. Depressingly, a Pew Research Center survey found that more than two-thirds of AI experts do not believe that AI will be used primarily for social good by 2030.

Tech giants often take a theatrical approach to reassuring the public that serious issues such as historical bias, discrimination and social profiling are being addressed by their auspiciously named AI ethics boards. Meta’s Responsible AI team has been cornered into tackling AI bias while leaving untouched unethical company practices such as subliminal algorithms. However, many of the concerns highlighted by the AI ethics movement, such as racial bias in the judicial system, cannot be mitigated by AI systems alone. These major challenges will require more remediation than simply tweaking an algorithm and, given that they are deeply ingrained in society, we will need society as a whole to identify and implement appropriate safeguards and solutions.

While there is hope that these problems can be resolved, there is little doubt that it cannot be done by a uniform group of experts from a single field.

The publicity AI technology receives mostly goes one of two ways: either it is the solution to all of humanity’s problems or the ultimate evil that will destroy us all. There should be a third approach – seeing AI for what it is: a powerful technology that can and will shape many aspects of our lives and of the society we live in. Humans have the ability to shape the way AI operates. AI ethics should not be a PR stunt to conceal unacceptable uses of such a powerful, and potentially useful, technology. All AI should be ethical AI – and to make this a reality, we will need to hear many different voices, including citizens, ethicists, tech leaders, governments and end-users.

The public shouldn’t fall for the idea that incompetent and immature AI systems can be amended and fixed once already deployed by simply adding a human into the loop or assuming that some magical antidote exists. Responsible AI systems must be created as such, rather than building on top of already flawed algorithms.

The academic David Edelman has said: “perfect AI isn’t coming any time soon. It’s up to us to get specific about where and when we’re willing to tolerate it”.

To prevent harmful uses in the development of AI systems, the European Commission released its proposal for the EU AI Act. The proposal is still awaiting further amendments from the European Parliament and the Council of the European Union. The proposed legislation sets out a vision of how AI technology should be regulated and who should take responsibility for it. The proposed AI Act takes a flexible, risk-based approach, classifying AI technologies according to the risk they pose to the public. Certain AI technologies, such as real-time biometric surveillance or social scoring by government bodies, face the harshest treatment from the European Commission: an outright ban. Other applications of AI, however, leave unanswered questions and potential loopholes, which will hopefully be addressed in further amendments to the document.

Dr Adrian Byrne, a Marie Curie Research Fellow at CeADAR and Lead Researcher at the AI Ethics Centre at Idiro Analytics, noted that the “proposed legislation might struggle to be effective while coexisting alongside other legislation”.

“For example, the regulation of social media is to be handled separately within the proposed Digital Services Act, despite the prominent use of AI on its platforms. This non-consolidated approach risks causing confusion and extra burden for regulators, affected companies and individuals,” Dr Byrne said.

Another concern among experts is how member states will implement the regulations set by the EU. The example of GDPR shows that implementation can be slow and less effective than hoped. In recent reports, GDPR enforcement has been criticised as insufficient due to operational, financial and staffing difficulties.

“The EU needs to back up this legislation with real resources for the regulators of each member state. Otherwise, it risks becoming toothless,” said Dr Byrne.

We have seen the futile efforts of green legislation and endless climate conferences that have made little progress in dealing with the environmental crisis the world is facing. Now we are left to wonder whether the EU AI Act will be just another addition to the list of political lip service.

Ethics is a hard concept to define: it is more art than science, often subjective and cultural. But that does not mean we should give up on trying to get it right.

Idiro AI Ethics Centre

Karin Funke

Marketing & Export Manager at Burren Smokehouse Ltd.

2 yr

Great article! Also, some might argue that it is immoral and unethical companies/AI programmers who are responsible for the programming of AI, so their low ethical standards are passed on to the machines – which is already happening in the use of face recognition etc., and will develop further if social credit systems are brought in. I think the legislation should cover the early stages of AI development, i.e. "who is allowed to program the AI machines/applications". Right now it seems that all the Googles and Metas of the world are just out for progress and scientific advancement (assuming that is their best intention), or, worst-case scenario, they don't care and want to advance somebody's agenda to the detriment of humanity.

Vivek Singh Bhakar

Business Intelligence Analyst

2 yr

Interesting article. Really liked the line "AI is amoral, not immoral – though the unintended consequence can feel like the latter." If I understand correctly, bias generated in a machine learning model/algorithm would make it unethical if some sections/categories are underrepresented and the data is not well distributed. Would I still call it amoral, or is it immoral?

James McCabe

The Story Doctor - Speaker & Author

2 yr

Algorithms are the metrics of digital commerce, but as any poet knows, metre is merely the expectation, rhythm is what actually happens. For Artificial Intelligence to move beyond mirroring Natural Stupidity, algorithmic metrics will have to be negotiated like political policies. Well done, Aidan.

Tom Connolly

I can make your projects better. I'll ask a lot of questions and work with you to find the answers.

2 yr

Great article Aidan, thanks. I especially like your point: “The public shouldn’t fall for the idea that incompetent and immature AI systems can be amended and fixed once already deployed by simply adding a human into the loop or assuming that some magical antidote exists. Responsible AI systems must be created as such, rather than building on top of already flawed algorithms.” Always better to get to the root causes, instead of trying to ameliorate symptoms. Tom
