AI Ethics, an Unholy Alliance

First published in March 2020

When the Catholic Church, global IT corporations and the Pentagon all line up on the same side of an issue, you have to wonder whether a fix is in. When it comes to the regulation of artificial intelligence, it looks like it is. In the past couple of weeks, all three parties have released principles governing the use of AI and biometrics, principles which are remarkably similar, and also remarkably vague.

On February 25th [2020] the U.S. Department of Defense publicly announced a set of five principles, jointly developed with its technology partners, covering the use of artificial intelligence and biometrics in combat and non-combat situations.[i] Just a few days later, in conjunction with several of these same big tech companies, the Vatican, together with the UN and the European Parliament, signed and released a document entitled “Rome Call for AI Ethics”[ii] which outlines six principles that the Vatican thinks ought to govern the use of AI and biometrics.[iii] The alignment between these two sets of principles is more than coincidental, it's alarming.

The Vatican document, which is being promoted by the Pope, tries to reinforce the Church’s stated social values around respect for human dignity. It highlights how artificial intelligence, particularly facial recognition, is beginning to diminish dignity and betray principles of justice. The Vatican code of ethics for AI calls for new forms of regulation, and it lists six principles for AI governance: transparency, inclusion, responsibility, impartiality, reliability, and security & privacy.

The Pentagon principles for the ethical use of AI and biometrics were originally published on the quiet last November – no doubt to provide a baseline for the other participants in this public relations stunt. In the recent press release, DoD Chief Information Officer Dana Deasy asserted that “The new principles lay the foundation for the ethical design, development, deployment and use of AI by the Department of Defense, building upon the Department's long history of ethical adoption of new technologies”.[iv] Wonderful stuff from an organization that has killed over 20 million people since WWII[v] in an uninterrupted series of declared and proxy wars against countries like Korea, Vietnam, Afghanistan and Iraq (twice), as well as Libya, Syria, Serbia, Sudan, Guatemala, Nicaragua and many others.

The Pentagon’s ethical principles were developed by its Defense Innovation Board, which is chaired by Eric Schmidt, former executive chairman of Alphabet, and includes executives from Google, Microsoft, Facebook and Amazon.[vi] According to their thinking, ‘ethical AI’ is AI which is: responsible, equitable, traceable, reliable and governable.

Both sets of principles align spectacularly with Microsoft’s principles for ethical AI, which include the following: fairness, inclusiveness, reliability & safety, transparency, privacy & security, and accountability.[vii]

A few things should strike the reader immediately about these putative principles for ethical AI. Firstly, they read more like an IT procurement specification than an enforceable code of ethics. Secondly, they are incredibly vague (details on the definitions can be found at the links in the footnotes). Thirdly, all of these statements studiously avoid any explicit prohibitions on uses of AI such as facial recognition in government surveillance systems, the use of algorithms in criminal justice sentencing, or the creation and deployment of autonomous weapons. Of course, any limitations they do impose will be almost impossible to enforce legally.

But this is precisely the point. The tech industry and their customers don’t want any regulation of AI or biometrics, so the best way to protect themselves against such restrictions is to put forward a set of hollow regulations which they can happily live with as they go about the business of surveillance capitalism. To succeed with such a plan the tech industry must create the illusion of protection for the public while ensuring that they, and the governments they collaborate with, have unfettered access to every aspect of every person’s life on this earth.

The pious tones of the Rome Call for AI Ethics and those of Microsoft, Google and the Pentagon cannot be taken at face value. They serve a specific purpose: to mollify public anxiety while the real agenda of surveillance and control continues unabated. The fact that the Catholic Church has become embroiled in this ruse is an absolute disgrace. If the Vatican were really interested in preserving human dignity it would seek to impose a just balance of power between data harvesters and data subjects. But true to historical form, the Catholic Church is once again issuing platitudes on behalf of the powerless while busily aligning itself with the real power structures of the secular world.

In fact, Pope Francis has some pedigree in this respect, which is no doubt why he was picked for the job of Pope. During Argentina’s ‘Dirty War’, Jorge Mario Bergoglio, as he was known then, was the Jesuit Provincial of Argentina – the highest-ranking Jesuit in the country. Both he and his Church superiors openly aligned themselves with the military dictatorship led by General Jorge Videla, which disappeared some 30,000 civilians between 1976 and 1983. During this time, it is alleged, Bergoglio identified left-leaning priests as dissidents, refusing to extend them Church protection and leaving them exposed to kidnap and torture by the military.[viii] His response to the sexual abuse charges that have erupted on his watch has been similarly ambiguous. No surprise, then, that he should provide moral cover for the predations of the technology industry and the military.

Despite the ethical cynicism being displayed by the Vatican, the Pentagon and Big Tech, many technologists are starting to speak up. Both Microsoft and Google have suffered recent push-back from their staff regarding the use of company technology in military applications.[ix] In a letter addressed to top executives, a group of Microsoft workers claimed they did not sign up to develop weapons and demanded that the company drop a controversial $479 million contract to supply the US military with 100,000 HoloLens headsets designed for combat training.[x]

The company has also faced internal criticism of an AI program used by US Immigration and Customs Enforcement (ICE) to separate migrant children from their parents in detention centres and retain 97% of low-risk detainees in detention.[xi] Such concerns have spread to other technology providers.[xii] Google recently had to withdraw from a controversial program, Project Maven, under which it was developing target-identification systems for the Pentagon’s pilotless drones.[xiii] Such resistance is a positive sign. When the people objecting to the militarization of technology are some of the most knowledgeable and engaged technologists on the planet, it lends significant moral force to the case for genuine regulation of these technologies. To get a visceral appreciation of the case for regulation, watch the compelling video produced by Stuart Russell, Professor of Computer Science at Berkeley.[xiv]

In response to employee activism, most tech companies have come out swinging. In many cases their biggest single customer is the government, and they are not going to let any issue, or group of individuals, interrupt the reciprocal flow of information and cash between them. In a statement released last October regarding the HoloLens project, Microsoft President Brad Smith said the company “…was committed to providing our technology to the US Department of Defense, which includes the US Army under this contract. As we’ve also said, we’ll remain engaged as an active corporate citizen in addressing the important ethical and public policy issues relating to AI and the military.”[xv]

Misdirection, Diversion, Distraction

When industry’s attempts to resist regulation directly founder, the next best strategy is diversion. In January 2010, when Mark Zuckerberg announced that “privacy was no longer a social norm”,[xvi] he was directly challenging the rights of natural persons to any form of legal protection from the predations of his company. At the time most people disagreed with him, arguing that they really did value their privacy (and that of their children) online. But what Zuckerberg was actually telling people was that privacy as a legal constraint on data-centric corporations was already irrelevant; they just didn’t know it yet. Fast forward nine years to March 2019 and the Facebook CEO seemed to have experienced a change of heart, announcing, after several rounds of House and Senate hearings, that “private interactions would be a foundational strategy for Facebook going forward”. In a major announcement at the time, he highlighted several policy pillars that would help shape a “new Facebook”, including private interactions, encryption, reducing permanence, safety, interoperability and secure data storage.[xvii] But within weeks of this avowal of respect for users’ privacy, Facebook’s lawyers were in a California court arguing that users had no legal claim to privacy, since any use of the platform negated the user’s expectation of privacy:

“There is no privacy interest, because by sharing with a hundred friends on a social media platform, which is an affirmative social act to publish, to disclose, to share ostensibly private information with a hundred people, you have just, under centuries of common law, under the judgment of Congress, under the SCA, negated any reasonable expectation of privacy. There is no expectation of privacy when you go on a social media platform…”[xviii]

This duplicity is characteristic of the way large technology companies go about their business and it underlies the inherently predatory nature of their data-centric business models. It also highlights the misdirection and subterfuge surrounding the narratives that make up our public discourse on informational ethics.

The public are constantly sold false narratives as solutions to the deep structural problems associated with digital technologies. “Privacy” rather than “justice” has become the mantra. While every tech company in the world is prepared to make public commitments to privacy, few of them will address the issue of data justice, i.e. a rebalancing of the massive asymmetries of informational power that exist between natural and corporate persons. Nor are any of them interested in recognising personal data sovereignty, i.e. enduring, exclusive, alienable rights vested in data subjects (you and me) versus those assigned to data harvesters and processors.

As digital technologies remove one frontier after another of insight into the lives of natural persons, we find technology companies and governments are increasingly relying on intellectual property rights, trade secrets and national security to exclude any insight into their workings. This is resulting in a creeping structural injustice where we have two classes of persons under the law: the class of natural persons who have few legal defences against the onslaught of predatory data harvesting companies and no rights to exclusive ownership of their personal information, and the class of corporate persons (and their agents) who enjoy legal protection from almost all forms of scrutiny and exclusive, enduring, alienable rights to any and all data they harvest from others. Clearly, the concept of “privacy” is serving two masters in this debate. This conflict makes it a weak tool for protecting the rights of natural persons. It would be much better if we relied on the far more robust legal principle of “justice” for protection against the predations of Big Tech. Only, that’s not going to happen so long as corporations are setting the agenda.

Another significant form of misdirection in the debate about AI ethics is the focus on tactical applications of AI rather than strategic or systemic ones. If the raison d’être of the Pentagon is to impose and sustain injustices all around the world, how can any application of AI in that endeavor be considered ‘ethical’? If we are prepared to support an economic system that exploits weak and vulnerable nations abroad as much as it exploits weak and vulnerable individuals at home, how can we claim to be using our technology ‘ethically’? This raises the question: is AI really the problem? In the past, the injustices inherent in economic and geopolitical systems were limited by various natural constraints: cost, time, distance and uncertainty. As AI and other technologies eliminate these constraints, we are left facing the raw truth of what it is we stand for. And it’s not pretty.

Ethics Washing

These days AI ethics programs are widespread in Silicon Valley. The biggest companies have even appointed Ethics Officers (nominal executives with important titles but no real authority) to manage their public relations around contentious projects. The ethical codes these officers come up with typically consist of vague promises rather than hard rules restricting company activities, let alone mechanisms to enforce them. Almost every code of corporate ethics lacks any form of independent assessment or enforcement. Then there is the issue of definition: many of the terms used to frame these AI codes are notoriously difficult to define legally. Does a term like “transparency” mean that an algorithm needs to be explainable to the people impacted by its output, or just to its creators and users? Do prisoners denied parole by an algorithm, or candidates denied a job by an automated assessment tool, get to query the processes of the instruments making these decisions? Would they even know an AI was involved?

As it happens, many deep learning algorithms are simply not explainable in human terms, since they don’t ‘think’ in a way that makes any sense to humans. While humans approach a game of chess or Go with a strategy, deep nets like AlphaZero (which learned to play Go without relying on any human game inputs) learn by deploying a variety of stratagems including, for example, analysing board positions and determining which next move has led to the most wins across all the games it has played against itself. Once the connections and weights in its neural network are set by this process, it is nearly impossible to explain why one link is activated at one time and another is not.
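To see why such systems resist explanation, consider a minimal sketch in Python of the win-rate idea described above. This is a hypothetical illustration, not AlphaZero’s actual method (which couples a deep neural network with tree search); all names and numbers below are invented for the example:

    # Toy illustration: pick the move with the best win rate across simulated games.
    # Real systems replace this simple table with millions of learned network weights.
    from collections import defaultdict
    import random

    stats = defaultdict(lambda: [0, 0])  # move -> [wins, games played]

    def record_result(move, won):
        """Update simulation statistics for a candidate move."""
        stats[move][0] += 1 if won else 0
        stats[move][1] += 1

    def best_move(candidates):
        """Choose the candidate move with the highest observed win rate."""
        def win_rate(m):
            wins, played = stats[m]
            return wins / played if played else 0.0
        return max(candidates, key=win_rate)

    # Simulate outcomes for three hypothetical moves with different
    # underlying win probabilities.
    random.seed(0)
    for move, p_win in [("a1", 0.40), ("b2", 0.60), ("c3", 0.50)]:
        for _ in range(1000):
            record_result(move, random.random() < p_win)

    print(best_move(["a1", "b2", "c3"]))  # -> "b2", the statistically strongest move

Even this toy version decides by aggregate statistics rather than articulable reasons; scale the table up to a trained neural network and the ‘why’ behind any single decision becomes effectively unrecoverable.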

If the ethics initiatives around the use and deployment of AI are to provide any tangible benefit, they must include practical mechanisms for turning principles into practices which are subject to regulation and independent external oversight. Any regulations must be backed up by formal legal sanctions and mechanisms for auditable insights into organizational as well as algorithmic decision-making procedures. No defensible claim to ethical AI can avoid the need for legally enforceable restrictions on the deployment of AI technologies – including technologies of mass surveillance and automated violence. Unless such restrictions are imposed, codes of ethics like those set out in the Rome Call for AI Ethics will have no more force than a pontifical decree.[xix]

Conclusion

The challenge for everyone concerned about ethical AI is to focus on the structural aspects of the problem, not the tactical ones. When it comes to protecting ourselves from corporate predation, we need to rely on robust, legally enforceable principles such as justice, which can regulate the development and application of AI and biometric systems, not weak ones like privacy, which has ambiguous application and a limited procedural history. Ultimately, intelligent technologies are forcing us to confront endemic injustices which form the very backbone of our legal, economic and political systems. To blame the technology for the problems it brings into stark relief is disingenuous. In many ways technology is just a mask. The real terror lies behind the mask, not within it. That’s what we need to confront.

Alan Hamilton.

Footnotes

[i] https://www.defense.gov/Explore/News/Article/Article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/

[ii] https://www.romecall.org/

[iii] https://www.academyforlife.va/content/dam/pav/documenti%20pdf/2020/CALL%2028%20febbraio/AI%20Rome%20Call%20x%20firma_DEF_DEF_.pdf

[iv] See note [i].

[v] https://www.globalresearch.ca/us-has-killed-more-than-20-million-people-in-37-victim-nations-since-world-war-ii/5492051

[vi] https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/

[vii] https://www.microsoft.com/en-us/ai/responsible-ai

[viii] https://www.globalresearch.ca/washingtons-pope-who-is-francis-i-cardinal-jorge-mario-bergoglio-and-argentinas-dirty-war/5326675

[ix] https://futureoflife.org/open-letter-autonomous-weapons/

[x] https://www.bloomberg.com/news/articles/2018-11-28/microsoft-wins-480-million-army-battlefield-contract

[xi] https://www.theverge.com/2018/6/21/17488328/microsoft-ice-employees-signatures-protest

[xii] https://twitter.com/MsWorkers4

[xiii] https://www.theverge.com/2018/4/4/17199818/google-pentagon-project-maven-pull-out-letter-ceo-sundar-pichai

[xiv] https://futureoflife.org/2017/11/14/ai-researchers-create-video-call-autonomous-weapons-ban-un/?cn-reloaded=1

[xv] https://blogs.microsoft.com/on-the-issues/2018/10/26/technology-and-the-us-military/

[xvi] https://www.theguardian.com/technology/2010/jan/11/facebook-privacy

[xvii] https://observer.com/2019/03/mark-zuckerberg-privacy-future-facebook/

[xviii] https://theintercept.com/2019/06/14/facebook-privacy-policy-court/

[xix] See note [vi].

Philip Bull

Principal Consultant

3y

Thanks Al, I should look at LinkedIn more. Trust in the law? As Steve used to say, you have to trust some people sometimes.

Nisarga Gandhi

Program Management, Presales & Solutions, ITSM, Ethics & AI

3y

Excellent thoughts Alan. What this means is that the military, religion and czars are all paying lip service to ethics, leaving democratically elected governments to look after the people who elected them :) Having said this, ethics and morality are topics best evaluated on intent rather than policies, and that is where the challenge lies. We are on a journey to artificially codify humans, ignoring the intent aspect completely, and maybe rightly so, for how does one codify intent? How does one measure purity of heart? In our relentless pursuit of power and money, ethics and morality lose out.

Tim Janisch

Senior Managing Consultant - Capgemini Invent

3y

Thanks for reposting Alan. I missed this thoughtful post the first time around.
