An Open Letter to Mr. Pichai, CEO of Alphabet, parent of Google
Ryan Carrier, FHCA
Executive Director at ForHumanity/ President at ForHumanity Europe
Dear Mr. Pichai,
The entire world of AI Ethics saw the piece in Wired where Tom Simonite wrote that “Google Offers to Help Others With the Tricky Ethics of AI.” ForHumanity, a 501(c)(3) public charity staffed with AI Ethics volunteers dedicated to examining and mitigating the downside risks of society’s autonomous systems, noticed. While much of the AI Ethics community responded with vitriol or cries of “Oh, the temerity!”, and others broke into outright laughter listing off Google’s previous missteps, the ForHumanity Fellows were meeting to advance the work of Independent Audit of AI Systems, and so we discussed a legally grounded, fact-based, more circumspect and inclusive response for Alphabet to consider. What follows is our open letter for your consideration.
The Wired article referenced three points which we will address in this letter: AI Ethics, Audit, and Audit Rules/Standards. We will discuss all three and help you understand the role that Google may play. It may seem like an equal act of temerity to say “the role that Google may play,” but you will see that the law has already laid out the playing field. We will also introduce the crucial concept of Independence, which is not merely a feel-good idea but a legal term of art, defined closely and deliberately in the Sarbanes-Oxley Act of 2002.
We commend you on your choice to act on the correct side of the discussion — AI Ethics are good and necessary. The notion of operating a company and embedding ethical rules/standards into each of your autonomous systems is good and worthy. Google should be celebrated for making this choice - it is the right choice. Now that you have come to the party, allow us to show you to your seat at the table. The table is large, it seats thousands, if not more — as all are welcome to this table. Your seat is right here and it is not at the head of the table. The table is round, like King Arthur’s fabled Round Table - renowned for demonstrating that all at the table are equal. Your chair is three feet off the ground, with four legs and a back. The seat is not comfortable or padded. It is not overly wide or ornate, and in fact, it matches every other chair at the table. It is not meant to be comfortable either, as we are all here to do hard work for the good of all humans who use autonomous systems. “Human-centric” is the design mantra and that is the work we will do at this table where we are drafting, critiquing, and editing audit rules/standards.
The round table’s mission is clear: “to ensure that all humans can trust autonomous systems. We will examine Ethics, Bias, Privacy, Trust, and Cybersecurity needs by-design.” We are building an #infrastructureoftrust in our autonomous systems. At this table, where dedicated volunteers come to craft audit rules/standards, ethical behavior starts with honoring humility and equity - all are welcome, all voices will be heard and considered — in fact, your designers and developers are all welcome to join you at the table. We welcome Googlers. We welcome all because we know that each person will bring unique expertise, knowledge, and valuable perspective to the work. This is the work of a rules/standards-setting body. We are crowd-sourcing and consensus-building. We are iterating and collaborating. All of this is done transparently; no one will be excluded, no one will dominate, and no one will be marginalized.
This table is not merely the stuff of legend, either. ForHumanity and the ForHumanity Fellows have established a framework for auditing autonomous systems in every corner of the economy. The audit rules and standards will be applicable to all autonomous systems, and as new systems arrive, new rules will be created and adapted to govern their functionality. In fact, Google is welcome to demonstrate its commitment to ethical and human-centric design today. ForHumanity and the Fellows have drafted a 135-line audit dedicated exclusively to “Schedules/Calendars.” It is a small subset of the larger work, Independent Audit of AI Systems (IAAIS). IAAIS has rules that were established transparently and objectively, designed for the good of humanity by examining function-level impact on ethics, bias, privacy, trust, and cybersecurity. It offers an opportunity to resolve known issues and undiscovered problems in order to achieve compliance with the audit. It is an opportunity to work with a worldwide set of reviewers and vetters to craft this audit into the system humanity needs to assure an infrastructure of trust in our autonomous systems.
Google's participation would demonstrate a commitment to AI Ethics as yet unseen from most major technology companies. To join the discussion, we invite you to register here: https://www.forhumanity.center/registration
At the round table where societal rules and standards for autonomous systems are created on behalf of humanity in the areas of Ethics, Bias, Privacy, Trust, and Cybersecurity — all are equal, all are represented, and all voices matter.
This leads us to the next point — the business of AI Ethics audit/standards: auditing, and supplying the systems of internal controls and compliance with the audit standards. A story will help frame the issue. On December 2nd, 2001, a firm called Enron went bankrupt. Fortune had named Enron "America's Most Innovative Company" for six consecutive years prior to its bankruptcy. It had peaked two years before at a market capitalization of $70 billion. By 2002, it was worth $0. At about the same time, WorldCom, which had reached a height of $186 billion, was also in the process of going bankrupt. These two high-profile corporate failures, directly linked to failed auditing and accounting practices, led to the passage of the Sarbanes-Oxley Act of 2002, which laid out some rules with which you will want to be familiar. First, it should be noted that Arthur Andersen, an auditor, was the firm responsible for most of the external audit work for both companies (KPMG was the external auditor at WorldCom’s bankruptcy, having acquired the business after Arthur Andersen failed in the wake of the Enron collapse). Note that Arthur Andersen no longer exists. Being an auditor requires that the only relationship you have with your client is that of an auditor — to be an auditor means you cannot be in the business of providing other services to your clients. You charge them fees for your only service, and you are liable if you assure compliance and that client is later shown to have been non-compliant. Being an auditor is all about managing downside risk. It is also about being Independent. Arthur Andersen breached those covenants, and the firm ceased to exist.
Consider this closely: the Sarbanes-Oxley Act of 2002 forbids the external auditor from benefiting in any way from its audit client EXCEPT for audit fees. So Google may become an auditor if you and your shareholders so choose, and we suspect you would be quite good at it, but the rule of Independence will require you to sell off every business unit and service-provider unit that would normally have sold to your audit clients. It is our belief that Google will make more money selling goods and services to clients to support audit compliance than engaging in external audit practices consistent with the legal term Independent. You will want to focus specifically on Title II of the Sarbanes-Oxley Act. Title II consists of nine sections and establishes standards for external auditor independence, designed to limit conflicts of interest. It also addresses new auditor approval requirements, audit partner rotation, and auditor reporting requirements, and it restricts auditing companies from providing non-audit services (e.g., consulting) to the same clients. Mr. Pichai, the choice is available to Google. You are welcome to become an AI Ethics auditor, but as the law dictates, you will have a lot of remediation to do in order to meet the requirements of Independence. One last thing: this isn’t merely our opinion. You might want to check with the Public Company Accounting Oversight Board (PCAOB), which exists to “oversee the audits of public companies and other issuers in order to protect the interests of investors and further the public interest in the preparation of informative, accurate and independent audit reports.”
Independent auditors shall not have a single additional relationship with the firm that they audit. Independence, well defined by the law, assures that the auditor is at risk for affirming compliance and has zero upside to gain from the company it audits beyond the audit fees.
This leads us to our final point, Mr. Pichai: you are not alone among major corporations in believing that you can set the rules and standards for Ethics, Bias, Privacy, Trust, and Cybersecurity of our autonomous systems. Other firms, even audit firms, have created guidelines and frameworks for adapting ethical principles around autonomous systems. We don’t believe this represents ill intent; in fact, it is likely the opposite. But not one of these frameworks will ever be considered the rules/standards for the industry. We are familiar with these frameworks, which represent fantastic, ethical work, and they will be very useful when adapted and blended into the consensus-driven, collaborative, iterated audit rules/standards being created. As we have mentioned, there are seats at the table for their authors as well. These are intelligent people, like your team at Google, Mr. Pichai, with great perspective, expertise, and skill. All are welcome. But it is crucial that we remember what went wrong with Enron, WorldCom, and others. When the auditors and the service providers also make the rules, conflicts of interest abound. Rules are adapted for convenience, shortcuts are offered, tough questions go unasked, and ethical decisions are set aside for practicalities like cost, ease of compliance, and sometimes, unfortunately, profit.
Speaking of money — it is a problem in this process. When too much money flows from one entity, from one sector of the economy, or from a source which might be deemed to have “an ax to grind,” it ruins the work. It discredits the outcome and raises questions of trust about the motivations behind the audit rules and standards. Conflicts of interest result when pay-to-play and a lack of transparency and inclusiveness overwhelm the process. The work of setting the audit rules/standards must be human-centric and free from conflicts of interest. Being a non-profit, being mission-driven, and having objective partners at the table does not by itself neutralize the conflicts of interest that money creates. To build an #infrastructureoftrust, the process must be free of these conflicts.
Mr. Pichai, you are right to pay attention to Ethical AI for Google. First, it is important to your customers. Second, lawmakers and regulators are paying more attention and acting. Third, the industry of Independent Audit of AI Systems is in its infancy. This burgeoning industry will likely generate substantial revenue over the next decade. New undergraduate- and master's-level degree programs will be created to audit, monitor, remediate, control, and implement ethical principles in all of our autonomous systems, as well as in the internal risk and control systems responsible for compliance. Your shareholders should be pleased with your wisdom and foresight in moving Google into this field. You will simply have to choose your role, and you only get one choice:
Choose ONE, Mr. Pichai:
- Audit rules/standards body;
- Independent Auditor; or,
- Compliance and system provider.
We applaud the message you have shared about the role of Ethical AI and its importance, and we agree. The ForHumanity Fellows live these words day in and day out through their volunteer commitment to advancing the audit rules/standards known as Independent Audit of AI Systems. We welcome you to your seat at the table. There is much you can do to advance this worthy mission, and we look forward to seeing the hallmarks of Ethical AI in your products and services - transparency, willful compliance, and human-centric design in the areas of Ethics, Bias, Privacy, Trust, and Cybersecurity.