The First Intergovernmental Standard On Regulating AI Has Arrived - How Does It Fare?

For the most part, the world is connected: we share cultures, fashion, languages, cuisine, commerce, clients, data and, of course, modern, pervasive technologies. Because technologies and their utility are borderless, a harmonised approach to their regulation is essential if the world's nations and their economies are to smoothly adopt new technologies into industry and co-operate in the global data economy. Artificial Intelligence ("AI") (in the broadest sense of the term) is one such technological development that will proliferate across the globe and will require harmonised regulation.

Previously, we witnessed the adoption of the Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data by the Organisation for Economic Co-operation and Development ("OECD"), which have since been codified (in varying forms and to varying extents) under national data protection laws and regional regulations.

Notably, these Guidelines stemmed from a prior Recommendation of the OECD dated 23 September 1980.

Now, the first intergovernmental standard on AI has been recommended by the OECD under OECD Legal Instrument 0449 ("the Recommendation"). The Recommendation has been adhered to by 42 countries since 22 May 2019, and its significance should now be apparent: there is a strong likelihood that its principles will ultimately be transposed into national laws and/or regulations.

Below I introduce the Recommendation, highlight its core principles and offer some thoughts thereon.

   _________________________________________________________________

THE RECOMMENDATION

Quoting from the Recommendation, its preamble includes a recognition that:

  • "AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future";
  • "trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI..."; and that
  • "given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context."

The Recommendation proposes definitions of key terms such as “AI system”, “AI knowledge” and “AI actors” and contains two substantive sections:

 (1) Principles for responsible stewardship of trustworthy AI; and

 (2) Recommendations for national policies and international co-operation for trustworthy AI.

The Principles For Responsible Stewardship Of Trustworthy AI

The first section of the Recommendation proposes five principles for the responsible stewardship of AI, namely: (1) Inclusive growth, sustainable development and well-being; (2) Human-centred values and fairness; (3) Transparency and explainability; (4) Robustness, security and safety; and (5) Accountability. These principles are reproduced verbatim below.

1        Inclusive growth, sustainable development and well-being

Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.

2        Human-centred values and fairness

a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

3      Transparency and explainability

AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art: 

i. to foster a general understanding of AI systems, 

ii. to make stakeholders aware of their interactions with AI systems, including in the workplace, 

iii. to enable those affected by an AI system to understand the outcome, and, 

iv. to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

4       Robustness, security and safety

a) AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk. 

b) To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.

c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

5       Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.

   _________________________________________________________________

SOME INITIAL THOUGHTS

Rule of Law, Human rights And Democratic Values - [Principle 2]

AI systems that reach outcomes through ADM and Profiling practices have the potential to be biased and can significantly prejudice Stakeholders (consider, for example, the stakes involved when an AI system steps into the shoes of a judge). If AI systems learn from humans (who have the capacity to be racist, sexist or otherwise biased), then we ought to be highly responsible in designing systems that learn through observation. That said, it is welcome that the 2nd principle in the Recommendation introduces what I refer to as a form of 'fairness-by-design', whereby AI actors are recommended to "respect the rule of law, human rights and democratic values, throughout the AI system lifecycle" and "implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art."

Understanding And Contesting Outcomes/Decisions Reached By An AI System - [Principle 3]

In many instances AI actors may not understand the reasoning and/or logic behind the decision/outcome of an AI system; this is why many AI systems have been dubbed 'Black Box AI systems'. Holding the 'black box' problem in mind, the 3rd principle recommends that AI actors: "enable those affected by an AI system to understand the outcome"; and "enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision". What I find welcome here is that the transposition of this principle into law has the potential to fill a void that currently exists in data protection laws revolving around a right of explanation (which is not legally binding under the European Union's GDPR and which exists under South Africa's POPIA only in limited cases). At the same time, this principle is pushing for 'explainable AI' as opposed to 'black box AI systems' and is notably broad in requiring explainability through both the factors and the logic that were utilised by an AI system in reaching a decision/outcome. The breadth of the recommended obligations may be both welcomed and opposed, depending on which side of the fence you are standing. Ultimately, the utility of this principle will likely be determined by:

  • How the terms "affected" and "adversely affected" are conceptualised under law. What I mean here is that whilst this principle can result in profound rights for data subjects, the construction of such rights can end up limiting their application (as is the case under the GDPR and the POPIA insofar as Automated Decision Making ("ADM") and/or Profiling are concerned): under both the GDPR and the POPIA, the thresholds of 'legal effects'/'significant effect' and 'legal consequences'/'substantial degree', respectively, limit the scope of data subject rights concerning automated decision making and/or profiling.
  • How legislatures choose to balance the intellectual property rights/trade secrets of AI actors and the privacy/other rights of Stakeholders.

The 3rd principle essentially requires AI actors to open the 'black box' in a manner that allows transparency for Stakeholders. How legislatures intend to enforce such an obligation, whilst allowing AI actors to protect their intellectual property and/or trade secrets, remains to be seen. In striking this balance, I would suggest that legislatures (at minimum) consider a technical solution like 'Counterfactual Explanations', as proposed by Wachter et al (Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR) and Guidotti et al (The AI Black Box Explanation Problem). Wachter et al describe 'Counterfactual Explanations' as:

"[Providing] information to the [Stakeholder] that is both easily digestible and practically useful for understanding the reasons for a decision, challenging them, and altering future behaviour for a better result".
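To make the idea concrete, here is a minimal sketch of a naive counterfactual search in the spirit of Wachter et al. The scoring function, feature names and thresholds are entirely hypothetical, invented purely for illustration; real counterfactual methods search over many features and optimise for the smallest overall change.

```python
# Illustrative sketch only: a toy counterfactual search.
# The credit-scoring function below is hypothetical, not a real model.

def credit_score(income, debt):
    """Hypothetical linear scoring model; the application is approved when score >= 0."""
    return 0.5 * income - 0.8 * debt - 10.0

def counterfactual_income(income, debt, step=0.5, max_steps=200):
    """Find the smallest income increase (in fixed steps) that flips a rejection."""
    for i in range(max_steps + 1):
        candidate = income + i * step
        if credit_score(candidate, debt) >= 0:
            return candidate
    return None  # no counterfactual found within the search range

income, debt = 30.0, 10.0
assert credit_score(income, debt) < 0       # the application is rejected
needed = counterfactual_income(income, debt)
print(f"Approval would require an income of about {needed:.1f}")
```

The output ("you would have been approved had your income been X") is exactly the kind of "plain and easy-to-understand information" the 3rd principle contemplates, and notably it explains the decision without disclosing the model's internals.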

Other potential solutions for meeting the obligations of principle 3 include the adoption of 'cognitive AI' systems that can provide both a 'top-down' explanation for end-users (simplified, specific explanations of an outcome) and 'bottom-up' explanations for AI actors, their engineers and/or regulators and courts to understand the logic that drove the outcome of the AI system.

  • How legislatures end up embedding this principle into law in such a manner that it does not overlap with current statutory rights relating to ADM/Profiling.

Above I described how certain data protection laws already consider ADM and/or Profiling, albeit to a limited extent in that the rights provided are restricted on various bases. Therefore, these provisions will either need to be amended, supplemented or repealed to align with the obligations and rights recommended within principle 3.

Traceability And Analysis of AI System Outcomes - [Principle 4]

In securing AI systems, the 4th principle recommends that AI actors:

"ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art."

In essence, the obligations created under the 4th principle are a form of data retention, and consequently an interesting question arises: how compatible is this principle with that of data minimisation/minimality under current data protection laws? Whilst "datasets" and "decisions made" may contain/constitute personal information and therefore invoke data minimisation requirements, it is not as clear to me whether "processes" would contain/constitute personal information. In any event, legislatures will need to clearly draw the lines between data minimisation and data retention for traceability.
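One way to reconcile the two obligations can be sketched in code. The example below is my own illustration, not anything prescribed by the Recommendation: a traceability record stores the model and dataset versions and the outcome, but retains only a salted hash of the applicant's inputs, so a later inquiry can be matched to a decision without the raw personal data being kept.

```python
# Illustrative sketch: a traceability record that supports later analysis
# of an AI system's outcome while limiting retained personal information.
import hashlib
import json
from datetime import datetime, timezone

SALT = b"example-salt"  # hypothetical; in practice managed as a secret

def record_decision(model_version, dataset_version, inputs, outcome):
    """Build an audit-trail entry; raw inputs are hashed, not stored."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    digest = hashlib.sha256(SALT + payload).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "dataset_version": dataset_version,
        "input_digest": digest,  # allows matching an inquiry, not reconstruction
        "outcome": outcome,
    }

entry = record_decision("v1.2", "credit-data-2019-05",
                        {"income": 30, "debt": 10}, "rejected")
print(entry["outcome"], entry["input_digest"][:16])
```

Whether hashing alone satisfies data minimisation is itself a legal question (hashed data may still be personal data where re-identification is possible), which underlines the point that legislatures will need to draw these lines explicitly.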

CONCLUDING REMARKS

In my opinion, the five principles address the most pressing AI-specific policy, legal and ethical issues that we face today and offer a strong foundation from which national policies and international co-operation may stem. I look forward to observing how these principles are transposed into national laws.

What are your thoughts on the Recommendation?

   _________________________________________________________________

DISCLAIMER: Any views or opinions represented in this article belong solely to the author and do not represent those of any people, institutions or organisations that the author may or may not be associated with in his professional or personal capacity. All views are provided solely for informational purposes and should not be construed as constituting legal advice in any form whatsoever.

Alon Alkalay