Should Governments Do Human Rights Impact Assessments for Digital Technologies?
Lamin Khadar
Head Advisor Human Rights and Social Impact, Statkraft and Global Adjunct Professor of Law, NYU Paris
Digital technologies have the potential to positively transform societies and economies. However, they also pose significant challenges and risks, especially when they are used by governments or when governments fail to regulate their use effectively.
[The comments below emerge from several collaborations and discussions over the previous three years between NYU Paris, Dentons and Microsoft. I am grateful to Dentons lawyers (in particular Joanna Khatib and Amelia Retter), NYU students and Microsoft personnel (in particular Bernard Shen) for their contributions towards developing the thoughts expressed here. However, the thoughts expressed below are my own opinions and do not necessarily reflect the views of any of the organizations I am affiliated with.]
Generative AI can enhance human creativity, innovation and productivity. However, it can also be used to create realistic but deceptive or harmful content that can manipulate, exploit, or harm people's privacy, dignity, reputation, or autonomy.
Social media creates opportunities for mass mobilization for social causes, civic engagement, and collective action for positive change. However, it also enables the spread of misinformation, hate speech, incitement to violence, surveillance, censorship, and manipulation of public opinion.
Central bank digital currencies (CBDCs) could provide individuals with a fast, secure, and inclusive way to access and use central bank money in the digital economy. However, CBDCs can also enable excessive surveillance, censorship, and control of financial transactions and personal data by the State.
Tools and weapons. This is a tale as old as time. Fire was used to provide warmth and nourishment – but it was also used to burn down villages. TV was used to educate populations, build solidarity and provide urgent information – but it was also abused for telemarketing fraud that took advantage of vulnerable individuals to extract money. The internet can be used to connect disparate communities and share ideas around the globe – but it can also be used to establish communities of pedophiles to more effectively abuse children.
For emerging digital technologies specifically, where innovation and adoption move at lightspeed globally, a digital governance gap emerges. Digital technology often escapes regulation simply because regulators are not fast enough, are under-resourced, and/or do not have a sufficiently nuanced understanding of emerging technologies to keep up. At other times, digital technology may escape effective regulation because a lack of rules is coupled with a lack of enforcement: either an unwillingness or an inability to enforce rules (whether on the part of the State or businesses). If you then factor in the ability of digital technologies to be deployed and adopted globally almost instantaneously, the scale of the “digital governance gap” becomes apparent.
In theory, individuals should enjoy the same human rights online, in the digital realm, as they do offline. In practice, a number of legal and practical challenges arise at the intersection between private technology and the State. Several common challenges have emerged.
One common scenario is where, in the absence of effective regulation, digital technologies are deployed in ways that adversely impact the human rights of users, with the impacts inherent in the technology itself. For example, social media technologies that allow companies to gather vast amounts of sensitive user data (ethnicity, sexual orientation, health status, etc.), violating the right to privacy. A second scenario arises where the State makes requests of digital technology providers that may adversely impact the rights of users of those technologies. For example, a State may request that certain user content be censored, in violation of the right to freedom of expression. A third scenario relates to the ways in which the State itself uses digital technologies in ways that may violate the rights of citizens. For example, police facial recognition systems that are unable to effectively differentiate between people of color due to limited training data sets, violating the right to equal treatment. Or a central bank digital currency programmed so that the State can prevent certain civil society organizations from receiving donations at the press of a button, violating the right to freedom of association.
International human rights law – in principle binding on States only – requires that States respect, protect and fulfil the human rights of individuals within their power or control. While this includes a duty on the State to protect against human rights abuse by private enterprises, States are not in principle responsible for such abuse. States may nevertheless breach their international human rights law obligations where such abuse can be attributed to them, or where they fail to take appropriate steps to prevent, investigate, punish and redress abuse by private enterprises. In the context of digital technologies, this status quo is problematic both insofar as the enterprises conceiving and developing the technologies are not directly bound by human rights law themselves, and insofar as States are left to police technologies that they may not sufficiently understand, are not incentivized to police, or are simply unable to police effectively.
Self-regulation, Private Commitments and Codes of Conduct
Soft law norms have been developed to fill the digital governance gap. Typically, soft law norms developed in relation to digital technology seek to build on the existing international human rights law framework, i.e., both the “hard law” frameworks (comprising the international human and labor rights, corruption and transparency, and environmental conventions binding on States) and the “soft law” frameworks (most prominently the UN Global Compact, the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises). These soft law norms develop both sector-specific and technology-specific standards, guidelines and codes of conduct for digital technology companies seeking to integrate human rights norms into their operations. Such norms are sometimes developed by governments, regulators or international organizations, and at other times by multi-stakeholder initiatives or by companies acting alone. They may be addressed to companies, to public actors, or to both.
One of the most notable and representative efforts in this regard is the Global Network Initiative Principles, first developed in 2008. GNI is a multi-stakeholder initiative established by leading ICT companies (including Google, Yahoo, Facebook and Microsoft) and universities. The fundamental challenge it seeks to address is framed in the following way:
“Every day, technology companies receive requests from governments around the world to censor content, restrict access to communications services, or provide access to user data. Given this reality, how can technology companies best respect the freedom of expression and privacy rights of their users – wherever they operate?”
Based on the international human rights law framework, the GNI has developed a set of standards that participating companies should follow to mitigate privacy and freedom of expression violations caused by governments. Under Principle 2, participating companies are required to respect and work to protect the freedom of expression of their users by seeking to avoid or minimize the impact of government restrictions on freedom of expression. Under Principle 3, participating companies are required to employ protections with respect to personal information in all countries where they operate, in order to work to protect the privacy rights of users.
The GNI Principles are just one example of a growing body of soft law norms related to human rights and digital technology. Other (non-exhaustive) examples include the OECD AI Principles, the EU Commission Code of Conduct on Countering Illegal Hate Speech Online (signed by Facebook, Microsoft, Twitter and YouTube), the Christchurch Call (signed by over one hundred States, tech companies and civil society organizations), the Council of Europe Guidance Note on Content Moderation, and the New Zealand Algorithm Charter.
As mentioned, in addition to norms developed by several actors working collectively, larger digital technology companies also seek to use their leverage by developing their own standards, which are then applied internally and throughout their business relationships with governments, other companies and users. Examples include the Microsoft and Google AI Principles.
Human Rights Impact Assessments in the Policy or Public Procurement Process
One of the key tools that has emerged within business practice to promote respect for human rights and address some of these issues is the Human Rights Impact Assessment (HRIA). Under the UNGPs, businesses have a responsibility to respect human rights and are called upon to conduct HRIAs. Many digital technology companies have incorporated HRIAs into their standard business practice. HRIAs in this context seek to analyze the effects that the activities of digital technology enterprises have on rights-holders such as users and consumers, workers, local community members and others.
Recognizing that under international human rights law (and as reiterated in the UNGPs) States are required to protect and fulfil the human rights of individuals within their power or control, a question arises: should States, like enterprises, have the responsibility to conduct HRIAs? In particular, when governments regulate digital technologies (e.g., government regulation of online content) or use digital technologies (e.g., government use of facial recognition technology), are they required to make use of HRIAs, and do they in practice?
Certainly, national constitutional orders, human rights laws and regulatory regimes provide for various forms of horizontal checks and balances which seek to ensure that human rights are protected when the State is regulating business activity or procuring from businesses. However, can these various duties and practices be described as HRIAs and are they effective at protecting human rights in the context of digital technologies? To what extent are they mandatory, participatory, or transparent?
RightsCon Community Lab
On 7 June 2021, during RightsCon Community Lab 12273, several global business and human rights experts facilitated a discussion centered on the idea of “forging a global norm for governments to do human rights impact assessments when they regulate or use digital technologies.”
The discussion focused in part on identifying existing examples—both HRIAs in other human rights issue areas and impact assessments from other fields and industries—that could serve as models for HRIAs focused on digital technology. It was noted that while widespread adoption of HRIAs by States should be the goal, the obligation for governments to conduct HRIAs already exists. Accordingly, the discussion focused less on ‘creating’ a norm for HRIAs around digital technology and more on strategies for designing effective multistakeholder HRIA processes and achieving their widespread use.
Participants raised a range of points during the discussions.
During the wrap-up, facilitators emphasized key takeaways from the breakout rooms. They noted that while there are existing HRIA practices already in place (within the EU, the Council of Europe, and the US DOJ, for example), questions remain about the level of consultation and transparency in HRIAs across the board. They also emphasized that there is no norms gap around States conducting HRIAs on the use of digital technologies, as a norm of accountability already exists. Instead, more must be done to ensure this responsibility is discharged consistently, transparently, and in consultation with all relevant stakeholders, including in the global South. Finally, HRIAs must also be accompanied by legitimate avenues for remedy when harms do occur.
NYU Paris, Dentons and Microsoft research project
Between 2020 and 2022, international law firm Dentons collaborated with students at NYU Paris and with Microsoft on a research project concerning States’ duties to conduct HRIAs when regulating or procuring digital technologies. The research comparatively explored policy and regulation in relation to a range of digital technologies, including digital media, artificial intelligence, digital surveillance and facial recognition technology. The collaboration involved comparative research in twelve countries (Australia, France, Honduras, India, Kenya, New Zealand, the Philippines, Singapore, Spain, the United Kingdom, Venezuela and Vietnam) and the EU as a regulator. The objective was to pull together selected comparative findings and explore the status quo in relation to the protection of human rights in the regulation and use of digital technologies by the State.
The research explored three sets of questions:
A. Are there existing bases/grounds/situations/mechanisms (under international treaty or convention, national law, human rights law, government policy, or otherwise) where governments are required to conduct HRIAs prior to enacting legislation or regulations, or prior to or after engaging in governmental action?
B. What are examples of HRIAs (or other checks and balances) in the context of governments regulating or using digital technologies?
C. Can we learn from examples of bases/grounds/situations/mechanisms where governments are required to conduct other types of impact assessments prior to regulating or engaging in certain activities (e.g., in the environmental sphere)?
In the following sections, I share some of the findings.
A - International and national law duties to conduct HRIAs
Duties under international human rights law
While there may be no explicit horizontal duty on States to conduct HRIAs within international (human rights) law, there are various duties and rights within international human rights law which, taken together, constitute the functional equivalent of a duty to conduct an HRIA (although not specifically with respect to the regulation and use of digital technologies). These are: the duty to ensure coherence between national law and international treaty obligations; the duty to undertake due diligence during the legislative and policy process (especially where balancing between competing interests is required, in complex policy areas, or where large numbers of identifiable - and especially vulnerable - rights holders are expected to be affected); and the right of public participation.[i]
Ex ante human rights impact assessments at the national level
In the findings, a distinction is made between two different forms of ex ante (before legislation is passed) human rights assessment: a compatibility statement and an impact assessment. Of the States studied, New Zealand had the most comprehensive duty under national law to analyze proposed legislation for coherence with international human rights obligations, although even this analysis was not exhaustive.[ii] In Australia,[iii] New Zealand,[iv] the European Union,[v] and the United Kingdom,[vi] there was a duty to complete a statement of the compatibility of proposed legislation with specific human rights instruments. In the European Union,[vii] France,[viii] Spain,[ix] and the United Kingdom,[x] impact assessments or regulatory impact reports must also be prepared for proposed legislation, which may assess (amongst other matters) its impact on human rights.
A number of issues were identified with ex ante (human rights) assessments.
Ex post human rights impact assessments at the national level
In relation to ex post assessments (conducted after legislation is passed), none of the countries studied imposed a mandatory legal duty to undertake ex post analyses of domestic legislation to ensure coherence between national law and international treaty obligations. However, existing mechanisms for general ex post reviews include the requirement in the United Kingdom for routine reviews of legislation and policies/regulations,[xi] and procedures for voluntary reviews of existing legislation in the United Kingdom, New Zealand, Australia, and France.[xii] Although none of these reviews were specific to human rights, a consideration of coherence with international treaty obligations could be made if prescribed by the body initiating the review, or if a human rights committee was tasked with undertaking the review (for example in Australia, where the Attorney General may refer an inquiry to the human rights commission). In the United Kingdom and France, the judiciary could refer legislation considered incompatible with human rights for further review (in the United Kingdom, to Parliament; in France, to the Constitutional Council), although this would likely occur only after a breach of human rights had occurred.[xiii]
A number of issues were also identified with ex post (human rights) assessments.
Duties on States to engage with civil society stakeholders at the national level
There was generally no explicit legal duty to carry out public consultations when developing legislation and policy in the countries studied, although in some States (the European Union,[xiv] Australia,[xv] and Kenya[xvi]) a general obligation of consultation (not necessarily extending to public stakeholders or experts) was required by law. Nevertheless, governments and policy makers can and do engage with the public in practice, and consultation was a matter of policy in a number of countries (for example, in the European Union[xvii] and New Zealand[xviii]). Public consultation was carried out by making draft proposals for legislation available to the public for initial consultation and feedback (for example, in the United Kingdom[xix] and Spain[xx]), via specialist committees who may consult with stakeholders in the course of their review and analysis of legislative proposals (New Zealand and the Philippines[xxi]), and through general consultations by the government (the United Kingdom).
A number of issues were identified with stakeholder engagement.
B - Human Rights Impact Assessments related to regulation and use of digital technologies by States
None of the legislative frameworks included in the study mandated HRIAs for the State’s regulation or use of digital technologies in general. The research nevertheless identified several more specific mechanisms and checks and balances applicable to digital technologies. These could be broken down into impact assessments and risk assessments related primarily to the right to privacy (required as a matter of law or policy)[xxii] and aspirational norms and soft law in the form of principles, charters or codes of conduct, specifically in the context of AI and algorithmic technology.[xxiii] Typically, such norms are generated by expert groups or committees, sometimes including external stakeholders with relevant expertise.
Interestingly, the obligations on public actors to conduct data privacy impact assessments under data protection law may incorporate many rights beyond the right to privacy and may include a broad range of digital technologies within their scope. For example, Kenyan data protection law provides that, while the rights and freedoms of data subjects primarily concern the right to data protection and privacy, other constitutionally protected rights may also be implicated.[xxiv] Supplemental guidance further specifies that processing operations considered to result in high risks to the rights and freedoms of a data subject include, inter alia: automated decision-making; the use of profiling or algorithmic means; the processing of biometric or genetic data; systematic monitoring of a publicly accessible area on a large scale; and the innovative use or application of new technological or organizational solutions.[xxv] This is a very broad obligation and may, in effect, amount to a general duty to conduct an HRIA when deploying digital technologies at scale.
Of particular note, however, was the Dutch Government’s recently published Fundamental Rights and Algorithms Impact Assessment (FRAIA), a discussion and decision-making tool for government organizations.[xxvi] The tool seeks to facilitate an interdisciplinary dialogue between those responsible for the development and/or use of an algorithmic system, and places the onus on the relevant government agency to conduct the HRIA.
Several opportunities going forward, which are also avenues for further research, were identified.
C - Other types of Impact Assessments deployed by States
The research identified several examples of bases on which governments conduct other types of impact assessments outside the context of digital technologies. The most prevalent type of IA identified in the research focused on the environment, in the form of Environmental Impact Assessments (“EIAs”) aimed at assessing the impact, mainly of infrastructure or building projects, on the environment, land or biodiversity (for example in Australia, Kenya and India).[xxvii] Aside from EIAs, the research also identified IAs conducted and used by governments in the health sector. It is important to note that the types of IAs identified mostly assess proposed activities, projects or actions of individuals and private companies, rather than of the government itself. In some cases, however, the relevant legal framework requiring (environmental) IAs extends to proposals for projects and actions made by both private and public bodies. Moreover, the underlying rationales for introducing these types of IAs may further serve as a basis for introducing a duty on governments to conduct IAs in the context of digital technologies (i.e., insofar as they also require the balancing of competing interests in complex policy areas and/or large numbers of identifiable rights holders are expected to be affected).
However, existing national experience with IAs also pointed to a number of challenges in their implementation.
Future research avenues
In the years ahead, as new digital technologies (in particular distributed ledger technology and generative AI) are deployed and regulated by States, it will be important to explore a number of open questions.
NOTES
[i] See Article 26 of the Vienna Convention; Articles 2 and 25 of the International Covenant on Civil and Political Rights; UN Human Rights Committee, General Comment 31, paras 6-8 and General Comment 25.
[ii] New Zealand Bill of Rights Act 1990 (New Zealand) s. 7.
[iii] Human Rights (Parliamentary Scrutiny) Act 2011 (Australia) ss. 8(1), (2), (3) and ss. 9(1), (2), (3).
[iv] Cabinet Office Cabinet Manual (New Zealand) at 7.65.
[v] Communication from the Commission on Application of the Charter of fundamental rights of the European Union, SEC 2001 (EU) 380/3.
[vi] The Human Rights Act 1998 (UK) s. 19, Gov.uk Guidance – Legislative process: taking a Bill through Parliament Government Guide to Making Legislation – June 2017.
[vii] Communication from the Commission on Impact Assessment, COM 2002 (EU) 276.
[viii] Organic Law 2009 (France) Article 8.
[ix] Royal Decree 931/2017 (Spain).
[x] Gov.uk Guidance – Legislative process: taking a Bill through Parliament Government Guide to Making Legislation – June 2017.
[xi] By way of Post-Legislative Scrutiny (PLS) and Post Implementation Review (PIR).
[xii] See e.g. the Inquiries Act 2015 (New Zealand) s. 7; Australian Human Rights Commission Act 1986; Constitution of October 4, 1958 (France) Article 61-1.
[xiii] Human Rights Act 1998 (UK) s. 4 and 7; and Constitution of October 4, 1958 (France) Article 61-1.
[xiv] See Treaty on European Union and the Treaty on the Functioning of the European Union 2012 (EU), Article 11(3).
[xv] Legislation Act 2003 (Australia) s 17.
[xvi] See the Constitution of Kenya, 2010 (Kenya) Article 10; and Wilfred Manthi Musyoka v County Assembly of Machakos; Governor - County Government of Machakos & 2 others (Interested Parties) [2019] eKLR, Kenya Human Rights Commission v Attorney General & another [2018] eKLR.
[xvii] European Commission, Better Regulation Toolbox, European Commission, 2021 (EU), Chapter 7.
[xviii] Cabinet Office Cabinet Manual (New Zealand) at 7.44 to 7.46.
[xix] In the United Kingdom, by convention, the government may invite stakeholder feedback on proposed policy or legislation before it is formally introduced to Parliament through the use of a Green or White Paper. White Papers are statements of policy, and often set out proposals for legislative changes, whereas Green Papers are consultation documents for stakeholders both inside and outside government.
[xx] In Spain, when a legislative initiative is proposed (and before any draft text has been prepared), the objectives of the proposal are published on the website of the relevant ministry to allow for any feedback. Once the legislation has been drafted, it is again published on the ministry’s website for feedback.
[xxi] In the Philippines, public participation in the legislation-making process is enabled where committees exercise their discretion to conduct public hearings.
[xxii] E.g. Privacy Impact Assessments in the Philippines (see NPC Circular 16-01 (Security of Personal Data in Government Agencies), 2016 (the Philippines) s. 5 and NPC Advisory No. 2017-03 (Guidelines on Privacy Impact Assessments), 2017 (the Philippines), Key Considerations, s. 4).
[xxiii] See e.g. Towards Responsible AI for All, NITI Aayog, February 2021 (India); Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence established by the European Commission, 2019 (EU); and Algorithm Charter for Aotearoa New Zealand, New Zealand Government, 2020 (New Zealand).
[xxiv] Kenya Data Protection Act 2019 (Kenya), S. 31.
[xxv] Government of Kenya, Office of the Data Protection Commissioner, Guidance Note on Data Impact Assessments, 2019 (Kenya), p. 7.
[xxvi] Netherlands Ministry of the Interior and Kingdom Relations, Impact Assessment Fundamental rights and algorithms, 2022 (the Netherlands).
[xxvii] See Environment Impact Assessment Notification 2006, issued under the Environment Protection Act 1986 (India); Environmental Protection and Biodiversity Conservation Act 1999 (Australia); and the Environmental Management and Coordination Act 1999 (Kenya).