European Union AI Act. Ethical Guidelines or merely procedural principles of systems functionality?

Whether the future is singular and to be colonized, or plural and collectively co-designed, is the implicit issue these days, in line with the meeting of October 24th concerning the current negotiation on the European Union's AI Act, a landmark proposal on track to be adopted by the end of 2023 or early 2024. At the UN level, António Guterres has just launched a high-level advisory body on AI.

Stakeholders are pushing the trilogue members on artificial intelligence to speed up approval of the drafted regulation on AI governance (and to push forward its efficient output), so that their demands are finalized and reflected in the legislation. For instance, stakeholders' demands on transparency, such as disclosing that content is AI-generated and publishing summaries of copyrighted data used for training AI systems, are being promoted. The drafted AI rules have to be agreed by the European Parliament and European Union member states. They have so far been discussed more than three times in trilogues, the meetings between Parliament and EU states to thrash out the final versions of laws. Conditions for new systems such as ChatGPT are in view during the negotiations with the Commission and the Council to agree on the final text. However, it is crucial to distinguish high-risk AI from other, simpler software systems or capability sets. AI for citizen "scoring" purposes and AI exploiting "the vulnerabilities of specific groups" are unethical and, hopefully on top of that, potentially illegal.


Let me get straight to the point. Accountability, transparency, explanatory power (the obligation to provide explanations for data use), and interoperability ARE NOT ETHICS. THEY ARE MERELY PROCEDURAL PRINCIPLES. (A short detour on top of that: via automated systems, citizens are often merely nudged to confirm default decisions, ticking the boxes of their informed consent; unfortunately, this leaves out ethicopolitical views as well as digitally illiterate populations, automating inequality. But let's not switch to another issue.)

Let me scrutinize what is at stake here. I strongly advise that the negotiated framework of the drafted Act's guidelines be framed as divided, both for the purposes of analysis and for policy intervention suggestions, into two separate registers:

a) procedural principles that govern the functionality of systems, such as accountability, transparency, explanatory power, interoperability, and the management of bias; and

b) ethical principles that prospectively constitute and harmonize, such as autonomy and respect for the dignity of persons; privacy and rights against improper management by third parties; the principle of no harm and of benefit; the promotion of group welfare; the avoidance not only of the 'digital divide' but also of the wider social consequences of algorithmic bias and of the creation of automated inequalities in the distribution of benefits; solidarity and a sense of belonging; justice; accountability in creating safeguards for human control of automation in algorithmic decision-making; and the integration of ethical standards from the design process of the technological systems produced.

The standard, by-default key principles for the use of generative AI (which, for me, are inadequate) are accountability, transparency and independent oversight.

Accountability. Humans must remain in the loop to evaluate the quality of generated content; for example, to replicate results and identify bias. Although low-risk use of generative AI — such as summarization or checking grammar and spelling — can be helpful in scientific research, it would be better to advocate that crucial tasks, such as writing manuscripts or peer reviews, should not be fully outsourced to generative AI.

Transparency. Researchers and other stakeholders should always disclose their use of generative AI. This increases awareness and allows researchers to study how generative AI might affect research quality or decision-making. In my view, developers of generative AI tools should also be transparent about their inner workings, to allow robust and critical evaluation of these technologies.

Independent oversight. External, objective auditing of generative AI tools is needed to ensure that they are of high quality and used ethically. AI is a multibillion-dollar industry; the stakes are too high to rely on self-regulation.


However, there is more ethics and politics to be considered on top of the above.

- Some stakeholders in the debate are starting to consider that, in the long run, it is unclear whether mere legal restrictions or self-regulation will prove effective. AI is advancing at breakneck speed in a sprawling industry that is continuously reinventing itself. Regulations drawn up today will be outdated by the time they become official policy, and might not anticipate future harms and innovations. Some reiterate that copyright is not the only prism through which reporting and transparency requirements should be seen in the AI Act. Greater openness and transparency in the development of AI models can serve the public interest and facilitate better sharing by building trust among creators and users. As such, they generally support more transparency around the training data for regulated AI systems, and not only around training data that is protected by copyright.

- Some stakeholders promote a proportionate approach. They are keen to support a proportionate, realistic, and practical approach to meeting the transparency obligation, one that would place less onerous burdens on smaller players, including non-commercial players and SMEs, in order not to stifle innovation in AI development. Too burdensome an obligation on such players may create significant barriers to innovation and drive market concentration, leaving the development of AI to occur only within a small number of large, well-resourced commercial operators.

- Other stakeholders deal with the lack of clarity on the scope and content of the obligation to provide a detailed summary of the training data. According to them, AI developers should not be expected to literally list out every item in the training content.

- Other stakeholders insist that the core elements of the legislative proposal, which classify AI systems into four risk categories, are not enough with respect to the demanded ethical principles. I recap these risk categories below (a minimal classification sketch follows the list):

The first category (unacceptable risk) is banned outright and includes systems like social scoring that manipulate social behaviour in ways that may be harmful to human rights and development.

The second category (high-risk) includes data governance issues, human oversight, law enforcement uses, and usage related to critical infrastructures, and essential private and public services.

The third category (limited risk) includes AI systems such as chatbots (e.g., ChatGPT) and imposes transparency and disclosure requirements on them.

The fourth risk category (minimal or no risk) is afforded near unlimited use of AI, so long as the technology remains classified as minimal risk.
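
To make the four-tier taxonomy easier to grasp, here is a minimal, hypothetical sketch of how it might be represented in code. The category names follow the list above, but the example use cases, the mapping, and the one-line obligation summaries are my own illustrative assumptions; the actual Act classifies systems through legal definitions and annexes, not a lookup table.

```python
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers of the drafted AI Act (illustrative labels only)."""
    UNACCEPTABLE = "unacceptable risk"   # banned outright, e.g. social scoring
    HIGH = "high-risk"                   # e.g. critical infrastructure, law enforcement
    LIMITED = "limited risk"             # e.g. chatbots, transparency obligations
    MINIMAL = "minimal or no risk"       # near unlimited use

# Hypothetical mapping from example use cases to tiers; purely illustrative.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskCategory.UNACCEPTABLE,
    "CV screening for employment": RiskCategory.HIGH,
    "general-purpose chatbot": RiskCategory.LIMITED,
    "spam filter": RiskCategory.MINIMAL,
}

def obligations(category: RiskCategory) -> str:
    """Very rough, non-authoritative summary of the obligations per tier."""
    return {
        RiskCategory.UNACCEPTABLE: "prohibited",
        RiskCategory.HIGH: "conformity assessment, data governance, human oversight",
        RiskCategory.LIMITED: "transparency and disclosure requirements",
        RiskCategory.MINIMAL: "no specific obligations",
    }[category]

if __name__ == "__main__":
    for use_case, cat in EXAMPLE_CLASSIFICATION.items():
        print(f"{use_case}: {cat.value} -> {obligations(cat)}")
```

The point of spelling it out this way is to show how coarse the tiers are: everything ethically interesting happens in deciding which tier a system falls into, which is exactly where the Act allows fluidity, as discussed next.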

What does the proposed AI Act propose on this? The AI Act seems to allow fluidity in the classification and re-classification of given activities into the respective risk categories, enabling the legislation to adapt to the evolving landscape of AI systems. Furthermore, it would be a nice minor measure to consider establishing a European Artificial Intelligence Board to facilitate EU Member States' adherence to the incoming regulations.


However, on top of the above, let's raise the bar higher in order to secure democratic distributions of resources and access architectures. The first thing that comes to mind is that individuals affected by the use of AI systems should be provided with the right to lodge a complaint before a competent authority in case providers and users of AI systems infringe the AI Act. Also, AI raises more complex normative questions than technical standards typically tackle. The AI Act requires identifying and mitigating risks to fundamental rights. In the EU, these include non-discrimination and equality between men and women, for example; how is this rephrased, for instance, into accuracy and bias requirements for an employment decision tool (a toy illustration of such a rephrasing follows below)? And should standards even attempt to make this rephrasing? Currently, the AI Act gives no guidance on how fundamental rights can be protected via technical standards. I strongly believe that past standardization efforts have avoided addressing contested normative questions in a substantive manner, essentially kicking the can down the road and leaving industry to decide for itself which normative decisions are appropriate to include in standards.
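
To make the difficulty concrete, here is a minimal, hypothetical sketch of one way "non-discrimination" could be rephrased as a measurable bias requirement for an employment decision tool: the demographic parity gap between selection rates across groups. The function, the toy data, and the choice of metric are my own illustrative assumptions; neither the AI Act nor the standards bodies prescribe this formula, which is precisely the gap the argument points to, and the metric deliberately stops short of saying which gap is "acceptable".

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest gap in positive-decision rates between groups.

    `decisions` is a list of (group_label, hired_bool) pairs. A gap of 0.0
    means all groups are selected at the same rate; larger values indicate
    a bigger disparity. Which gap (if any) counts as 'discriminatory'
    is a normative question the metric itself cannot answer.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        positives[group] += int(hired)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy, purely illustrative screening outcomes (group label, hired?).
    toy_decisions = [("A", True), ("A", True), ("A", False),
                     ("B", True), ("B", False), ("B", False)]
    gap, rates = demographic_parity_gap(toy_decisions)
    print({g: round(r, 3) for g, r in rates.items()})
    print(f"parity gap: {gap:.3f}")
```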

As an alternative proposed solution, I strongly advocate the view that standards could require “ethical disclosure by default” and focus on a minimum set of metrics rather than setting thresholds, informing stakeholders and allowing them to decide what is acceptable. We have to answer the hard normative questions ourselves: delegating them to standardisation would raise concerns about democratic legitimacy, because standardisation is largely a technical discourse and tends to exclude non-expert stakeholders and the public at large. Check a well-versed summary of the view here (a minimal sketch of what disclosure by default could look like follows the link):

( https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4365079).
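
As a toy illustration of what "ethical disclosure by default" could look like in practice, here is a minimal sketch of a disclosure record that reports a fixed set of metrics without attaching any pass/fail thresholds, leaving the acceptability judgement to regulators, deployers, and affected stakeholders. The field names, the chosen metrics, and the example values are hypothetical choices of mine, not anything prescribed by the AI Act or by the paper linked above.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EthicalDisclosure:
    """A disclosure-by-default record: metrics are reported, not judged.

    No field encodes a threshold or a verdict; deciding what is acceptable
    is deliberately left outside the standard itself.
    """
    system_name: str
    intended_purpose: str
    training_data_summary: str            # e.g. provenance and licensing of corpora
    accuracy_overall: float               # aggregate performance metric
    parity_gap_by_attribute: dict         # e.g. fairness metric per protected attribute
    energy_use_kwh_per_1k_queries: float
    human_oversight_mechanism: str

def publish(disclosure: EthicalDisclosure) -> str:
    """Serialize the record so stakeholders can inspect and compare systems."""
    return json.dumps(asdict(disclosure), indent=2)

if __name__ == "__main__":
    example = EthicalDisclosure(
        system_name="toy-screening-model",
        intended_purpose="pre-screening of job applications",
        training_data_summary="hypothetical public CV corpus, licensing documented",
        accuracy_overall=0.87,
        parity_gap_by_attribute={"gender": 0.12, "age_band": 0.08},
        energy_use_kwh_per_1k_queries=3.4,
        human_oversight_mechanism="recruiter reviews every automated rejection",
    )
    print(publish(example))
```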

When identifying potential impacts of AI on fundamental rights, I think you should also resist the temptation to focus only on the issues with the highest public and political visibility, such as bias and discrimination cases, while neglecting other rights and freedoms. Individual rights, such as the protection of an individual’s health data, may come into tension with society’s interest in effective healthcare. Balancing these involves thorny normative judgements.

And last but not least: the forward-looking, plural-futures world opened up by AI has made it compelling, and has established a foothold among stakeholders, that the debate on redefining the concept of intellectual property from the perspective of the Commons has already been implicitly opened (even to the point of being included in copyright law). What foothold? That of not merely introducing centralized governmental prohibitions on the use of AI innovations and their products, but instead allowing their free use by citizens and companies, on one condition: under a very specific Creative Commons license. This is because, realistically speaking, bans and prohibitions are in practice often violated and cannot be enforced against everyone, but also because all these products, before they were appropriated by companies, were created by commons of researchers. The license I have in mind is the Attribution-NonCommercial-NoDerivatives license, known by the acronym 'BY-NC-ND': roughly, everyone may use and share the product with attribution, but commercial use and derivative works are not permitted. There are policy areas where this can be applied to promote the public good.
