The right to automated decision-making and data maximisation? No thanks!
Image by Gerd Altmann from Pixabay


Last week, a post by Omer Tene caught my attention. In it, he recommended an important and radical read on artificial intelligence policy by Orly Lobel, “The Law of AI for Good”. Tene provided two especially unorthodox quotes:

On privacy: “Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making.”
Instead (emphasis mine - A.T.) of a right to human-in-the-loop (GDPR Art 22) and the right to privacy (data minimization), Lobel proposes a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization).

When I saw these excerpts, my first gut reaction was: what a horrible thing to propose!

Then I read Lobel’s actual article.

Turns out, the author doesn’t propose the potential rights to data maximisation and automated decision-making to replace the existing legal entitlements (the principle of data minimisation and the right to human fallback), but to complement them. With this clarification, Lobel’s ideas appear more palatable. In the end, the more entitlements people have, the better, right?

I’m still not sure about that, because enshrining something in the law as a fully-fledged legal right may have many implications. Notably, it may give the impression that the practices of maximising data collection and putting humans out of the decision loop are societally beneficial and worth striving for.

But should we really strive to pursue data maximisation and automated decision-making without human involvement? I’m not convinced. Let’s consider each of these two ideas more closely.

Do we need the right to data maximisation?

Reasonable premise

How does Lobel defend her proposal to introduce the right to data maximisation?

She states (p. 49):

When biases stem from partial, unrepresentative, and tainted data, the solution may be the collection of more, rather than less, data

This is a reasonable observation. We may also take an example from NIST’s AI Risk Management Framework (AI RMF 1.0). Summarising the characteristics of trustworthy AI systems, the document notes that for some applications, some of these characteristics might be at odds.

Illustration from NIST AI RMF 1.0, p. 12

In line with Lobel’s observation, the NIST document (p. 12) indicates that, especially under conditions of data sparsity, an excessive focus on privacy enhancement may undermine an AI model’s accuracy and, as a result, reduce its fairness and affect other valuable characteristics.

Therefore, for each particular application, organisations may have to settle on appropriate trade-offs, balancing the best possible privacy against the best possible fairness and management of harmful bias.
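To make this trade-off concrete, here is a minimal numpy sketch of my own, not taken from Lobel or the NIST document: a nearest-centroid classifier whose class means are released with Laplace noise, a crude stand-in for a differentially private statistic. The class separation, sample sizes and privacy parameters are arbitrary assumptions; the point is only that the same privacy protection costs far more accuracy when the training data are sparse.

```python
# Toy illustration (made-up numbers): privacy noise hurts accuracy most
# when data are sparse, echoing the NIST trade-off discussion.
import numpy as np

rng = np.random.default_rng(0)

def accuracy_with_dp_noise(n_per_class: int, epsilon: float) -> float:
    """Nearest-centroid classifier whose class means are perturbed with
    Laplace noise, a crude stand-in for a differentially private release."""
    d = 5
    # Two Gaussian classes, one unit apart in every dimension
    X0 = rng.normal(0.0, 1.0, size=(n_per_class, d))
    X1 = rng.normal(1.0, 1.0, size=(n_per_class, d))
    # "Private" centroids: sample mean plus Laplace noise scaled by 1/(n * eps)
    scale = 1.0 / (n_per_class * epsilon)
    mu0 = X0.mean(axis=0) + rng.laplace(0.0, scale, size=d)
    mu1 = X1.mean(axis=0) + rng.laplace(0.0, scale, size=d)
    # Fresh test data
    T = np.vstack([rng.normal(0.0, 1.0, size=(2000, d)),
                   rng.normal(1.0, 1.0, size=(2000, d))])
    y = np.array([0] * 2000 + [1] * 2000)
    pred = (np.linalg.norm(T - mu1, axis=1) < np.linalg.norm(T - mu0, axis=1)).astype(int)
    return float((pred == y).mean())

for n in (20, 2000):          # sparse vs. plentiful training data
    for eps in (0.05, 1.0):   # strong vs. weak privacy protection
        print(f"n={n:5d}  epsilon={eps:4.2f}  accuracy={accuracy_with_dp_noise(n, eps):.3f}")
```

With these made-up numbers, accuracy stays near the noise-free level when data are plentiful, but degrades markedly for the small, strongly protected sample, which is precisely the tension NIST describes.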

But does it follow that the solution to this practical trade-off problem lies in the pursuit of data maximisation, conceptualised as a fully-fledged legal right? Not at all.

Faulty argumentation

Lobel argues her point by pointing to the EU’s AI and data regime as an ostensibly problematic example (p. 53):

Privacy is also among the top three principles of trustworthy AI by the EU. Strikingly, among these principles, no right for full and representative data collection is mentioned in the principles. The most pervasive privacy regulation, Europe’s GDPR, presumptively prohibits all data collection or use, unless such collection is within the allowable exceptions to the privacy rule.

Firstly, there is nothing wrong with not prioritising data collection in contemporary society as a matter of principle. Many decades ago, in “Privacy and Freedom”, Alan Westin showed us that humans cannot function properly without reasonable privacy. The need to preserve it has increased as new means of breaching privacy have become widely available. Eventually, the data minimisation principle became necessary to balance out the increase in technology-driven surveillance.

Secondly, by representing the EU’s legal regime as lacking the ability to support full and representative data collection, Lobel unfortunately misinterprets both the EU’s existing data protection law and the forthcoming AI regulation.

As for the former, the EU’s flagship General Data Protection Regulation (GDPR) expressly covers the necessary trade-off between privacy and the need to collect representative data. The data minimisation principle, as provided in art 5(1)(c) of the GDPR, requires the collected personal data to be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”.

As a result, if the purpose of processing necessitates collecting more data to make the available data as a whole more adequate, such collection is permissible.

The forthcoming AI regulation is also sufficiently permissive towards data collection.

Firstly, art 10 (3) of the draft AI Act provides that data which are used to build an AI system must be “relevant, representative, and to the extent possible error-free and complete”. These requirements are further clarified in that the data have to be representative of the population which will be affected by the AI system.

Secondly, art 10 (5) of the Act specifically authorises AI developers to process special categories of personal data to combat undesirable bias in the data sets they use to build AI systems.

Obviously, this implies that, contrary to Lobel’s assumptions, both EU data protection law and the upcoming AI regulation envisage “full and representative data collection”, exactly as she proposes, albeit without inventing a “new legal right” to “data maximisation”.

Hence, at least in the EU, there is no need for such unorthodox legal innovations.

Now, let’s proceed to the next agenda item.

Do we need the right to automated decision-making?

In some areas, automation surpasses human capabilities

Lobel claims that a policy envisaging the right to have a human as final decision-maker or replacement for an automated decision-making process is “myopic” (p. 43).

There is a grain of truth in this. Already today, in some contexts, automated systems do in fact fare better than humans. Lobel’s thinking mirrors some of my own thoughts about our tendency to focus on undesirable bias in automated systems while sometimes being dismissive of comparable or greater undesirable bias in humans.

But Lobel goes farther than that. Based on a few articles (pp. 45, 78), she seems to question the utility of having humans in the decision-making loop altogether, as these articles suggest that humans and automated systems, when combined, sometimes exacerbate each other’s deficiencies rather than reinforce each other’s strengths.

I don’t find this argument very convincing, as it runs counter to many other observations in the literature.

Human involvement offers a fail-safe

First of all, as noted by Amoroso and Tamburrini (page 253):

Contrary to what widespread recourse to anthropomorphic language may suggest, human and autonomous decision-making processes indeed remain qualitatively different and are likely to err in qualitatively different ways. The source of such qualitative differences lies in what AI experts call the “semantic gap.” This expression indicates the fact that machines do not perceive the world in the same way as humans.

Accordingly, involving humans in the automated decision-making loop serves as a well-recognised fail-safe mechanism which addresses the obvious fact that automated decision-making can and does occasionally fail, sometimes with disastrous consequences. This fail-safe mechanism doesn’t necessarily need to take the form of a human operator reviewing each particular operation. Depending on the area of application, there are other practical and effective ways to keep people generally in or on the loop of an automated process, one of which is sketched below.
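As a hypothetical illustration of such a fail-safe (the thresholds, field names and routing rules below are my own assumptions, not taken from the cited works), an automated pipeline can decide routine, high-confidence cases on its own while escalating high-impact or uncertain cases to a human reviewer and sampling a fraction of the rest for audit:

```python
# Hypothetical "human on the loop" routing: automate the routine cases,
# escalate the uncertain or high-impact ones, audit a sample of the rest.
import random
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    outcome: str        # e.g. "approve" / "deny"
    confidence: float   # the system's own confidence estimate, 0..1
    impact: str         # "routine" or "high" (e.g. affects legal rights)

def route(decision: Decision,
          confidence_floor: float = 0.9,
          audit_rate: float = 0.05) -> str:
    """Return where the decision goes: 'auto', 'human_review', or 'audit'."""
    if decision.impact == "high":
        return "human_review"        # humans decide anything high-stakes
    if decision.confidence < confidence_floor:
        return "human_review"        # fail-safe for uncertain cases
    if random.random() < audit_rate:
        return "audit"               # ongoing human oversight of the remainder
    return "auto"

# Example usage
d = Decision("case-42", "deny", confidence=0.72, impact="routine")
print(route(d))  # "human_review", because confidence is below the floor
```

The design choice here is that humans stay on the loop through escalation and periodic auditing, rather than by reviewing every single operation.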

On a broader level, such human participation is an element of meaningful human control over automation, which, according to Davidovich, has been consistently and repeatedly called for by governments and numerous scholars specialising in the field of automated decision-making. For more on this concept and its importance, see also this work by Beck and Burri and this article by Siebert and colleagues.

All of these important observations cannot be simply dismissed.

Trust in autopilots does not translate into trusting automation in society

Lobel further argues that (page 46):

a right and duty to automate are not only possible, but morally correct. To underscore this point: failing to acknowledge the possibility of legally prohibiting human decision-making under certain circumstances is a normative choice. It is a regulatory (in)action that may come at a serious cost.

In some contexts, especially where automated decisions concern operating machinery, this may sometimes be true. But even Lobel’s own argument pointing to aircraft autopilots as a gold standard (showcasing putting humans out of the loop during bad weather) artificially focuses on a part of what needs to be viewed as a whole. The industry still keeps pilots in all commercial aircraft, among other reasons precisely because humans are needed for flight safety.

In other contexts, where decisions are not about operating machinery but about creating legal or other consequences for people’s social lives, there is even less moral pressure for establishing a right and duty to automate away human involvement.

NIST’s AI Risk Management Framework document (page 40) rightly notes:

Many of the data-driven approaches that AI systems rely on attempt to convert or represent individual and social observational and decision-making practices into measurable quantities. Representing complex human phenomena with mathematical models can come at the cost of removing necessary context. This loss of context may in turn make it difficult to understand individual and societal impacts that are key to AI risk management efforts.

In other words, not all societal phenomena can be losslessly reduced to computable elements.

There are desirable biases which cannot be translated into computer code

Moreover, human decision-makers may exhibit not only undesirable biases, but also desirable ones, such as the bias of acting humanely, the bias of being driven by respect for human life, dignity and autonomy, as well as such notions as human rights and the rule of law.

In some instances, we can create machine-implementable proxies for certain elements of these desirable biases and value judgments. In other instances, we cannot, because they are not fully quantifiable and not fully translatable into computer code. Yet we consider them important for our decision-making.

To the extent automation, in some instances, can help us weed out the undesirable sort of human bias while preserving the opportunity to exercise the desirable sort, such automation could be welcomed. But we need to be mindful of additional complications.

Sometimes, human inefficiency may be a good thing

On the downside, automation in high-stakes decision-making implies the ability to harm more people in less time. It may also mean fewer opportunities to correct course, and the possibility of discovering surprising downstream complications only when it is too late to remediate them.

With all their flaws, modern liberal democracies based on the rule of law generally work. These are highly complex systems which found their way to equilibrium through hundreds of years of trial and error and the eventual establishment of adequate safeguards.

By automating away some of their human-based elements we now perceive as bureaucratic inefficiencies, we might be making these systems more efficient but inadvertently also less sustainable.

For example, by automating important societal governance processes, we also make them more susceptible to being co-opted by nefarious actors as soon as they gain political power or simply hack into critical societal infrastructure.

If we take automation and techno-solutionism to the extreme, it may well be that one day installing digital totalitarianism overnight may become as easy as distributing an over-the-air software update to a fleet of Teslas.

To sum up

Lobel’s article is a thought-provoking read. I studied it with pleasure, and some of its arguments and reminders resonate with my own thoughts. I do not claim to have covered everything in it that is worth discussing. Instead, I focused narrowly on the arguments which, to me, seemed most relevant for establishing the proposed “right to data maximisation” and “right to automated decision-making”. I did not find these arguments convincing.

In my view, data maximisation is an absurd notion in the era of ever-increasing technology-assisted surveillance. The data minimisation principle is paramount and does not preclude collecting adequate and relevant data to form representative data sets for AI systems.

While automated decision-making may be meaningful in many contexts, it must at all times be subject to meaningful human control. Such control does not mean that a human operator has to intervene in each automated operation, in all contexts, at all times.

Where governance in modern rule-of-law democracies relies heavily on human decisions, this is not necessarily bad: what we may perceive as inefficient relics of the past might at some point save us from an instant conversion to digital totalitarianism.


If you like my newsletter, please support me on Patreon. It means a lot to me. Thank you!

Vincent (The Ethical One?) Leguesse

Team-driven and motivated by unity, love, respect and peace as well as co-existence to create solutions that are everlasting and unify the universe that brings on perpetual state of peace and respect. We care.


A.I. Rights is the next chapter.

William Love

Sr. Counsel | Commercial & Corporate Attorney | AI Legal Ops


Aleksandr Tiulkanov That's a great read. I agree with your criticisms of Orly's article. The idea of data maximization being a goal is problematic at best. As an attorney, I can see exactly how this can be abused. Similarly, automated decision making is not particularly a great end game. Decisions are multifaceted and certainly aren't something I'd want to be stagnant. The law had similar issues quite a bit of time ago when causes of action were the main driving force within the law. There was a reactionary movement to judge things upon substance rather than form, so that we didn't judge things on technicalities rather than on substance. From what I've seen of automated decision making and the proposed methodologies for it, this tends to have it so that form rather than substance governs the issue. Substance is a tricky thing. It is not obvious and it is not well expressed enough for a "parrot" like AI, or an automated system, to confidently find substance through the massive amount of mess that is life. In short, excellent article. More people should read it and ignore my rambling.

Theodor Sachs Leschly

Open to Work - MENSCH Institute- Writes about: Legal Tech - Privacy - AI Ethics and Regulation


I'd like to run your argument through a bias checking algo, just to make sure you aren't guided by your priors. What's your AllMyData handle?

Julien Etienne

Helping people get rid of their tedious daily tasks through digital technology.


Totally agree about the risk of a digital totalitarianism... This is typically why humans have to keep control of the machine... We need to avoid AI bias that would shape a normative, restrictive and inflexible society... Otherwise, "Skynet" will soon be watching us... Humans have a huge advantage over AI: their humanity and their ability to solve non-normative problems. But AI can be a fantastic way to help them take shortcuts and make decisions more quickly.

Natasha Khramtsovsky

PhD, Senior Expert at Electronic Office Systems LLP


Sadly, very few professionals are willing to recognize that bias (of whatever kind and origin) cannot be eliminated entirely; it only can be managed and some of its aspects could be minimized. And, of course, ML-based AI systems accumulate both human knowledge and human biases :) What Omer Tene is actually talking about is a much broader issue of rights of individuals vs public good. And selecting one or the other is no good; a delicately balanced approach is needed. EU-leaning professionals prefer to ignore the quite evident fact that lately EU-style personal data protection was becoming more and more extreme, harming in the end the very individuals it pretends to protect! There was much more privacy in ancient times – do you really want to return there? :)