If automatic firing is like robots governing us…
Recently, a court case was brought against Uber for automatically firing certain drivers without the intervention of any employee. See https://ekker.legal/2020/10/26/uber-drivers-challenge-dismissal-by-algorithm/.
In ethics, the question of how automated systems might take over control of the world has been debated for more than fifty years. When Isaac Asimov published his stories about robots with independently thinking brains, he touched on conundrums that still occupy us today. For instance, if robots are as autonomous as we are, and one of them expresses a desire to live on its own, what would we do? According to Asimov, the guiding principles for decision making in robots would be:
1. A robot shall protect human beings.
2. A robot shall obey a human command unless it breaches principle 1.
3. A robot shall protect itself unless it breaches principle 1 or 2.
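These principles are essentially an ordered set of rules, where a lower-numbered principle overrides the ones after it. As a minimal sketch of that precedence (the class and field names are my own, and real conflicts are of course far subtler than three booleans), it could look like:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a robot is asked to evaluate."""
    harms_human: bool       # would carrying it out harm a human being?
    ordered_by_human: bool  # was it commanded by a human?
    harms_robot: bool       # would it damage the robot itself?

def permitted(action: Action) -> bool:
    # Principle 1: a robot shall protect human beings.
    if action.harms_human:
        return False
    # Principle 2: obey a human command, unless that breaches principle 1.
    if action.ordered_by_human:
        return True
    # Principle 3: self-preservation yields to principles 1 and 2.
    return not action.harms_robot
```

Note how a human command to do something self-destructive is still permitted: principle 3 yields to principle 2, which is exactly the kind of tension Asimov built his stories on.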
The essence of these principles is that robots shall autonomously support the world of human beings. There will be cases where this is not straightforward, leading to conflicts. It’s those situations that Asimov took as the starting point for his stories.
Asimov’s work did not just remain fiction; it became the basis for many philosophical debates and much academic work, which developed into a field called “ethics of artificial intelligence”. See for instance research at Utrecht University, MIT, and a publication in Harvard Business Review.
The argument in the new court case is that Article 22 of the GDPR forbids automated decision making. The article does not literally say this, nor was it intended that way, but one can ask a court to interpret it so. That is exactly what the lawyers have done.
The GDPR addresses such nuances by prefacing its articles with a set of recitals stating guiding principles. One of the first is that
“The processing of personal data should be designed to serve mankind.”
The issue is about accountability for actions taken within an organisation. Organisations and their customer contact desks cannot hide behind automated processes or standard rules. A life-changing decision needs to be signed off by someone in charge; it needs to be made consciously and the reasoning needs to be explained. A doctor has to explain why the hospital stopped treatment, a car manufacturer has to take responsibility for a faulty design, an internet provider has to defend its inaction in removing harmful online content.
This issue didn’t just arise with automation. We have long had to deal with street traffic accidents where, for instance, horses rearing or bolting were the cause of death or injury. (Even the famous Antoni Gaudí died from a street accident, struck by a tram.) While such accidents happened numerous times, see for instance this site, people never enacted laws to curtail the use of horses. We accepted them, and these accidents, as a part of our daily lives. The owners, however, could expect legal action if they were found not to have managed the situation well.
In medicine, we have used AI for years to support decision making, but it always requires a final decision by a doctor. Even though diagnostic systems can do better than human doctors (reference), it is still the responsibility of the doctor to discuss the outcome with the patient and make a plan. The fear of robots taking over the world is academic, because we only allow our machine learning and neural network systems to support our decisions; we don’t let them make the decisions for us.
“Robotic Process Automation” is a fancy name for the full end-to-end automation of a process. It doesn’t mean that mobile robots are involved; it means that the entire process is governed by an automated system one could call a robot. Certain administrative processes can be executed with no human input because every step can be automated. Take for example buying a can of cola from a machine: the buyer selects the desired flavour, holds a bank card in front of a reader, the machine executes a transaction with the bank and then dispenses the can. There are even processes without a user: a copier runs out of toner and signals this to the vendor. The required toner cartridge is added to a transport queue for the logistics team and the cost is charged to the account of the company operating the copier. With automatic payment, the money is also deducted from their bank account and added to the vendor’s. All we would need is a drone to fly the cartridge to the right address and a robot arm to slot it in, and the process would be completely automated from end to end.
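The toner example above can be sketched as a chain of plain function calls with no human involved anywhere; that is all "robotic process automation" really is. All names and the price here are my own illustrations, not a real vendor's API:

```python
# Hypothetical end-to-end automation of the toner-reorder process:
# the copier's low-toner signal triggers the whole chain, and every
# step is software, so no human touches the process at any point.

def reorder_toner(copier_id, toner_model, shipping_queue, invoices):
    """Run the reorder chain end to end for one low-toner signal."""
    # Step 1: queue the cartridge for the logistics team.
    shipping_queue.append((toner_model, copier_id))
    # Step 2: charge the operating company's account automatically.
    invoices.append({"copier": copier_id, "item": toner_model,
                     "amount": 79.0})

# A single low-toner signal drives the entire process:
queue, invoices = [], []
reorder_toner("copier-42", "TN-221", queue, invoices)
```

The point is not the trivial code but the shape: once every step is a call like this, the question of who is accountable for the outcome becomes exactly the question the rest of this article is about.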
At some point, automation becomes a burden for the stakeholders, the people who need something from the process. When you want to order a piece of clothing from the internet and the system isn’t letting you, even though you believe you are creditworthy and the item is in stock, things become frustrating. These days, cost reduction is so important that online shops don’t want to keep service staff available for answering marginal requests. As a client, you want to be helped: you want to talk to someone, file a complaint, get someone to explain why it isn’t working. This matters even more when you are not buying an item of clothing but applying for disability or life insurance. The frustration is nicely captured in the Little Britain sketches where the administrative clerk shows absolute disinterest in the client and responds “Computer says no.”
After fifteen years of the internet and its related abuse, the EU decided to enact a law that would protect people from organisations mishandling information that relates to them personally. A few examples:
- Financial organisations would gather information to be able to determine whether a prospective client would be a risk.
- Hackers would find, download and distribute personal data for money; such information is valuable to, for instance, marketing agencies, and of course to scam artists who need it to trick people into sending them money.
- Businesses would retain information on anyone who ever contacted them so that they could offer them new products later on.
- Businesses would approach customers with additional products based on the subscription information they held on them.
- People, mostly famous people, would be embarrassed by personal information being distributed: sex videos, naughty pictures, embarrassing texts, or compromising information about, for instance, politicians.
- Credit card data would be stolen from systems lacking sufficient protection and then used either to take money from the accounts or to order products in other people’s names.
Article 22: Automated individual decision-making, including profiling
A special article in the GDPR is Article 22. It specifically provides that people should not suffer the adverse effects of processes automated to the extent that no human is involved in the decision making, where the only response an individual can expect when asking why they could not receive what they need is “Computer says no.”
Originally, this article was there to prevent cases where an individual would object to their data being mishandled and the organisation processing the data would defend itself by saying it cannot change anything because the process is fully automated. Organisations often take this line of defence because an individual has a very small case to make and there will always be other people who accept the circumstances; one fewer customer makes no difference. The same happens in employment contracts: “We cannot change this article because it is part of our standard contract.” Bureaucracy is an age-old problem, and according to Eugene McCarthy we can only be saved from it by its inefficiency.
A special provision in the GDPR that precedes this is that the individual needs to be informed about the way their data is handled, including whether this is done in a fully automated fashion. At first sight this is a good thing: when data is handled only by computers, no operator is involved who could use it to their own advantage. It sounds pretty safe to have medical information moved from one medical facility to another without any risk of human failure along the way. In this court case, however, the reasoning goes the other way: it is not fair if decisions that affect the individual are made without a human being making sure the human factor was taken into account.
In this particular case, the organisation and the employee entered into a contract, and we can assume they did so willingly, meaning the conditions were clarified in the employment contract. Given that the GDPR, through Article 13, requires full disclosure of how data about an individual is handled, the employee would have known how the termination process works. Whatever the grounds, valid or invalid, it can be argued that the employee accepted the method of handling terminations. In fact, a prospective employee could initially feel rather secure knowing that certain decisions in this process are based on fully verifiable steps and information, precisely because it was implemented as an automated process. There is nothing more frustrating than being handled hypocritically by an organisation where a person decides to terminate a contract based on their own personal feelings.
Article 22 explicitly states that an automated decision is allowed if it “is necessary for entering into, or performance of, a contract between the data subject and a data controller”. This further supports the case for automating the process in order to eliminate human fickleness. This provision, too, was meant for something else: Recital 71 of the GDPR explains that it was meant to protect people from automated ‘sniffers’ and wide-net fishing for information, resulting for instance in the automatic rejection of a credit card application because your Facebook profile shows expensive new gadgets every week, or an increased car insurance premium because of your many pictures of dancing on tables at parties.
Whether the termination process indeed involved no human activity, and whether no human accountability was exercised when the process was defined and implemented, is beyond the scope of this article; that is for the plaintiff to argue and the defending party to contest. The case is the case, but invoking the GDPR to argue an unlawful termination is definitely a first as far as I know.
Lawyers and judges will have to figure out how the GDPR can be reinterpreted for new applications like this one. Whether any individual was involved in the decision-making step is for Uber to explain. The question is not whether certain steps were performed manually, but whether the final decision was made by a human being. By analogy with the insurance case, one could imagine a manager saying: “According to the system, you are quite a high risk, but given the circumstances I am going to approve this application.” The manager might ask for a special condition to be added to the contract; that is the creativity only human beings can display, and it allows for the human factor in life-changing decisions. The opposite is also possible: “Although the system calculating all the stress factors on this building reports that it is safe, I am not going to approve your design, because I still think it will not be able to handle the specific natural conditions at the site where we want to build.” Imagine the building later collapsing and all we can say is that “the system said it was ok.”
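The pattern described here, a system that only recommends while a named human makes and signs the final decision, can be sketched in a few lines. The threshold, names, and record fields below are illustrative assumptions, not any real insurer’s system:

```python
# A minimal human-in-the-loop sketch: the system produces a recommendation,
# but a named person decides, and the audit record captures both, so the
# answer to "why?" can never be just "computer says no".

def system_recommendation(risk_score):
    """Automated advice only; it never decides by itself."""
    return "reject" if risk_score > 0.8 else "approve"

def final_decision(risk_score, reviewer, override="", reason=""):
    recommended = system_recommendation(risk_score)
    decision = override or recommended   # the human may follow or overrule
    return {"recommended": recommended,
            "decision": decision,
            "signed_off_by": reviewer,   # a person stays accountable
            "reason": reason or "followed system recommendation"}

# The manager from the example above, overruling a high-risk score:
record = final_decision(0.9, reviewer="A. Manager", override="approve",
                        reason="high risk, but circumstances justify approval")
```

The design choice that matters is that the record always names the accountable human and their reasoning, which is exactly what a fully automated termination process lacks.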
Applying the principles of ethics to the world in general is a good thing by definition. What we can learn from AI in medicine is that every process should be governed by human beings, even if all of the execution is automated. Where automation brings cost reduction and standardisation, it can also lead to impersonal and bureaucratic handling of people’s interests. We need to find out how to balance the two.