AI, Covid-19, Privacy Rights and Data Sharing: finding balance amongst Individual, Public, Corporate and Societal Interests in the Digital Age
All rights, including copyrights, image rights and intellectual property rights, reserved. I am not the owner of any of the images presented here.


(…) Thoughtcrime, they called it. Thoughtcrime was not a thing that could be concealed forever. You might dodge successfully for a while, even for years, but sooner or later they were bound to get to you. It was always at night. The sudden jerk out of sleep, the rough hand shaking your shoulder, the lights glaring in your eyes, the ring of hard faces round the bed. In the vast majority of cases there was no trial, no report of the arrest. People simply disappeared, always during the night. Your name was removed from the registers, every record of everything you had ever done was wiped out, your one-time existence was denied and then forgotten. You were abolished, annihilated: vaporized was the usual word. (…)  

The excerpt above comes from a classic dystopian novel about a society controlled by the so-called "Big Brother". It was written in 1948, after the Second World War. You probably know it: Nineteen Eighty-Four, by George Orwell, a masterpiece that raised concerns about totalitarian regimes, mass surveillance, violations of freedom of speech and other restrictions of rights. A number of the practices the book describes, with nuances, were unfortunately not confined to the realm of science fiction and happened in reality. Beyond the government security apparatus of opinion control, propaganda and subtle techniques for controlling the masses (which have existed for a long time), the tools of the new digital age are allowing public and private actors to increase data extraction from populations for various reasons, which nowadays can be summed up in two main clusters: i) public safety (on national security and sanitary grounds, especially after the COVID-19 pandemic) and ii) personalization of experience (which gives users incentives to give away or share their personal data through the continuous use of digital platforms and tools). Never before have state agencies and Big Tech companies acted so closely or shared such similar interests as they do today. Not to mention social media platforms, which can spread (mis)information very quickly, in ways that do not depict situations reasonably or accurately. "Cancel culture" is one of the practices that most reminds us of the dangers of collectivism versus individual freedom. Marketing techniques combined with fast-scaling network effects can create hysteria and herd behavior, similar to what happened when Orson Welles's radio adaptation of "The War of the Worlds" aired in 1938.

Restriction of rights is a topic I have been interested in since I graduated in Law, in 2004, in my hometown in Rio Grande do Sul, the southernmost state of Brazil. At the time, I wrote about the restrictions brought by a piece of US federal legislation known as the Patriot Act, enacted after the terrorist attacks of September 11 (9/11): detention for unlimited periods, arrests without a specific accusation, crimes with very broad and open definitions. Communications could be intercepted on national security grounds. Fast forward to the present, 20 years later, and the digitalization of society is reaching levels never seen before: by some estimates, 90% of the world's data was created in the last two years alone.


All the data flowing through cloud and communication systems, virtual marketplaces and so on is far from useless. It is extracted and used to guide a number of decisions that can restrict people's basic rights. As KIRSTEN MARTIN (2019) writes: "Algorithms silently structure our lives. Algorithms can determine whether someone is hired, promoted, offered a loan, or provided housing as well as determine which political ads and news articles consumers see." We are living, as SHOSHANA ZUBOFF (2019) argues, in an age of surveillance capitalism, which she defines, in her own words, as:

1. A new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales; 2. A parasitic economic logic in which the production of goods and services is subordinated to a new global architecture of behavioral modification; 3. A rogue mutation of capitalism marked by concentrations of wealth, knowledge, and power unprecedented in human history; 4. The foundational framework of a surveillance economy; 5. As significant a threat to human nature in the twenty-first century as industrial capitalism was to the natural world in the nineteenth and twentieth; 6. The origin of a new instrumentarian power that asserts dominance over society and presents startling challenges to market democracy; 7. A movement that aims to impose a new collective order based on total certainty; 8. An expropriation of critical human rights that is best understood as a coup from above: an overthrow of the people's sovereignty.


The eightfold definition above raises serious concerns; it almost instills a sense of urgency and panic, which has been further fueled by the current COVID-19 pandemic. In this context, the main question businesses should ask is: how to manage these data sets in a way that is i) reliable, ii) trustworthy, iii) quick to access and, iv) last but not least, compliant with legislation?

The first three requirements relate mostly to the technical abilities of data scientists, software developers, marketing analysts, new-business developers and operation flow managers, and to data lakes, data warehouses and IT infrastructure. Nevertheless, in a world where competition is harsh and business and investment decisions are taken within milliseconds or even nanoseconds (in the case of high-frequency trading, for example, see LEWIS, 2015), regulators and compliance, risk and legal experts should also step in to create a balance among business purposes, state objectives and fundamental rights, as it is a fact of life that a number of algorithmic and machine learning models have proven biased, as the sketch below illustrates.
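
To make concrete how such a model can silently gate an outcome like a loan offer (the situation KIRSTEN MARTIN's quote describes), here is a minimal, entirely hypothetical sketch; the fields, weights and threshold are invented and stand in for whatever a real scorer might use:

```python
# A hypothetical credit-scoring rule: a handful of weighted inputs
# silently decide whether an applicant is ever shown a loan offer.
# All fields, weights and the threshold are invented for illustration.

def loan_score(applicant: dict) -> float:
    """Toy linear score over normalized inputs in [0, 1]."""
    return (
        0.4 * applicant["income_norm"]           # normalized income
        - 0.3 * applicant["debt_ratio"]          # debt / income
        + 0.3 * applicant["years_employed_norm"]
    )

def decide(applicant: dict, threshold: float = 0.35) -> str:
    # The applicant never sees the score or the threshold:
    # the decision simply happens to them.
    return "offer" if loan_score(applicant) >= threshold else "reject"

alice = {"income_norm": 0.8, "debt_ratio": 0.2, "years_employed_norm": 0.5}
bob = {"income_norm": 0.5, "debt_ratio": 0.6, "years_employed_norm": 0.2}
print(decide(alice), decide(bob))  # offer reject
```

If a field such as debt_ratio happens to correlate with a protected attribute, a rule like this can disadvantage a group without any field ever naming it.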

For instance, in the Netherlands, the Hague District Court ruled that an algorithmic system (SyRI) used to detect welfare fraud was in breach of human rights: it was deployed in poor ("problem") neighborhoods, collecting data variables that were supposed to remain segregated, coming from different agencies, such as i) employment, ii) personal debt, iii) benefit records, iv) education and v) housing histories, and analyzing them by means of a "secret algorithm" to identify which individuals might be at higher risk of committing benefit fraud. Similar examples of biased algorithms range from the use of AI tools to inform criminal sentences, the so-called COMPAS algorithm, which takes into account the probability of recidivism of the offender (more details here https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm), to "PredPol", which tries to anticipate where crimes will occur (more details here https://www.latimes.com/california/story/2019-10-15/lapd-predictive-policing-changes).
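
The ProPublica analysis linked above essentially compared error rates across demographic groups. A minimal sketch of that style of audit, assuming a binary "high risk" prediction and a binary reoffense outcome (the records below are made up for illustration):

```python
# Sketch of a group-wise error-rate audit in the spirit of the
# ProPublica COMPAS analysis. All records below are invented.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("B", False, False), ("B", True, True),
    ("B", False, False), ("B", False, True),
]

false_pos = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, predicted, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
# group A: false positive rate = 0.67
# group B: false positive rate = 0.00
```

Markedly unequal false positive rates across groups, which is what ProPublica reported for COMPAS, are one common signal of a biased model.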

These examples indicate that laws and regulations (which normally lag behind the deployment of innovations) work as a sort of ex post buffer against the so-called algocracy (DANAHER, 2016) of the economy, politics, rights and, at its core, the human experience itself. It is as if one of the most important purposes of law and regulation, at the current time, were to prevent the creation of a self-contained social sub-system that could reconfigure society by itself (hence the danger of advanced machine learning techniques and the menace of opacity, meaning that decision-making by these AI techniques would become incomprehensible to humans). Along these lines, the questions raised by CHRISTOPHER PHILIP MARKOU (2017:247) are relevant and important to the field of Artificial Intelligence: i) do we allow industry to self-regulate, thus ceding ground to the invisible hand of the market? ii) do we prohibit the development of AI along certain lines? iii) can we program AI to only ever do things we think are "good", or equip systems with literal or material "off" buttons to forestall harms? iv) or do we allow government to take the lead and establish strong centralized regulation to oversee it?
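
The opacity worry in the parenthesis above can be made tangible with a small scikit-learn sketch on synthetic data (an illustration of scale only, unrelated to any system discussed here): a linear model's reasoning can be read off a handful of coefficients, while an ensemble of hundreds of trees offers no comparably compact account of any single decision:

```python
# Contrast in inspectability: a linear model versus a tree ensemble.
# Synthetic data; requires scikit-learn (pip install scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

linear = LogisticRegression().fit(X, y)
print("linear model:", linear.coef_.round(2))  # four readable weights

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
nodes = sum(tree.tree_.node_count for tree in forest.estimators_)
print(f"forest 'explanation': {nodes} decision nodes across 300 trees")
```

Post-hoc tools such as feature attributions mitigate this gap, but they approximate rather than reproduce the model's actual logic.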

Though these questions will stay with us for a long time, one of KRANZBERG's so-called "laws of technology" ("technology is neither good nor bad; nor is it neutral") is a good overarching principle to keep in mind and to guide us. We cannot be neutral. There must be regulation "on the go". As a result, a number of important judgments and regulatory instruments have come into place. The issuance of the EU GDPR in 2018 was only the beginning of a new ecosystem for the enhancement of data and privacy rights and of the perception that data (the new gold) may be invisible to us, yet it flows across cloud systems and is much more dynamic than tangible assets.

By the same token, the Schrems II decision, recently issued (July 2020), is important: it relates to Facebook's holding of the data of a citizen of an EU Member State and the transfer of such data outside Europe. It invalidated the so-called EU-US Privacy Shield framework, which a number of US companies used to engage in trade and commerce with Europe. What is the adequate level of data protection under the GDPR? How can it be assessed? What additional measures are needed to allow such transfers and to prevent access to this data by state agencies in other countries, for example? Data localization, cloud restrictions and the need for encryption of data are all relevant aspects to take into account.
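
As one illustration of the kind of "additional measure" discussed in the wake of Schrems II, a data exporter can encrypt records on its own side before they ever reach a foreign cloud, keeping the key within its jurisdiction. Below is a minimal sketch using the Python cryptography package (symmetric Fernet encryption); the record content is invented, and the sketch illustrates the concept only, not GDPR adequacy:

```python
# Minimal sketch of client-side ("hold your own key") encryption as a
# supplementary measure for cross-border data transfers.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

# Key generated and retained inside the exporter's jurisdiction;
# only ciphertext ever crosses the border.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"name": "Max M.", "email": "max@example.eu"}'  # invented data
token = cipher.encrypt(record)

# A cloud provider abroad (or an authority compelling it) sees only this:
print(token[:40], b"...")

# Decryption remains possible only where the key is held.
assert cipher.decrypt(token) == record
```

Whether such measures suffice in a given transfer is a legal question, but technically they ensure that compelled disclosure abroad yields only ciphertext.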

In an ever more connected world, there is no way back; but we can at least try to shape the way AI and data extraction are embedded in society. The European Union has listed important guidance for achieving trustworthy AI, built on a human-centric approach and taking into account i) human agency and oversight; ii) robustness and safety; iii) privacy and data governance; iv) transparency; v) diversity, non-discrimination and fairness; and vi) accountability. If we follow these guidelines, basically establishing what has already been coined sustainable data science, many important quality-of-life improvements can be achieved. As TAYLOR and PURTOVA (2019) observe, the direction is toward promoting a "mutually beneficial interaction between data science and society". It must be highlighted that the incremental use of AI tools to run operational routines and to engage in decision-making demands a level playing field, as the risks of biased algorithms are a reality (discrimination, distorted databases, "black box" algorithms). This can be done through regulation. Artificial Intelligence cannot become a "weapon of math destruction". In the book of the same name, CATHY O'NEIL (2016) states that

Data is not going away. Nor are computers, much less mathematics. Predictive models are, increasingly, the tools we will be relying on to run institutions, deploy our resources, and manage our lives (…) these models are constructed not just from data but from the choices we make about which data to pay attention to – and which to leave out. Those choices are not only about logistics, profits and efficiency. They are fundamentally moral. If we back away from them and treat mathematical models as a neutral and inevitable force, like the weather and the tides, we abdicate our responsibility.
[Image: a man in front of a big digital wave]

That is our responsibility: to prevent the AI and data tsunami waves now arriving on our shores from destroying our sense of morality, humanity and reasonableness, by implementing specific regulatory guidelines, restrictions and incentives, including a civil liability regime for Artificial Intelligence, something already approached at the European Union level by BERTOLINI (2020) and EVAS (2020). Last but not least, the right to be forgotten, the right to explanation and the right to audit are of the utmost importance if society and people are not to become prisoners of AI and of powerful stakeholders in the public and private domains.
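
The "right to explanation" mentioned above is often operationalized by attaching to each automated decision the factors that most influenced it. A hypothetical sketch for a simple linear scoring model, reusing the invented feature names and weights from the loan example earlier in the text:

```python
# Hypothetical sketch of a per-decision explanation for a linear scoring
# model: each feature's signed contribution to the score, largest first.
# Feature names and weights are invented for illustration.

WEIGHTS = {"income_norm": 0.4, "debt_ratio": -0.3, "years_employed_norm": 0.3}

def explain(applicant: dict) -> list:
    """Rank features by the magnitude of their contribution to the score."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income_norm": 0.5, "debt_ratio": 0.6, "years_employed_norm": 0.2}
for feature, contribution in explain(applicant):
    print(f"{feature:22s} {contribution:+.2f}")
# income_norm            +0.20
# debt_ratio             -0.18
# years_employed_norm    +0.06
```

An auditor exercising the "right to audit" could run the same routine across many decisions and check whether the dominant factors act as proxies for protected attributes.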

References:

  • BERTOLINI, Andrea. Artificial Intelligence and Civil Liability. July 2020. Available at https://www.europarl.europa.eu/
  • DANAHER, John. The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy and Technology, 29(3), pp. 245-268, 2016.
  • EVAS, Tatjana. Civil Liability Regime for Artificial Intelligence: European Added Value Assessment. September 2020. Available at https://www.europarl.europa.eu/
  • LEWIS, Michael. Flash Boys: Cracking the Money Code. London: Penguin Books, 2015.
  • MARTIN, Kirsten. Ethical Implications and Accountability of Algorithms. Journal of Business Ethics, 160, pp. 835-850, 2019.
  • MARKOU, Christopher Philip. Law and Artificial Intelligence: A Systems-Theoretical Analysis. 2017.
  • O'NEIL, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin Books, 2016.
  • ORWELL, George. Nineteen Eighty-Four (1949). London: Penguin Group, 2008.
  • TAYLOR, L., & PURTOVA, N. What is Responsible and Sustainable Data Science? Big Data & Society, 6(2), 2019.
  • ZUBOFF, Shoshana. The Age of Surveillance Capitalism. London: Profile Books, 2019.

