AI and Privacy: The Needed Balance
Osama El-Masry
ME Lead - Data Responsibility & Privacy @ Cognizant | IAPP KnowledgeNet Chapter Chair | Ex-Vodafone DPO | FIP | CIPP/E | CIPM | CDPO | ISO27701 Sr. LI | ISO27001 Sr. LI | ITIL | PRINCE2 Certified & PECB Certified Trainer
No one can deny the rapid evolution of technology and how it benefits mankind. Yet such benefits always come at a cost, and it is the role of governments to lay down the rules and put in place proper governance so that we achieve the maximum benefit from these technologies with the least impact.
One of these technologies is AI, which in simple words is an automated decision-making process written in the form of an algorithm and based on historical data analysis (the machine learning technique) and/or rules derived from experience and know-how (the traditional technique). That process can vary widely: from simple data inputs and rules associated with certain outputs based on predefined criteria (e.g. assigning a gift or rate plan to customers with certain characteristics), through advanced analytics and data modeling that can predict a series of actions in the near future and make decisions accordingly (e.g. medical diagnosis and medicine development), up to complex analytics and data modeling that can predict or even alter a person's actions and decisions in the future (e.g. the marketing and targeted-ads industries).
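The simple, rule-based end of that spectrum can be sketched in a few lines of code. The customer fields, thresholds, and plan names below are hypothetical illustrations chosen for this sketch, not taken from any real system:

```python
# Hypothetical sketch of a "traditional technique" AI decision:
# predefined rules map customer characteristics to an output (a rate plan).
# All field names, thresholds, and plan names are illustrative assumptions.

def assign_rate_plan(customer: dict) -> str:
    """Return a rate plan name based on simple predefined criteria."""
    if customer.get("monthly_usage_gb", 0) > 50:
        return "Unlimited"   # heavy users get the unlimited plan
    if customer.get("tenure_years", 0) >= 5:
        return "Loyalty"     # long-standing customers get a loyalty plan
    return "Standard"        # everyone else gets the default plan

print(assign_rate_plan({"monthly_usage_gb": 60}))                    # Unlimited
print(assign_rate_plan({"tenure_years": 6, "monthly_usage_gb": 10}))  # Loyalty
print(assign_rate_plan({"tenure_years": 1, "monthly_usage_gb": 5}))   # Standard
```

The point of the sketch is that even this trivial rule set is an "automated decision" about a person; the machine learning end of the spectrum simply learns such rules from historical data instead of having them written by hand.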
Within the AI industry there are non-privacy-related fields (e.g. telecom network enhancement and medical diagnosis) and privacy-related fields (e.g. marketing and law enforcement). For the fields that deal with personal data, the potential privacy impact is a real concern, both at the level of protecting the personal data itself and at the level of how it is used and how decisions are made based on it. That privacy impact may take the following forms:
- Unintended Biased Decisions: Biased decisions may occur for several reasons, among which are a) biased decisions already embedded in the historical data used by the machine learning technique, b) technology limitations that make certain cases harder to detect than others (e.g. facial recognition technology), and c) under-sampling or over-sampling of the data used to build the algorithm.
- Unethical AI Use Cases: This impact arises on the side of technology developers, due to the power of the technology and the lack of boundaries defining which AI use cases are ethical and which are not, e.g. what occurred in the US elections, where people's choices were manipulated through posts tailored to their personal preferences and beliefs using their personal data, with AI attempting to convince them to favor one candidate over another.
- Data Leakage/Exposure: This is mainly due to the challenge of data disposal in AI, where it is very difficult to delete personal data after the end of its purpose and, consequently, hard to let people exercise their Right to be Forgotten as mandated under the GDPR, since their personal data are scattered across huge data lake environments and cannot easily be located and erased. This eventually leads to a risk of data leakage, given the unnecessarily large amounts of data held in these environments and the continuous hacking attempts to get hold of such data for potential abuse.
- Decision Ambiguity: This is mainly due to the lack of transparency and people's inability to exercise their "Right to Know" over their personal data: they do not understand how automated decisions are made, especially in the case of decisions they consider undesirable, and are therefore unable to validate whether a decision was relevant or not.
- Use of AI with Vulnerable Data Subjects: This covers using AI with vulnerable data subjects, e.g. children, mentally ill persons, asylum seekers, the elderly, patients, and sentenced persons, and in general wherever there is an imbalance between the position of the data subject and the organization using the technology towards them, such that those data subjects lack the ability or privilege to express disagreement or emotional discomfort with the decisions imposed on them.
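The under-sampling problem from the first bullet above can be demonstrated with a deliberately naive sketch. The groups, labels, and record counts below are invented for illustration; the "model" is simply a per-group majority vote, which stands in for any learner that generalizes from skewed training data:

```python
# Hypothetical sketch of bias from under-sampling: group B is barely
# represented in the training data, so the learned rule rejects it outright.
# Groups, labels, and counts are illustrative assumptions, not real data.
from collections import Counter

# Each record: (group, qualified?). Group A is well sampled (100 records);
# group B is under-sampled (5 records), so its few qualified cases vanish
# into noise.
training = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 2 + [("B", False)] * 3
)

def majority_label(group: str) -> bool:
    """A naive 'model' that predicts the most common label seen per group."""
    labels = [qualified for g, qualified in training if g == group]
    return Counter(labels).most_common(1)[0][0]

print(majority_label("A"))  # True: group A applicants are approved
print(majority_label("B"))  # False: the under-sampled group is rejected
```

A real model is more subtle than a majority vote, but the mechanism is the same: decisions about the under-sampled group are dominated by the few, possibly unrepresentative, records that made it into the training set.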
Current privacy laws and regulations have barely tackled the use of AI, and not with thorough consideration. For example, GDPR Art. 22 restricts automated decision-making that has a legal or similarly significant effect on people to three scenarios: a) performance of a contract, b) authorization by Member State law, and c) explicit consent of the data subject. In cases (a) and (c), suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests must be provided, including at least the right to obtain human intervention, to express a point of view, and to contest the decision; in addition, the use of sensitive data is restricted except in cases of explicit consent or substantial public interest. The Egyptian Data Protection Law, on the other hand, does not cover the use of AI and its associated privacy impacts. Such efforts to tackle AI in privacy laws are appreciated, yet a privacy law is not the right place: it would be more appropriate to have a more inclusive, dedicated law for AI specifically, laying down the rules that regulate the technology from different aspects, including privacy.
That said, and given that AI technology needs to be encouraged and boosted for the benefit of humanity in different fields, a governing law and regulation for AI is becoming inevitable, regulating both the development and the use of the technology to mitigate its various side effects, with privacy as one of the major aspects. Below are some suggested recommendations for the rules to be included in such a law:
- Mandating transparency to data subjects about how decisions impacting them are made and based on what criteria, balanced against not exposing the know-how or impacting the intellectual property of the algorithm used
- Giving data subjects the right to request proper justification and to contest a decision if it turns out to be invalid
- Requiring the concerned regulator to define high-level criteria for what constitutes ethical and unethical use cases of AI, along with a predefined list of unethical use cases that shall be prohibited
- Mandating the consideration, where applicable and feasible, of foundation rules that shall correct deviations in AI decisions, e.g. anti-discrimination rules
- Mandating proper documentation of AI use cases that deal with personal data and involve decisions concerning people, and of how they comply with the requirements of the concerned laws and regulations
- Mandating a documented periodic review of developed AI algorithms and the decisions they make over time, to ensure consistency with their initial intended objectives
- Restricting the use of AI with vulnerable data subjects, keeping it to a minimum and on a necessity basis, subject to the assessment and approval of the concerned regulator
Finally, the above are just a few recommendations. Collaboration among the private sector leading the AI industry, technologists, ethicists, researchers, academics, sociologists, policymakers, and governments is a must to create safeguards and proper governance frameworks to regulate and guide the development of AI for decades to come.