Protecting People’s Privacy in an AI-Driven World
On November 29, 2021, Kenyan taxi driver John Bigini received a text message from a mobile network operator: repay a $72 loan within 14 days, or his family and close friends would be informed of his default. Before the 14 days had even elapsed, John's wife and close relatives received calls and text messages about his loan default. How embarrassing.
How did the mobile network operator (MNO) obtain the contacts of John's close relatives? Through its artificial intelligence (AI) digital lending app. To perform risk assessment before disbursing microloans, the app scans consumers' devices, including their contacts, mobile money history, and social media footprint. In effect, consumers' personally identifiable information (PII) is harvested and potentially abused for corporate profit, as in this case. John was devastated; he did not understand his privacy rights and did not know where to report the incident.
Today, AI technologies are being introduced across all sectors of our daily lives: telemedicine, social media, human resource management tools, fintech applications, and more.
Most private and public institutions understand the benefits and impact of data analytics on their operations, whether or not that processing is ethical. Ethical or otherwise, these AI-related activities and their effects cannot be held to account unless backed by regulation. Ghana's Data Protection Act, 2012, for example, provides: "Processing of personal data 18. (1) A person who processes personal data shall ensure that the personal data is processed (a) without infringing the privacy rights of the data subject; (b) in a lawful manner; and (c) in a reasonable manner."
The General Data Protection Regulation (GDPR) safeguards the rights and freedoms of data subjects by holding organisations to standards of data protection, privacy, and ethics. This is notable, for instance, in the requirements of GDPR Art. 5(1)(a), lawfulness, fairness and transparency, and Art. 5(1)(b), purpose limitation, which together heighten the need to communicate with data subjects and to align processing with what they expect and with what is strictly necessary.
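To make the purpose-limitation idea concrete, here is a minimal Python sketch of a processing guard. Everything here is hypothetical for illustration: the `DECLARED_PURPOSES` set, the `PurposeViolation` exception, and the toy `process` function are not part of any real compliance library, and the scoring logic is a stand-in for an actual model.

```python
# Hypothetical sketch of a purpose-limitation guard (GDPR Art. 5(1)(b)):
# data collected for one declared purpose must not be silently reused
# for another (e.g. contacting a borrower's relatives to shame him).

DECLARED_PURPOSES = {"credit_scoring"}  # purposes the data subject consented to


class PurposeViolation(Exception):
    """Raised when processing is attempted outside the declared purposes."""


def process(record: dict, purpose: str) -> dict:
    # Refuse any processing whose purpose was never declared to the subject.
    if purpose not in DECLARED_PURPOSES:
        raise PurposeViolation(f"'{purpose}' was never declared to the data subject")
    # Toy scoring logic standing in for the real risk model.
    return {"score": 300 + 50 * record.get("repaid_loans", 0)}
```

A pipeline built this way fails loudly the moment a developer tries to repurpose loan data for, say, marketing or contact harvesting, instead of failing silently into a privacy breach.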
The foreseen and unforeseen risks to data privacy are real. Companies need to perform risk assessments from the perspective of data subjects while still fully reaping the benefits of AI. These risk assessments provide feedback to product development teams, who test and validate to ensure compliance with data protection laws.
One challenge at the intersection of artificial intelligence and privacy today is regulatory flux. In April 2021, the European Union proposed a new regulatory framework for artificial intelligence. The framework will complement the GDPR's regulation of AI in Articles 13, 15–22, and 25, and intends to focus on specific uses of AI systems and their associated risks.
Why wait for the framework to be finalised before implementing its requirements? Imagine investing years in product development only to discover that the result violates them. Your guess is as good as mine.
To ensure artificial intelligence respects privacy by design, the following are recommended:
1. Perform a data privacy impact assessment (DPIA) as part of requirements gathering in the software development life cycle (SDLC) when developing AI/ML-driven projects, i.e., privacy by design.
2. Can AI comply with privacy by design? Yes. When training AI models, decouple users' personally identifiable information from the training data via anonymisation and aggregation.
3. Obtain users' consent before using their PII.
4. Run consumer data-privacy awareness campaigns that also educate consumers about their rights.
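Recommendation 2 above, decoupling PII via anonymisation and aggregation, can be sketched in a few lines of Python. This is an illustrative example only: the field names (`name`, `phone`, `balance`), the sample records, and the fixed salt are all hypothetical, and a real deployment would manage the salt in a secrets store and apply far stronger de-identification (e.g. k-anonymity checks) before training.

```python
import hashlib

SALT = "rotate-me-per-deployment"  # hypothetical salt; keep it out of source control


def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with salted hashes so the model never sees raw PII."""
    out = dict(record)
    for field in ("name", "phone"):
        if field in out:
            digest = hashlib.sha256((SALT + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]
    return out


def bucket_balances(records: list, width: int = 100) -> list:
    """Aggregate exact balances into coarse buckets to reduce re-identification risk."""
    for r in records:
        r["balance_bucket"] = (r.pop("balance") // width) * width
    return records


# Hypothetical sample data standing in for a lender's customer table.
borrowers = [
    {"name": "John B.", "phone": "+254700000001", "balance": 172},
    {"name": "Mary W.", "phone": "+254700000002", "balance": 431},
]

training_rows = bucket_balances([pseudonymise(b) for b in borrowers])
```

After this step, the rows fed to model training carry opaque hashes and coarse balance buckets rather than names, phone numbers, and exact balances, so a leak of the training set does not directly expose any individual borrower.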
Artificial intelligence requires large volumes of data, often pooled in data lakes, to produce workable models. Public trust in the technology is therefore essential, and that trust cannot survive when data is used to the consumer's disadvantage.
A respectable company using AI is expected to respect the privacy of consumers at all times.