On March 1, 2024, the Personal Data Protection Commission (PDPC) of Singapore issued advisory guidelines on the use of personal data in AI systems that make recommendations, predictions, and decisions. The guidelines are advisory in nature and do not impose legal obligations on any party, including the Commission, organizations, or individuals. They seek to reassure consumers about how their personal data is used in these AI systems, which are often deployed to make decisions autonomously or to support human decision-makers with recommendations and predictions.
These Guidelines should be read in conjunction with the Commission’s Advisory Guidelines on Key Concepts in the PDPA, Advisory Guidelines on Selected Topics, as well as its Guide to Basic Anonymization. The Guidelines are structured according to the common phases organizations go through when deploying AI systems, which are as follows:
Part III : Using Personal Data in AI System Development, Testing and Monitoring
Organizations can act as AI developers either by creating AI models internally or by contracting service providers to develop custom AI applications using personal data that the organizations hold. When processing personal data at this stage, organizations may rely on several lawful bases, as follows:
B. Business Improvement Exception
The Business Improvement Exception enables organizations to use, without consent, personal data that they had collected in accordance with the PDPA, where such use falls within the scope of the following relevant purposes:
- Enhancing or creating new products and services.
- Improving or inventing new methods and processes for business operations related to products and services.
- Analyzing and understanding individual or group behavior and preferences.
- Identifying appropriate products or services for individuals or groups, and customizing these offerings for them.
When relying on the Business Improvement Exception for AI system development, testing, and monitoring, organizations should consider the following:
- Evaluating if the use of personal data enhances the effectiveness or quality of the AI systems and their outputs.
- Assessing the technical feasibility and cost-effectiveness of alternatives to using personal data for developing, testing, or monitoring AI systems.
- Considering common industry practices or standards in the development, testing, and monitoring of AI systems.
- Determining if the utilization of personal data contributes to the development of new product features and functionalities that enable organizations to innovate, enhance competitiveness, improve efficiency, and enrich consumer choice and experience.
C. Research Exception
The Research Exception is intended to allow organizations to conduct broader research and development that may not have immediate application to their products, services, business operations or market.
Organizations may use personal data for a research purpose, subject to the following conditions:
- The research objectives cannot be feasibly achieved without using personal data in a form that identifies individuals.
- Utilizing the personal data for research must offer a distinct public advantage.
- The research findings will not be used to make decisions impacting the individuals involved.
- Should the research results be published, they must be presented in a way that does not reveal the identity of the individuals.
Organizations can use the Research Exception as a basis to share personal data for research purposes, including sharing it with another company for collaborative research and development of new AI systems.
Applying Data Protection when using personal data in Development, Testing and Monitoring
- Data minimization should be practiced, i.e. limit the volume of personal data and use only what is required (see the Commission's guidance on data protection for ICT systems).
- Technical, process, and/or legal controls for data protection should be applied. Pseudonymization is encouraged as a means of de-identifying personal data.
- If pseudonymization is not possible due to the nature of the processing, organizations are expected to apply data security and data protection countermeasures in the development environment. In addition, organizations should consider conducting a Data Protection Impact Assessment (DPIA) to identify legal, technical, and process risks and to put appropriate controls in place to mitigate them.
- Integrate a Privacy by Design approach in the development of AI systems, ensuring privacy controls are embedded throughout the Software Development Life Cycle (SDLC) and Product Development Life Cycle (PDLC) processes.
- Develop and implement privacy policies that include guidelines specifically for AI development.
- Anonymize the datasets used in AI models to the greatest extent feasible, carefully balancing accuracy, repeatability, and reproducibility against the data's usefulness, with organizations assessing the advantages and disadvantages.
- Consider additional factors related to anonymization, such as whether it is reversible, the extent of disclosure, and the organization's maturity in safeguarding the dataset.
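The pseudonymization practice encouraged above can be sketched in code. This is an illustrative example only, not part of the guidelines: it replaces a direct identifier with a keyed hash before the record enters a training dataset. The field names and key-handling approach are assumptions for illustration.

```python
import hmac
import hashlib

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    Using HMAC rather than a plain hash means the pseudonym cannot be
    reversed by a dictionary attack unless the secret key is also
    compromised. The key must be stored separately from the dataset.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical training record: pseudonymize the identifier, keep the
# attributes needed for model development.
key = b"example-secret-key"  # in practice, load from a key management service
record = {"national_id": "S1234567A", "age": 34, "purchases": 12}
safe_record = {
    "user_pseudonym": pseudonymize(record["national_id"], key),
    "age": record["age"],
    "purchases": record["purchases"],
}
```

Because the same input and key always produce the same pseudonym, records belonging to one individual can still be linked during development without exposing the underlying identifier.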
Part IV: Collection and Use of Personal Data in AI Systems
Privacy areas to consider: Consent, Notification, and Accountability
Consent and Notification Obligation
- Consent for collecting and using personal data for AI is mandatory unless covered by deemed consent or exceptions such as Legitimate Interest.
- Consent should be complemented by notifications to users, detailing the purposes for collecting, using, or disclosing their data, including any secondary purposes.
- The combination of notification and consent ensures meaningful consent, which signifies a well-informed and clear agreement from individuals for processing their personal data.
- Notifications must include:
  - The product functionality that necessitates data collection and processing.
  - A general description of the types of personal data involved.
  - Assurance that the collection and processing of data are pertinent.
  - The specific features that require the processing of personal data.
- Notification mechanisms can include pop-ups or be made available upon request from the data subject. A layered notification approach may also be employed to highlight more critical information effectively.
- Legitimate Interest Exception: this may be relied upon, but organizations must follow the PDPA guidance to perform a legitimate interest assessment (LIA), and customers must be notified that the organization is relying on this legal basis. Example: using personal data in AI systems to detect and prevent illegal activities.
Accountability Obligation
- To uphold accountability, organizations must develop and implement written policies and procedures, ensuring they are well-documented.
- These internal policies and procedures should be consistent with the given notices.
- The use of AI systems must be transparent, incorporating documentation of the processes to ensure fairness and reasonableness.
- Policies and practices should be readily available upon request to showcase compliance with the Personal Data Protection Act (PDPA), demonstrating accountability.
- All policies ought to be straightforward, clear, and concise.
- Written policies serve as tools for education and building confidence, helping to meet regulatory requirements. These policies should cover:
  - Measures to ensure fairness and reasonableness during the development and testing stages, such as bias assessment, dataset quality assurance, and data governance.
  - Safeguards and technical measures for personal data protection during development and testing, including anonymization, pseudonymization, and other security measures, both pre- and post-deployment of the AI system.
  - The implementation of proper accountability mechanisms, along with human oversight.
- Organizations must prioritize data quality and governance measures in AI system development.
- Utilizing technical tools like AI Verify to assess the AI system's performance can inform policy development.
- Regular reviews and audits are crucial to verify the documentation's validity, applicability, and effectiveness, facilitating continuous improvement in governance.
- Conducting a Data Protection Impact Assessment (DPIA) is highly recommended.
Part V: Procurement of AI Systems - Best Practices for How Service Providers May Support Organizations Implementing AI Systems
Business to Business Provision of AI solutions
When employing a Service Provider (System Integrator) for the complete or partial development of AI systems, it is crucial for the provider to adhere to the Personal Data Protection Act (PDPA). Providers should take into account the following:
- In the preprocessing phase, implement data mapping and labeling to meticulously track the data used in the training dataset.
- Ensure comprehensive data lineage to trace the data source and its transformations throughout the process.
- The Service Provider must assist the organization in fulfilling their obligations regarding Consent, Notification, and Accountability. They should be ready to offer technical explanations or consultations to customers when needed.
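The data-mapping and lineage practice described above can be sketched as a simple provenance log attached to each dataset. This is an illustrative sketch only; the field names and lineage steps are assumptions, not structures prescribed by the guidelines.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEntry:
    step: str        # e.g. "collected", "pseudonymized", "labeled"
    source: str      # where the data came from, or what transformed it
    timestamp: str   # ISO 8601, UTC

@dataclass
class DatasetRecord:
    dataset_id: str
    contains_personal_data: bool
    lineage: list = field(default_factory=list)

    def log(self, step: str, source: str) -> None:
        """Append a lineage entry so every transformation is traceable."""
        self.lineage.append(
            LineageEntry(step, source, datetime.now(timezone.utc).isoformat())
        )

# Track a hypothetical training dataset through preprocessing.
ds = DatasetRecord("training-set-001", contains_personal_data=True)
ds.log("collected", "crm_export_2024_03")
ds.log("pseudonymized", "hmac_sha256_pipeline")
ds.log("labeled", "annotation_team_a")
```

A record like this lets the Service Provider answer, for any training dataset, where the data came from and which transformations were applied, which supports the consent, notification, and accountability obligations above.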
Service Providers should execute the following steps:
- Gain an understanding of the information customers will likely need, considering their requirements and the potential impact on users.
- Design the system in a manner that ensures access to pertinent information.
This article is intended for public consumption only, as a high-level summary of the PDPC Advisory Guidelines on the Use of Personal Data in AI Systems, and is not intended for professional or business reference. The author is not accountable for any potential errors or oversights in this article.