Korean Generative AI User Protection Guidelines
Yesterday, the Korea Communications Commission released the "Guidelines for User Protection of Generative AI Services," which contain practical measures that operators can consult to prevent harm arising from the use of generative AI services that produce text, audio, and images. The guidelines, which take effect on March 28, set out four basic principles that generative AI developers and service providers should uphold throughout their services, along with six implementation methods for realizing them.
The guidelines are designed to promote responsible development and use of generative AI, emphasizing user rights, ethical considerations, and the establishment of effective self-regulatory frameworks. They call for a collaborative approach, encouraging developers, service providers, and users to participate actively in creating a secure and trustworthy AI ecosystem.
1. Key Themes and Ideas
1.1 Purpose and Goals
The primary aim is to ensure user rights are protected and that the benefits of generative AI are accessible to everyone. This involves preventing risks and building a reliable environment.
1.2 Definitions
Clear definitions are provided for key terms such as "generative AI," "developers," "service providers," "users," and "generative AI outputs." This establishes a common understanding of the concepts discussed.
2. Fundamental Principles
The guidelines are based on four fundamental principles.
2.1 Human Dignity and Rights
Services must protect human dignity, guarantee individual freedom and rights, and be controlled and supervised by humans.
2.2 Transparency and Explainability
Users should be provided with easy-to-understand explanations of how the AI system works, its outcomes, and its impact on them.
2.3 Safety and Security
Services should operate safely, minimize unexpected harm, and prevent malicious use or modification.
2.4 Fairness and Non-Discrimination
Services should avoid discrimination or unfair outcomes for users.
3. Specific Action Plans for User Protection
The guidelines detail six action plans.
3.1 Protecting User Personality Rights
This covers measures to prevent AI systems from generating outputs that infringe on users' privacy or other personality rights, including developing algorithms to detect and control elements that could violate those rights, implementing monitoring systems and reporting processes, and carefully considering the scope and methods of service provision so as not to infringe on user rights.
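The reporting process described above could be sketched as a simple review queue that users file complaints into and operators work through. This is purely an illustrative assumption; the guidelines do not prescribe any data model, and all names and fields here are hypothetical.

```python
# Hypothetical sketch of a user-report mechanism: users flag outputs they
# believe infringe their rights, and flagged items enter a review queue
# that the service operator must examine and act on.
from dataclasses import dataclass
from typing import List

@dataclass
class Report:
    output_id: str
    user_id: str
    reason: str
    status: str = "pending"   # pending -> reviewed

review_queue: List[Report] = []

def report_output(output_id: str, user_id: str, reason: str) -> Report:
    """File a user report; the operator later reviews queued reports."""
    report = Report(output_id, user_id, reason)
    review_queue.append(report)
    return report

r = report_output("out-123", "alice", "output discloses my private address")
print(r.status, len(review_queue))
```

The point of the sketch is the flow (flag, queue, review), not the storage: a real service would persist reports and notify reviewers rather than keep an in-memory list.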
3.2 Promoting Transparency in Decision-Making
Service providers should inform users that content is AI-generated and provide basic, understandable information about the AI's decision-making processes. This can be achieved by labeling outputs as AI-generated, disclosing the AI model used, and explaining the decision-making process when a user requests it, within the bounds of protecting business interests.
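The labeling measure could look something like the following minimal sketch, in which every generated output is wrapped with disclosure metadata (an AI-generated flag and a model identifier) that a client can render as a label. The class, field names, and model name are all assumptions for illustration, not anything specified by the guidelines.

```python
# Hypothetical sketch: wrap each generated output with disclosure metadata
# so the client can display an "AI-generated" label and basic model info.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    content: str
    ai_generated: bool = True               # explicit AI-generated flag
    model_name: str = "example-model-v1"    # assumed model identifier
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def label_output(content: str, model_name: str) -> LabeledOutput:
    """Attach the disclosure metadata the transparency measure calls for."""
    return LabeledOutput(content=content, model_name=model_name)

out = label_output("Sample answer text.", "example-model-v1")
print(out.ai_generated, out.model_name)
```

Keeping the disclosure on the output object itself, rather than in the UI layer, makes the label travel with the content wherever it is stored or forwarded.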
3.3 Respecting Diversity
Service providers should work to reduce bias in algorithms and data and to ensure diversity in AI-generated content. This includes designing algorithms and collecting data in ways that minimize bias, establishing internal principles and standards to reduce bias in outputs, providing mechanisms for users to report biased outputs, and implementing filtering functions to prevent discriminatory use by users.
3.4 Managing Input Data Collection and Usage
Service providers must inform users if their input and generated data will be used for training the AI and provide them with the option to consent or refuse. They should also guarantee users' right to choose whether their data is used and establish internal oversight to ensure data is used safely and legitimately.
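The consent requirement above could be sketched as a simple gate: user input is only retained for model training if the user has opted in, and the choice can be changed at any time. The consent store and function names are illustrative assumptions under this sketch, not part of the guidelines.

```python
# Hypothetical sketch of the consent check: prompts are queued for training
# only when the user has opted in; the default is NOT to use the data.
from typing import Dict, List

training_consent: Dict[str, bool] = {}   # user_id -> opted in? (assumed store)
training_queue: List[str] = []

def set_training_consent(user_id: str, opted_in: bool) -> None:
    """Record the user's choice; refusal must be as easy as consent."""
    training_consent[user_id] = opted_in

def submit_prompt(user_id: str, prompt: str) -> None:
    """Retain the prompt for training only if the user consented."""
    if training_consent.get(user_id, False):  # default: not used for training
        training_queue.append(prompt)

set_training_consent("alice", False)
submit_prompt("alice", "private question")
print(len(training_queue))
```

The key design choice, consistent with the guideline's emphasis on the right to refuse, is that the absence of a recorded choice is treated as a refusal.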
3.5 Addressing Issues Arising from Generative Content Use
Service providers should define the responsibilities of both themselves and users concerning generative AI outputs and inform users of their responsibilities during the usage phase. They also need to establish monitoring systems or other risk management frameworks to minimize the occurrence of unforeseen harm.
3.6 Promoting Healthy Distribution of Generative Content
Service providers should guide users to avoid creating or sharing inappropriate content using generative AI services. They also need to review and manage whether users' prompt inputs and generated outputs comply with moral and ethical standards and strive to prevent users from intentionally or unintentionally distributing harmful content to adolescents.
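The review-and-manage measure above implies a moderation hook on both the prompt and the output. The sketch below shows that control flow with a placeholder keyword list; a real service would use trained classifiers and a maintained policy, so every name and term here is an assumption for illustration only.

```python
# Hypothetical sketch of a pre/post moderation hook: check the prompt
# before generation and the generated output after it.
BLOCKED_TERMS = {"example_harmful_term"}  # placeholder policy list

def violates_policy(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def moderated_generate(prompt: str, generate) -> str:
    """Refuse policy-violating prompts and withhold policy-violating outputs."""
    if violates_policy(prompt):
        return "[request refused: prompt violates content policy]"
    output = generate(prompt)
    if violates_policy(output):
        return "[output withheld: generated content violates policy]"
    return output

result = moderated_generate("hello", lambda p: f"echo: {p}")
print(result)
```

Checking both sides matters: a benign prompt can still yield harmful output, and vice versa, so neither check alone satisfies the measure.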
4. Other Items
4.1 Background and Necessity
The guidelines highlight the benefits and risks of generative AI, referencing ethical principles in AI model development, user policies for AI services, and guidelines from organizations such as the OECD and UNESCO. They acknowledge that generative AI's rapid expansion has raised issues related to technical imperfections and limited user understanding, and they aim to address these issues.
4.2 Digital Watermarking
The guidelines acknowledge the increasing prevalence of AI-generated content and the corresponding push for digital watermarking regulations globally, including in Korea, the US, the EU, and China. They note that while digital watermarking can protect copyright and prevent unauthorized duplication, it still has technical limitations and needs improvement across content types, including text, audio, and video.
4.3 Diversity, Equity, and Inclusion
The guidelines emphasize the need to address bias and promote diversity in AI systems. They highlight the potential for biased AI outputs to reinforce social prejudices and ethical problems, as well as hinder technological progress. The guidelines recommend that developers and service providers implement measures to mitigate bias, ensure fairness, and respect diverse perspectives.
5. Conclusion
These guidelines represent a significant step toward the responsible development and use of generative AI in Korea. By focusing on user protection, transparency, fairness, and ethical considerations, they aim to create a safe and beneficial environment for the adoption and advancement of generative AI technologies. The practical examples included offer valuable insight into real-world implementation, helping developers, service providers, and users navigate the complex landscape of generative AI with greater confidence and responsibility. The periodic review requirement ensures that the guidelines remain relevant and effective as the technology evolves.