Asia AI Policy Monitor #8


Competition

Korea’s Fair Trade Commission will examine the domestic and foreign genAI industry for competition issues through a survey that began earlier this month.

Intellectual Property

Korea’s Presidential Committee on IP published a report on AI and IP issues, calling for further stakeholder engagement and international cooperation on the issues, in particular around genAI content.

India’s Bombay High Court issued a judgement in favor of a plaintiff who sued a genAI company for infringing his right of publicity/personality in his image, name and voice.

A Japanese manga artist turned lawmaker suggests that genAI providers set aside 1% of earnings to share with artists:

Akamatsu [the lawmaker] said Japan should be "cautious" about legal restrictions on AI, not just because of the party's pro-business policies, but also to protect creators.


AI image prompt: "make a manga character holding a microchip"

Privacy

Hong Kong’s Privacy Commissioner for Personal Data published a guide to preventing deepfake fraud; its six tips include:

Be vigilant: Think twice before providing any personal data, verify the purpose of collection of such data and whether it is mandatory to provide them. Do not disclose personal data to others arbitrarily, avoid clicking or scanning suspicious links and QR codes, and do not log into any suspicious websites;

Finance

The Hong Kong Monetary Authority (HKMA) issued guidance on consumer protection in the use of genAI in the finance industry.

Additionally, the HKMA is supporting a genAI sandbox approach to boost the use of AI tools in the financial industry. Per the HKMA chief executive:

The new GenA.I. Sandbox is a pioneering initiative that promotes responsible innovation in GenA.I. across the banking industry. It will empower banks to pilot their novel GenA.I. use cases within a risk-managed framework, supported by essential technical assistance and targeted supervisory feedback. Banks are encouraged to make full use of this resource to unlock the power of GenA.I. in enhancing effective risk management, anti-fraud efforts and customer experience.

Trust and Safety

The Office of the Australian Information Commissioner issued a determination regarding Clearview AI’s practice of scraping Australians’ facial biometric data from the internet.

The UNODC finalized the Cybercrime Convention, which will address AI-enabled cybercrime such as fraud, deepfake intimate imagery, and other criminal abuse material generated with genAI.

  • What we are thinking: Given Asia’s position as both a source of and target for cybercrime, policymakers in the region should track accession to and support for the treaty.

India’s Ministry of Electronics and IT (MeitY) published guidance on preventing the use of deepfakes for misinformation:

Intermediary platforms [are] required to act expeditiously within the timelines prescribed under IT Rules, 2021, on grievances received…

Korea’s Ministry of Science and ICT is actively supporting the development and use of cybersecurity-related datasets and the deployment of AI in the sector. According to a ministry spokesperson:

AI deployment is not an option, but a must, for evolving cyber threats.

Taiwan’s Ministry of Digital Affairs shared details of how they are addressing AI-fueled fraud:

Vice President Ying-Dar Lin of NICS stated we need to use AI to combat AI. According to AI detection results, approximately 16,000 fan page accounts have posted fraudulent advertisements…. They post 5,000 to 10,000 fraudulent ads daily that only last 1-2 days, creating an illusion of diversity and popularity among audiences and exploiting echo chambers for free dissemination.

Singapore’s Cyber Security Agency published a report on cyber threats in 2023, noting the increased use of genAI to enhance phishing through deepfake videos and audio:

Threat actors have weaponised AI to accelerate and scale up their malicious operations. The threat of AI-enabled attacks will only intensify as the technology improves, and it remains to be seen how threat actors will further exploit such technology for cyber-attacks on the horizon…

The Third Plenum of China’s 20th CPC Central Committee included a statement on AI requiring:

…instituting oversight systems to ensure the safety of artificial intelligence.

Rights, Democracy, Environment

The US State Department issued the AI and Human Rights Risk Management Profile.

  • What we are thinking: An interesting first step in this discussion, and more can be expected at the intersection of business and human rights, especially on supply chain due diligence, given Asia’s role as a node in AI infrastructure, development, deployment and use. Some gaps are the existing concerns around low-cost labor used to train some models, and the increasing strain on environmental sustainability. Digital Governance Asia will moderate a session on the topic at the UNDP Responsible Business and Human Rights Forum APAC.

Singapore’s Minister for Digital Development and Information Josephine Teo indicated that the country may target rules against the use of deepfakes around elections, similar to rules South Korea imposed earlier this year and to calls made in the Philippines ahead of next year’s elections.

Locals have sued over allegations of illegal dumping near the site of a Microsoft data center under construction in India, according to reporting by Rest of World.

  • What we are thinking: Environmental concerns around AI’s impact, in particular its increased use of electricity and water, are growing globally and in Asia; Microsoft itself doubled its electricity use from 2020 to 2023.

Multilateral

Japan and Vietnam signed an MOU on ICT cooperation, including AI.

Japan and Costa Rica signed a memorandum on ICT, including provisions to promote the Hiroshima AI Process, AI governance and digital infrastructure.

The UK-India Technology Security Initiative was launched by the prime ministers of both countries, covering emerging technologies such as AI. The initiative includes joint university research, support for existing multilateral AI governance efforts such as GPAI and the G20, and the formation of a joint Centre for Responsible AI.

The Second US-Singapore Critical and Emerging Technology Dialogue took place, including a large section on AI-focused joint research, standards setting and convening of AI Safety Institutes. Further to these meetings, the US-Singapore Digital Economy Cooperation Roadmap includes important cooperation throughout the region:

The United States and Singapore have committed to establishing a Smart Cities Program on AI in February 2025 through the Singapore-US Third Country Training Program (TCTP) to deliver capacity-building to ASEAN and Pacific Islands Forum members.

Advocacy

Hong Kong’s Intellectual Property Department issued a consultation paper on Copyright and AI, with public comments open until September 9.

Taiwan’s draft AI Basic Law is open for comment until September 13.

Vietnam’s draft Law on Digital Technology Industry (including, but not limited to AI) is open for public comment until September 2.

Singapore’s Cyber Security Agency is conducting a public consultation on Securing AI Systems until September 15.

Australia’s Competition and Consumer Commission is conducting a public comment period until August 23 on various digital platform service issues, including AI.

China’s Ministry of Industry and Information Technology is holding a public comment period until September 1 on IoT-connected, smart, or autonomous vehicles.

China’s Ministry of Industry and Information Technology also opened a public comment period to collect use cases of AI in industrial development by September 13.

UNESCO is conducting a public comment period regarding its research on AI and the Judiciary until September 5.

Additionally, UNESCO is conducting a consultation on its paper on global AI regulation approaches until September 19.

To better understand the current AI governance environment, UNESCO has mapped the different regulatory approaches for AI. The consultation paper will be published as a policy brief to inform and guide parliamentarians in crafting evidence-based AI legislation.

The OECD (which includes Japan) is conducting a pilot survey of its International Code of Conduct for Organizations Developing Advanced AI Systems, based on the G7 Hiroshima Process, until September 6. The code of conduct for the G7 Hiroshima AI Process can be found here.

In the News

The New York Times reports on how China is leading in the deployment of autonomous vehicles in the city of Wuhan.

A recent report explores the problems that come from India’s deployment of facial recognition software in its massive railway system to fight crime. The implications for rights to privacy, freedom of association, and freedom of movement are explored, among other concerns around the technology’s ability to detect emotions, micro-expressions, and even gaze direction.

Tech Policy Press has a great analysis out on the difference between Taiwan’s and India’s approaches to disinformation, which is important in the context of AI-enabled disinformation:

These competing regulatory models illustrate a divide in technological governance as governments evolve strategies to deal with online harms. An important meta-question that regulators constantly grapple with is what is the appropriate level of intervention that state actors must exercise to ensure their public policy objectives.

404 Media published a great analysis of genAI content-farm creation, which has flooded social media (e.g., Facebook) in the past few months. Much of the content is being made in India, Vietnam, and the Philippines, and viewed in the US.

China’s socialist chatbots may be doomed to failure. This of course points to inherent issues with LLMs: confabulation (hallucination) and other alignment problems.

  • What we are thinking: As we have written previously, countries across Asia have focused on developing LLMs, primarily to offer high-quality products in local languages, but language and politics are never far apart.

Hong Kong’s Privacy Commissioner for Personal Data penned an op-ed for the South China Morning Post on the recently released AI Data Protection Framework published by her office.

China’s Cyberspace Administration published its seventh batch of genAI service providers.

A cybersecurity firm recently exposed a China-based threat actor network consisting of thousands of fake profiles on X:

Researchers believe the cluster of at least 5,000 unauthentic X accounts, dubbed the Green Cicada Network, is almost certainly controlled and coordinated by an artificial intelligence Large Language Model (LLM)-based system.

Government Policy

Australia’s Digital Transformation Agency released the “Policy for the responsible use of AI in government.” The document requires agencies using AI (except those in defence and security) to disclose use of AI in their services.

Australia’s parliament passed a bill amending the criminal code to include provisions against deepfake sexual abuse material.

New Zealand’s Ministry of Science, Technology and Innovation released a paper with recommendations on approaches to its work on AI, including several recommendations to the cabinet such as implementing a strategic approach along the lines of OECD recommendations.

Korea established regulations for its National AI Council. Issue areas for examination by the committee include research and development, data center expansion, ethics, governance, and labor and economic impacts.

Analysis

The Center for Data Innovation published a report on the divergent views of Chinese and British experts on AI risk and collaboration:

Despite significant geopolitical differences, a series of interviews with AI experts in China and the United Kingdom reveals common AI safety priorities, shared understanding of the benefits and risks of open source AI, and agreement on the merits of closer collaboration—but also obstacles to closer partnerships. Fostering a closer relationship could help both countries achieve their objectives of developing innovative, safe, and reliable AI.

The AI Asia Pacific Institute published a report on the State of AI in the Pacific Islands:

Lessons from other regions point to the benefits of fostering digital literacy, developing comprehensive AI governance frameworks, and sharing resources and expertise. To address these needs, the report recommends the establishment of a Pacific Islands AI Technical Assistance Facility.

The Australian Institute of International Affairs published a report on The Indo-Pacific’s Artificial Intelligence Defence Innovation Race, summarizing strategies across the Indo-Pacific for military uses of AI:

China is an exemplar of the guided innovation strategy. By percentage of GDP committed, China has the world’s largest national industrial plans and has influenced many to follow suit…

Concordia AI has translated and provided an analysis of China’s Third Plenum statements on AI and the associated study materials provided to party cadres. The background to these explanations is as follows:

Motivations for creating AI safety oversight systems are explained in terms of responding to rapid AI development, promoting high-quality development, and participating in global governance.

The law firm Baker McKenzie published the APAC AI Governance Regulatory Primer, with great information on the state of play regarding AI rules and regulations within the region.



The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a US-based non-profit organization. If you are interested in contributing news, insight or analysis, or participating in advocacy to promote Asia’s innovation in AI and digital regulation, please reach out to our secretariat staff at APAC GATES, Seth Hays at [email protected].

Visit us here: digitalgovernance.asia




