#13 AI Policy in Asia

Military, GenAI, Human Rights, AI Safety Institutes, Trust/Safety in Thailand, China, Australia, Malaysia, Taiwan, Japan, Singapore, South Korea and more...

Thanks for reading along with over 1,600 other AI policy professionals across multiple platforms to understand the latest policies affecting the AI industry in the Asia-Pacific region, brought to you by Digital Governance Asia. Do not hesitate to contact our editors at [email protected] if we missed anything!

Governance

Australia’s Minister for Industry and Science indicated that the country will continue its regulatory push on AI and online child safety, despite anticipated changes in US policy following Donald Trump’s win in the US presidential election, which is expected to bring an industry-friendly approach with little to no regulatory oversight. The Minister said:

“The US may adopt in time a different approach to what the Biden administration had undertaken – we’ll wait and see and let that play out. But there are a lot of other countries that are thinking deeply about this and acting on it. We have a job we’ve said we’ll do for the public, and there’s an expectation … we will continue to do that, and we will. We will harmonise where we can and localise where we have to.”

Taiwan’s Ministry of Digital Affairs (MODA) initiated collaboration with local governments on the use of AI.

MODA stated that the meeting’s theme was Innovative AI Applications in the Public Sector, featuring experts sharing practical applications of AI tools for writing and generating charts. Taipei Veterans General Hospital introduced its adoption of Voice Recognition AI to automatically create nursing records. The Land Administration Department of Yilan County demonstrated the use of AI-Assisted Real Estate Registration Review, which quickly verifies identities and documents to prevent identity theft and document fraud. MODA’s Administration for Digital Industries discussed measures for procuring commercial AI services through joint supply contracts.

Thailand’s Ministry of Digital Economy and Society issued a Generative AI Governance Guideline for companies using the technology. The guideline covers five parts:

1) Understanding Generative AI: lays a foundation for those involved in the organization to understand consistent principles in terms of definitions, meanings, and related terminology.
2) The benefits and limitations of Generative AI: shows the perspective of practical application, along with interesting use cases.
3) The risks of Generative AI: creates an understanding of the risks of Generative AI, along with guidelines for managing these risks appropriately for the organization’s actual use context.
4) Guidelines for applying Generative AI: creates an understanding of both the structure and the form of application appropriate to the organization’s context.
5) Considerations for applying Generative AI with good governance: focuses on organizations striking a balance between the utilization and risk management of Generative AI, while encouraging relevant parties to participate appropriately in the various processes.

Military

The UN General Assembly’s First Committee approved a resolution proposed by the Netherlands and South Korea on the implications of artificial intelligence (AI) in the military domain.

…States would be encouraged to pursue efforts at all levels to address related opportunities and challenges, including from humanitarian, legal, security, technological and ethical perspectives, by one of 14 drafts passed today in the First Committee.

China has developed an LLM based on Meta’s open-source Llama model, which reports indicate is being used by the PLA.

In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what it calls "ChatBIT"…

Human Rights and Environment

Malaysia’s Johor state, which borders Singapore, is seeing a boom in data centers that is driving AI development in the region while taxing local resources.

“With Singapore’s moratorium, Johor was a natural recipient of these investments,” he said. “There’s access to power infrastructure, water availability, submarine cable landings, and abundant land. Because Malaysia was already prepared with the infrastructure, data centers found it easier to land in Johor.”

The advisor to Bangladesh’s Ministry of Foreign Affairs advocated for human rights-centered use of AI.

In his address to the ministerial session, he called for responsible use of Artificial Intelligence (AI) in security and border management, emphasising that AI must respect human rights and be tailored to local contexts.

Trust, Safety, Cybersecurity

A recent article notes the rising debate about regulating deepfakes in Malaysia, covering how Singapore and South Korea have recently passed legislation addressing such genAI content:

Several high profile incidents have already occurred this year, with celebrities such as Datuk Seri Siti Nurhaliza Tarudin, athletes like Datuk Lee Chong Wei, and corporate figures like Petronas CEO Tan Sri Tengku Muhammad Taufik having their likenesses used in deepfakes promoting investment scams.

A recent Lawfare article sheds light on how AI will foster more disinformation, citing a case from Australia:

Much ink has been spilled on the use of generative artificial intelligence (AI) in influence operations and disinformation campaigns. Often the scenarios invoked hang along pretty clean lines: a known state actor, a clear target, a specific goal. The archetypal examples are campaigns like Russia’s Doppelganger or China’s Spamouflage, both of which the U.S. Department of Justice has traced back to specific government-linked entities with clear political aims.

A report from Australia’s Cyber and Infrastructure Security Centre states that the country is susceptible to AI-driven malware attacks on critical infrastructure. Per officials with the Centre:

Rapid uptake of artificial intelligence is enabling more persuasive and individually targeted cyber attacks, complicating mitigation. AI-driven attacks will further complicate the cyber security environment within Australia. Threat actors are embracing, integrating and evolving the use of AI in their operations. AI is already facilitating the creation of adaptable malware and enabling more realistic and tailored social engineering attacks to manipulate targets. AI is lifting the capability of all cyber threat actors to conduct attacks at greater speed, scale and effectiveness, and at a rate that may outpace many system defence capabilities. Less skilled threat actors are leveraging the increased commercialisation and public availability of AI tools to deploy ransomware, create deep fakes or conduct low-effort, yet high-yielding social engineering campaigns. These can be highly convincing and difficult to distinguish from authentic interactions, making detection efforts increasingly challenging for organisations and individuals.

South Korea’s President has initiated a 7-month police crackdown on deepfake pornography (more on Korea’s Privacy regulator’s participation below):

President Yoon Suk Yeol quickly confirmed the rapid spread of explicit deepfake contents and ordered officials to “root out these digital sexual crimes.” Police are now on a seven-month special crackdown that is to continue until March 2025.

Privacy

South Korea’s privacy regulator contributed to the President’s crackdown on deepfake non-consensual intimate imagery, outlining four areas and ten initiatives:

1. Strong and effective punishment
2. Improving platform accountability
3. Rapid victim protection
4. Public awareness

Multilateral

The forthcoming AI Safety Institute International Network will hold its first meeting in the US, and the Center for Strategic and International Studies (CSIS) published a piece asking nine questions about how the network will feed into other international AI governance and safety initiatives. Members of the network from the Asia-Pacific include Japan, Singapore, South Korea, and Australia.

The AISI International Network marks a significant next step in global AI safety efforts. The network provides an opportunity to build international consensus on definitions, procedures, and best practices around AI safety; reach economies of scale in AI safety research; and extend U.S. leadership in international AI governance. The similarities between currently established AISIs in terms of size, funding, and functions provide a strong basis for cooperation, though network members must be aware of the different institutions in which different AISIs are housed.

The AI Safety Institutes of Singapore and the UK signed an agreement to bolster AI governance and safety:

The new MoC will strengthen cooperation between the AI Safety Institutes (AISIs) of both countries. Key areas of collaboration include:
i. AI Safety Research: Enhancing joint efforts to advance the science of AI safety, focusing on developing safer AI systems and risk management.
ii. Global Norms: Collaborating on international AI safety standards and protocols, including through possible cooperation with the Network of AI Safety Institutes, ensuring a global approach to AI risk mitigation.
iii. Information Sharing: Expanding knowledge exchange between the two countries’ AI Safety Institutes to ensure that AI systems are developed and deployed in ways that are trustworthy and safe for global use.
iv. Comprehensive AI Testing: Joint development of safety testing frameworks that provide robust evaluations throughout the AI lifecycle.

China hosted a World Customs Organization meeting on the use of AI technology to facilitate cross-border trade and risk assessment.

Discussions also highlighted the tangible benefits of AI integration, such as greater risk management accuracy, reduced repetitive workloads, enhanced operational coverage, accelerated clearance times, and improved consistency in decision-making. Achieving these benefits, however, requires ongoing investments in specialized expertise, advanced computational resources, robust data analytics infrastructure, and well-defined policies.

In the news

At the recent APEC summit in Peru, Malaysia’s Prime Minister Anwar stated that the country will not get caught in the US-China competition around AI.

Japan released a plan to boost the AI and chip industries with JPY 10 trillion in support for domestic chip manufacturers and AI talent development.

Google, Temasek, and Bain’s e-Conomy SEA 2024 report highlights Southeast Asia’s AI industry potential and user base.

Taiwan’s TSMC will stop producing advanced AI chips for Chinese customers in line with US export controls:

Taiwan Semiconductor Manufacturing Co (TSMC) has notified Chinese chip design companies that it is suspending production of their most advanced AI chips from Monday, the Financial Times reported, citing three people familiar with the matter.

Advocacy

Japan’s Fair Trade Commission opened a public comment period until 22 November on Generative AI Market Dynamics and Competition:

Given the rapidly evolving and expanding generative AI sector, the JFTC has decided to publish this discussion paper to address potential issues and solicit information and opinions from a broad audience. The topics outlined in this paper aim to contribute to future discussions without presenting any predetermined conclusions or indicating that specific problems currently exist. The JFTC seeks insights from various stakeholders, including businesses involved in different layers of generative AI markets (infrastructure, model, and application layers as described in Section 2), industry organizations, and individuals with knowledge in the generative AI field.

Sri Lanka’s National AI Strategy is open for consultation until 6 January 2025.


The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, insight or analysis, or participating in advocacy to promote Asia’s innovation in AI and digital regulation, please reach out to our secretariat staff at APAC GATES, Seth Hays at [email protected].

Asia AI Policy Monitor is free. Let us know if you want to contribute to or support the network. To submit articles or join our network of policymakers and analysts, please email our editor at [email protected]. To support financially, click below.
