AI-Pac October edition: the Flint digest on AI policy in Asia-Pacific

AI-Pac is Flint’s digest providing our take on the major developments in Asia-Pacific AI policy and regulation. In a fast-moving environment, this newsletter is designed to help you stay on top of the regulatory agenda and to consider what it means for your business. Please see here for our first six blogs, covering January to September. In each update, we will set out:

  • Major regulatory developments across Asia-Pacific regarding AI regulation;
  • Overall trends of AI regulation in Asia-Pacific;
  • What this means for your business and how business might best prepare.

In October, we saw the evolution of several APAC governments’ AI governance approaches. Key AI policy updates included:

  • India is contemplating the creation of an AI safety institute as part of a global network of such institutes, consulting on its remit, structure, and funding in a closed-door stakeholder meeting.
  • Singapore implemented legislation aimed at countering election-related deepfakes, reinforcing its targeted approach aimed at regulating specific AI harms.
  • Hong Kong issued its first policy blueprint for responsible AI use in finance, previewing key financial regulators’ AI workstreams.
  • Australia faced pushback from its regulators and from industry over its proposed mandatory guardrails for high-risk AI; it is unclear how the government will respond in a politically charged context.

If you would like to discuss how AI policy and regulation in Asia-Pacific will impact your business, please get in touch with David and Ewan here.

India: AI safety institute, the Indian way?

On 7 October, the Indian government convened a closed-door meeting to seek input on creating an Indian AI safety institute (AISI). Attendees from industry, academia and civil society agreed on the need for an Indian AISI, though its specific mandate, structure, and budget require further deliberation.

In designing the mandate for its safety institute, India will, in part, draw on the AI safety institutes in the UK, US, and Japan, recognised as the global “first-wave” of such organisations. This suggests a mandate for foundational AI safety research, domestic standards-setting, and international cooperation. So far, there is little clarity on what the mandate will look like, but it is likely to also reflect the distinctive circumstances of AI development in India. The safety institute could, for example, focus more on application-level safety frameworks than testing frontier models (a focus of the UK’s AI safety institute) or on providing tools to strengthen AI literacy, a key issue in light of India’s particular emphasis on the inclusivity of innovation in AI.

So far, officials have indicated that the safety institute will not act as a separate regulatory body. It will primarily help set safety standards and frameworks and offer regulatory guidance on mitigating AI risks. The ongoing delay in releasing the proposed Digital India Act (DIA), intended to replace the IT Act and potentially incorporate a governance framework for AI, has created a vacuum in the country’s AI regulation. The safety institute might, therefore, serve as an interim proxy for AI regulation until a comprehensive law such as the DIA is implemented.

The announcement forms part of India’s broader ambition to carve out a leadership role in the development of AI, in particular among countries in the global South (see our previous blog on India’s areas of focus in global forums). Since India’s general election in April-June, the pace of digital policymaking has slowed as the government now has to work with its coalition partners, but AI remains a priority. Developing domestic capabilities is a key area of focus, with NVIDIA’s Jensen Huang recently exhorting India to leverage its human talent and data resources to “manufacture its own AI”.

Singapore: a surgical approach to election deepfakes

On 15 October, Singapore’s Parliament enacted the Elections (Integrity of Online Advertising) (Amendment) Bill, amending both the Parliamentary Elections Act 1954 and the Presidential Elections Act 1991. With a general election due by November 2025, this legislation prohibits the publication of convincingly manipulated digital content during the election period that could mislead voters about a candidate’s statements or actions.

Under the legislation, candidates can request the Returning Officer (RO), a public officer appointed by the Singaporean Prime Minister to oversee the election, to act against those disseminating such misleading content. The RO can issue corrective directions requiring individuals who publish such content, social media services, and Internet Access Service Providers to take down or disable access to it. However, some experts have raised concerns that this process could be abused, with candidates falsely claiming that genuine content is AI-manipulated. It also remains to be seen whether an approach focused on individual pieces of content would scale to countries where greater volumes of content are produced and go viral rapidly; by the time a removal order is issued, it may be too late to undo the damage of the content having been viewed by large numbers of voters.

This surgical approach to addressing a specific instance of AI harm exemplifies Singapore’s inclination towards vertical AI regulation: legislating on a specific issue rather than implementing horizontal legislation aimed at countering all AI-related risks. In December 2023, South Korea implemented similar legislation ahead of its National Assembly elections in April 2024, banning the production, editing, distribution, screening and posting of deepfakes during the 90 days leading up to an election. Policymakers across Asia and globally will be watching closely the role that AI plays in the US election and its aftermath, which could hasten the adoption of more drastic regulatory approaches aimed at protecting election integrity from AI-related risks.

Hong Kong: reinforcing financial regulators’ specific remit in AI guidelines

On the opening day (28 October) of Hong Kong’s 2024 Fintech Week, the Financial Services and Treasury Bureau (FSTB) issued the city’s first AI policy statement for financial services. The 12-page policy blueprint outlines a “dual-track” approach to promote responsible AI use in finance that considers both opportunities and risks.

To position Hong Kong as an AI innovation hub, the government has revealed plans to strengthen AI infrastructure, such as computing capacity and start-up funding. Reflecting this commitment, the policy statement notes that the Hong Kong University of Science and Technology will make InvestLM, its open-source large language model, and computing resources available to financial industry practitioners. It also references the mainland government’s “AI+” action plan, though without offering further detail on the potential for linkages between mainland China and Hong Kong on AI.

The statement encourages financial institutions to adopt a risk-based AI governance approach. While the recommendations are not binding, they signal upcoming regulatory scrutiny of key risks associated with AI use in finance: cybersecurity, data privacy, IP rights, bias, consumer protection, and operational resilience, amongst others.

The FSTB asserts that AI’s potential risks have been “suitably reflected” in regulations and guidelines by financial regulators, but also previews that key financial regulators will update their supervisory guidelines on AI. For instance, the Securities and Futures Commission (SFC) will issue a circular to licensed corporations regarding risks related to generative AI by November. The statement’s language suggests that Hong Kong will likely retain its pro-innovation approach, relying on tailored guidance from financial services regulators instead of horizontal, economy-wide legislation. It also suggests that Hong Kong’s approach to AI continues, for now, to diverge from mainland China’s greater emphasis on regulation, though the reference to the “AI+” action plan underlines the uncertainty over whether there will be greater convergence in future.

Australia: Government urged to avoid broad AI laws, tighten language on “high-risk” AI

On 4 October, the Australian Department of Industry, Science and Resources closed its consultation on proposed mandatory guardrails for “high-risk” AI, receiving over 300 responses. Several regulators offered views on the three proposed regulatory options for enforcing the guardrails (see our previous blog): 1) a domain-specific approach (adapting existing regulatory frameworks to incorporate the guardrails); 2) a framework approach (introducing high-level framework legislation); and 3) a new, comprehensive cross-economy AI-specific Act.

The four regulators comprising Australia’s Digital Platform Regulators Forum (DP-REG) raised varying levels of concern that a standalone AI-specific Act could result in overlap and duplication of regulatory efforts. For instance, both DP-REG itself and the Office of the Australian Information Commissioner (OAIC) indicated a preference for the framework approach, warning that an AI-specific law would cause fragmentation and public confusion.

Meanwhile, members of the tech industry pushed back against the proposed “high-risk” AI definitions. DIGI, a key Australian digital industry association, disputed the classification of all general-purpose AI (GPAI) models as “high-risk”, warning that this broad, catch-all definition would capture low-risk applications and impede AI innovation. Cautioning the Government against following the EU’s broad AI law, it noted that how the EU AI Act will operate alongside existing sectoral and horizontal regulations remains unclear.

However, there were also influential voices in support of economy-wide regulation, notably from human rights organisations, media organisations, advocacy groups and academic institutions. The Media, Entertainment and Arts Alliance (MEAA), the union for Australia’s creative professionals, argued that Australia needs a comprehensive AI Act and an independent AI regulator to align AI regulation with existing workplace, copyright and privacy regimes and protect workers and consumers.

Ahead of the election due in the first half of 2025, the Labor government and the opposition Coalition are competing to take a tough stance against large tech companies, particularly in the context of showdowns with Meta over the news media bargaining code and with X over its approach to content moderation. The dynamics of the election campaign could see each party commit to introducing wide-ranging AI regulation rather than conducting a full and sober review of the consultation responses. Ultimately, Australia is likely to become one of the leading advocates within the Asia-Pacific region for stricter AI rules, with its approach potentially influencing other countries in the region in the years to come.


This newsletter was written by Abigail Chen, Ewan Lusty and David Skelton, with the input of Flint’s Senior Adviser network based across Asia-Pacific. Abigail, based in Hong Kong, has experience in technology and financial policy from her prior roles in a law firm and think tanks. Ewan and David are both based in Singapore. Ewan has been supporting Flint clients on a range of digital policy issues for over four years, prior to which he advised UK government ministers on online safety regulation. David spent seven years at Google, most recently working within Asia-Pacific.
