Navigating Indonesia’s Emerging AI Regulations

Indonesia is a strategic country whose large population offers abundant opportunities for the utilization of Artificial Intelligence (AI). As the largest digital economy in Southeast Asia, Indonesia needs policies that keep pace with AI technology, which has already affected sectors such as telecommunications and finance. Indonesia currently has no specific regulation governing the development of AI or the restrictions and limitations on its use, such as the one already in place in Europe through the European Union Artificial Intelligence Act (the "EU AI Act"). Nonetheless, several existing regulations touch on the use of AI, including Law No. 11 of 2008, last amended by Law No. 1 of 2024, regarding Electronic Information and Transactions ("EIT Law"), Law No. 27 of 2022 regarding Personal Data Protection ("PDP Law"), as well as other sectoral provisions governing the ethical use of AI. Provisions governing the development, restrictions, prohibitions, and sanctions on the use of AI are important because the rapid development of AI technology brings a range of legal risks, and regulatory readiness is needed to protect the public interest in the use of AI.

To address these opportunities and challenges, the Indonesian government is developing a national artificial intelligence strategy that takes into account developments in other countries. In 2020, the Indonesian Agency for the Assessment and Application of Technology (Badan Pengkajian dan Penerapan Teknologi or "BPPT") issued the National Strategy on Artificial Intelligence 2020-2045 (Strategi Nasional Kecerdasan Artifisial Indonesia 2020-2045 or "Stranas KA"). Stranas KA is the product of discussions between BPPT and government institutions, universities, industry, and other stakeholders, and serves as the foundation and reference for the government in formulating laws and regulations related to the application of AI.

Current Related Regulations

Although 4 (four) years have passed since the issuance of Stranas KA and many companies have implemented AI technology in their production processes, to this day Indonesia still has no specific binding regulation on AI. However, because AI processes data and information automatically, it can be characterized as an "Electronic Agent" as regulated in the EIT Law. Under Article 1 point 8 of the EIT Law, an Electronic Agent is defined as a device of an Electronic System made to automatically perform an action on certain Electronic Information, which is operated by a Person. The phrase "made to automatically perform an action" describes AI technology that operates automatically, so the provisions governing Electronic Agents also apply to AI technology.

Furthermore, the EIT Law states that responsibility for the legal consequences of electronic transactions carried out through Electronic Agents is borne by the provider, that is, the party that operates the AI service. Further detail is provided in Government Regulation No. 71 of 2019 regarding the Implementation of Electronic Systems and Transactions ("GR 71/2019"), where Article 39 of GR 71/2019 implies that AI service providers must observe the following principles in implementing AI:

a. Prudence;

b. Security and integration of the Information Technology system;

c. Security controls for Electronic Transaction activities;

d. Cost-effectiveness and efficiency; and

e. Consumer protection in accordance with laws and regulations.

While the provisions of the EIT Law and GR 71/2019 can be applied to AI technology, they contain only general principles on Electronic Agents and no specific stipulations on AI or its scope.

In practice, AI technology used across industries requires large amounts of data (big data) to be processed in order to identify patterns and provide solutions to problems. These data-driven business processes create new risks that may not have been encountered before. In this context, on October 17, 2022, the government enacted the PDP Law (Law No. 27 of 2022 regarding Personal Data Protection), which aims to safeguard citizens' rights by protecting them from the misuse of personal data. Because personal data is among the types of data processed in the implementation of AI, responsibility for the acquisition and processing of personal data is also regulated in the PDP Law.

The responsibilities of AI service providers as personal data controllers are regulated in Chapter VI of the PDP Law. In processing personal data, the controller must have a legitimate basis for processing, such as explicit consent from the personal data subject, the fulfillment of obligations under an agreement, the performance of duties in the public interest, or other bases in accordance with statutory provisions. The controller must also protect and ensure the security of the personal data it processes, maintain its confidentiality, and protect it from unauthorized processing. In addition, the controller is obliged to conduct a data protection impact assessment where personal data is processed using automated decision-making that has legal consequences for the data subject.

A personal data controller that fails to comply with these obligations may face significant sanctions, including the administrative sanctions set out in Article 57 of the PDP Law:

1. a written reprimand;

2. temporary suspension of personal data processing activities;

3. erasure or removal of Personal Data; and/or

4. administrative fines of up to 2% (two percent) of annual income or annual revenue, depending on the violation variable.

In addition to administrative sanctions, Chapter XIV of the PDP Law also provides for criminal sanctions: any personal data controller who intentionally:

1. obtains or collects personal data that does not belong to them with the intent to benefit themselves or others, to the detriment of the personal data subject;

2. discloses and/or uses personal data that does not belong to them; or

3. creates false personal data or falsifies personal data with the intent to benefit themselves or others, to the detriment of others;

may be subject to sanctions in the form of imprisonment and/or a fine of up to Rp6,000,000,000 (six billion Rupiah). Where the personal data controller is a corporation, such as an AI service provider operating as a corporate entity, the fine may be imposed at up to 10 (ten) times that maximum. Corporations may also face additional criminal sanctions, including confiscation of profits obtained from the criminal act, suspension of business, revocation of licenses, and dissolution of the corporation. Where the act is committed by a corporation, criminal sanctions may also be imposed on its management, controlling parties, those giving orders, and beneficial owners if it can be proven that the criminal act was committed by those responsible.
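For a sense of the magnitudes involved, the short sketch below works through the maximum sanctions described above. It is illustrative only: the 2% administrative cap is applied to a hypothetical annual revenue figure, and the actual base, assessment, and applicable penalty in any case are determined under the PDP Law by the relevant authorities.

```python
# Illustrative sketch only: rough arithmetic for the maximum sanctions described
# above. The example revenue figure is hypothetical, and the actual base and
# assessment of any fine are determined under the PDP Law by the authorities.

ADMIN_FINE_CAP_RATE = 0.02              # administrative fine capped at 2% of annual income/revenue
CRIMINAL_FINE_CAP_IDR = 6_000_000_000   # maximum criminal fine of Rp6,000,000,000
CORPORATE_MULTIPLIER = 10               # corporations: up to 10x the maximum criminal fine


def max_administrative_fine(annual_revenue_idr: float) -> float:
    """Upper bound of the administrative fine for a given annual revenue."""
    return ADMIN_FINE_CAP_RATE * annual_revenue_idr


def max_corporate_criminal_fine() -> int:
    """Upper bound of the criminal fine where the offender is a corporation."""
    return CORPORATE_MULTIPLIER * CRIMINAL_FINE_CAP_IDR


if __name__ == "__main__":
    revenue = 500_000_000_000  # hypothetical annual revenue of Rp500 billion
    print(f"Max administrative fine: Rp{max_administrative_fine(revenue):,.0f}")  # Rp10,000,000,000
    print(f"Max corporate criminal fine: Rp{max_corporate_criminal_fine():,}")    # Rp60,000,000,000
```

On the hypothetical figures above, the exposure for a corporation can reach tens of billions of Rupiah, which is why compliance with the PDP Law's processing obligations matters for AI providers.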

Although it does not specifically regulate AI, the PDP Law is the main foundation for regulating personal data processing activities in Indonesia and was enacted to fulfill citizens' rights to the protection of their personal data in accordance with statutory provisions. In its implementation, the government is still giving personal data controllers time to adjust to the provisions of the PDP Law, as the law is still in its transition period, which ends in October 2024, when the PDP Law becomes fully enforceable.

Ethical Guidelines for Artificial Intelligence

As an implementation of Stranas KA, in December 2023 the Financial Services Authority ("OJK") and the Minister of Communication and Information Technology ("MOCI") issued ethical guidelines for artificial intelligence: OJK issued the Guideline for Responsible and Trustworthy Artificial Intelligence (AI) Code of Conduct in the Financial Technology Industry ("OJK AI Guideline"), and MOCI issued Circular Letter No. 9 of 2023 on Ethics of Artificial Intelligence ("MOCI CL 9/2023"). In principle, both the OJK AI Guideline and MOCI CL 9/2023 explain that AI ethical guidelines are needed to ensure the effective implementation of AI technology and to minimize the risk of harm caused by AI in the social, economic and financial, and national defense spheres. For reference, here is a comparison between the OJK AI Guideline and MOCI CL 9/2023:


These ethical guidelines were issued in the hope that every electronic system provider, both public and private, will govern the use of AI appropriately, including in making decisions that affect the wider community.

Following the issuance of MOCI CL 9/2023 and the OJK AI Guideline, on 27 May 2024 MOCI began collaborating with the United Nations Educational, Scientific and Cultural Organization ("UNESCO") to implement the Artificial Intelligence Readiness Assessment Methodology ("AI RAM") for Indonesia, as announced in Press Release No. 369/HM/KOMINFO/05/2024. The cooperation is intended to measure Indonesia's readiness to implement AI ethically and responsibly and to develop a comprehensive AI governance framework.

AI RAM is an assessment tool designed to help countries understand how ready they are to implement AI ethically and responsibly for all their citizens. It consists of 5 (five) dimensions: Legal and Regulatory; Social and Cultural; Economic; Scientific and Educational; and Technological and Infrastructural. Each dimension is divided into sub-categories containing qualitative and quantitative indicators for an integrated assessment. The end result is a report providing a comprehensive overview of the country's readiness, summarizing where the country stands on each dimension, detailing ongoing initiatives, and describing the state of the art. The report helps identify the institutional changes required to elaborate or strengthen a National AI Strategy, allowing UNESCO to tailor capacity-building efforts to the specific needs of different countries to ensure the ethical design, development, and use of AI. In the press release, the government stated that it expects the assessment to be completed by September 2024.
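To make the structure described above concrete, the sketch below models the five AI RAM dimensions as a simple nested data structure. It is purely illustrative: the dimension names follow the press release, but the indicator fields and the averaging are assumptions made for illustration, not UNESCO's actual sub-categories, indicators, or scoring methodology.

```python
# Illustrative sketch only: a minimal data model for the five AI RAM dimensions.
# The indicator fields and the simple averaging below are assumptions for
# illustration, not UNESCO's actual methodology.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Indicator:
    name: str                                   # hypothetical indicator name
    quantitative_score: Optional[float] = None  # e.g. a 0-100 metric, if measured
    qualitative_note: str = ""                  # narrative evidence


@dataclass
class Dimension:
    name: str
    indicators: List[Indicator] = field(default_factory=list)

    def average_score(self) -> Optional[float]:
        """Average the quantitative indicators that have been scored, if any."""
        scores = [i.quantitative_score for i in self.indicators
                  if i.quantitative_score is not None]
        return sum(scores) / len(scores) if scores else None


# The five dimensions named in the press release.
AI_RAM_DIMENSIONS = [
    Dimension("Legal and Regulatory"),
    Dimension("Social and Cultural"),
    Dimension("Economic"),
    Dimension("Scientific and Educational"),
    Dimension("Technological and Infrastructural"),
]
```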

Transition from Ethical Guidelines to Binding Rules

Currently, Indonesia relies on ethical guidelines to govern the use of AI. As mandated by Stranas KA, MOCI is now working on legally binding regulations, recognizing the need for more robust measures. The government plans to issue a ministerial regulation as a stepping stone towards comprehensive AI regulation. However, the timeline remains unclear, given that Indonesia has just held a presidential election, which will affect government positions and the regulatory process.


Author: Ichsan Perwira Kurniagung ([email protected]) & Andreas Christian Hamonangan Panggabean ([email protected])

