ChatGPT, Open Source, and The Future of Law
The Indonesia Data Protection Law is a regulation that governs the collection, use, storage, and protection of personal data in Indonesia. The regulation is officially known as "Undang-Undang Nomor 11 Tahun 2008 tentang Informasi dan Transaksi Elektronik" or commonly known as the Electronic Information and Transactions Law (EIT Law).

The EIT Law was enacted on April 21, 2008, and it applies to both Indonesian individuals and foreign individuals who process personal data in Indonesia. The law covers a wide range of personal data, including but not limited to, name, address, phone number, email address, financial information, and biometric data.

Under the EIT Law, personal data must be collected, used, and stored for a specific and lawful purpose. The data controller must obtain consent from the data subject before collecting or processing their personal data, except for certain circumstances such as when the processing is required by law or to protect the vital interests of the data subject.

The EIT Law also requires data controllers to take appropriate security measures to protect personal data from unauthorized access, disclosure, or destruction. In case of a data breach, the data controller must report the incident to the affected data subjects and the authorities within 72 hours.

In addition to the EIT Law, Indonesia also has a number of sector-specific regulations that govern the use of personal data in specific industries such as healthcare, banking, and telecommunications.

Overall, the Indonesia Data Protection Law aims to protect the privacy of individuals and promote the responsible use of personal data by organizations operating in Indonesia.


The above text is derived from my direct interaction with ChatGPT (link: https://chat.openai.com/), an Artificial Intelligence (AI) chatbot that is designed for public use. You can use it by signing up for an account, reading all the terms and conditions, and typing your first inquiry to the bot. It was pretty easy and efficient when I used the application. However, as you can see from the answer, it cited an outdated regulation, as Indonesia now has the Personal Data Protection Law, enacted in 2022. So why is this application so special that, in some articles, lawyers and other professionals have expressed their concern?

It demonstrates that legal topics can be elaborated by the bot

When I tried the bot, it really emphasized the logical structure of a legal explanation. It explained the title of the regulation and its well-known referral name, the scope of the regulation, the obligations imposed on stakeholders, and finally the purpose of the law. Further, I was surprised that it mentioned the detailed data incident reporting requirement. I vividly remember that when I was a junior lawyer, any legal research I produced had to follow at least this same logical structure.

It proves that law can be automated, in some way

ChatGPT showed me that AI can be an efficient tool for understanding legal issues and obtaining faster legal research results. I had often heard that one of the obstacles to utilizing AI for legal work was the limited data source, which was the reason legal AI was considered the slowest-moving area of AI innovation. Now I can say that stigma has been broken. However, ChatGPT could not answer specific legal questions; for example, when I asked what the judges' opinion was in the "Jessica-Mirna Murder" case, it could not provide the expected response.

I have mentioned the two takeaways from using it; now I want to elaborate on the legal and ethical aspects of ChatGPT from an Indonesian law perspective.

First, AI is still not specifically regulated. Under the EIT Law and other regulations, AI is not properly defined. This creates confusion about how to respond when a publicly available AI like ChatGPT is used by society. This is why addressing AI policy has become an ongoing suggestion to the Ministry of ICT (Kominfo). The only AI-related regulation we can find is the Risk-Based Licensing Regulation, in which AI is defined under the KBLI.

Second, ChatGPT has declared that some of its answers may produce biased or harmful content. From an ethical standpoint, this risk must be mitigated before the technology is deployed. All AI developers must ensure that the risks are identified and that mitigation steps are in place to reduce potential damage, in both severity and likelihood.

Third, ChatGPT has mentioned that some information may not be accurate. Even though this is not stated in its formal terms and conditions, it may be considered a disclaimer. In my opinion, ChatGPT should add a notice that users must verify the information and report anything inaccurate or potentially biased, because the disclaimer itself cannot relieve the developer of liability. This is simply because the developer has the capacity to review its datasets and train the bot to provide accurate and unbiased information.
