When the Law Comes First
Lina Albahouth
Commercial Contracts SPC at Saudi Air Connectivity Program (ACP)
Last May, a US Senate hearing was held with the CEO of OpenAI, a professor from NYU, and the chief privacy and trust officer at IBM, who testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law as the subcommittee studies possible rules for the use of artificial intelligence. The hearing covered several important points: Congress's previous practice in dealing with new technologies; proposals for regulating artificial intelligence at the national or international level; the effects of artificial intelligence on intellectual property and on American elections through the use of misinformation to influence public opinion; and, finally, how to mitigate the risks of artificial intelligence. In this article, I address the answers given to the senators and expand on some aspects that I studied in the Internet Law course during my master's degree. I then conclude with an example related to the leadership of the Kingdom of Saudi Arabia in the technology field at the international level.
It is often said that the law is driven by business. Personally, I sometimes disagree with this statement, and the US Senate hearing on the regulation of artificial intelligence is a confirming example. The CEO of OpenAI, the developer of the artificial intelligence tool ChatGPT, stated the necessity of having a legislative body; of a personal data protection law at the national level, since the United States currently has none; and, finally, of licenses issued by a competent authority, together with an independent auditor to verify that companies providing artificial intelligence services or products comply with the laws issued by that authority. All of these legal suggestions were endorsed by the New York University professor.
1- Previous practices of Congress dealing with new technologies:
Returning to the US Congress's practice with previous technologies, especially social media platforms: the basic rule is that the companies that own social platforms did not want legislation and would not have been satisfied with it; instead, they fortified themselves with an exception ensuring they would not be prosecuted for the work they carry out, provided that they publish usage policies, which users often agree to without even reading. For context, the law used to deal with the technology companies that own social media platforms is the Communications Decency Act of 1996. This Act was the US Congress's first attempt to prevent minors from accessing sexually explicit material on the Internet, in addition to prohibiting individuals from sending obscene or indecent material to anyone under 18 years old or displaying it in a manner plainly accessible to anyone under 18.[1]
The purpose of the Act was to control content on the Internet and to protect those under 18 years of age. Because the burden of censoring content is not a simple matter, the same law contained Section 230, which in essence provides that companies (platform owners) are not liable if anything insulting, aggressive, or offensive appears on their social media platforms; their direct justification is that they did not write or create the offensive content, the user did.[2] The US Supreme Court has addressed this through cases such as Gonzalez et al. v. Google LLC,[3] in which the plaintiffs claimed that YouTube encouraged terrorism by allowing ISIS clips on its platform. The defense's response was that the platform does not encourage but merely shares what users have uploaded, and that the algorithm used when searching for ISIS is the same algorithm used when searching for any other topic: whoever searches for cooking clips, for example, will be suggested more cooking clips, just as with ISIS clips.[4] In summary, Congress dealt with the then-new technology of social platforms by enacting Section 230, which gives technology companies immunity from suit for everything published on the social media platforms they own, on the grounds that a platform publishes content but does not itself produce it. American society has seen the results of leaving platforms unregulated in rising cyberbullying, suicide rates, and other harms. This experience has shaped Congress's current seriousness about legislating artificial intelligence.
Professor Gary Marcus of NYU added that Section 230 of the Communications Decency Act is not appropriate in this context: artificial intelligence differs from what social media platforms do in that artificial intelligence generates content while social media platforms merely publish it, and this is a differentiating characteristic.
2- Proposals for legislative tools with artificial intelligence at the national and international levels:
The senators discussed with the three witnesses many ideas about the tools envisioned for a law. The CEO of OpenAI suggested licenses based on specific standards issued by a competent authority, since no authority specialized in artificial intelligence yet exists. The chief privacy and trust officer at IBM did not support the idea of licenses, favoring instead simple adherence to standards: disclosing all of the data used to train the smart algorithm, the way the model is trained, and how the model performs, in addition to determining levels of risk and submitting an impact assessment report. Professor Marcus of NYU stressed the importance of review before any model is released for public use, the establishment of a responsible body, and, finally, financial support for the so-called "AI constitution," the goal of constitutional artificial intelligence being an algorithm trained on human principles and values. All three witnesses emphasized the necessity of a competent authority at the national or international level, especially since artificial intelligence processes and generates new data from existing data coming from all parts of the world, and the senators' view was consistent with an international organization setting a baseline and a common, unified understanding for all.
Side Notes:
The keenness of the CEO of OpenAI in requesting legislation was remarkable, and the senators expressed their surprise that, for the very first time, the private sector was asking to be regulated, since the response always repeated is that regulating technology limits innovation, progress, and speed. The body language and facial expressions of the CEO of OpenAI were apprehensive throughout the hearing. There are several possible justifications for this apprehension. It may stem from the danger of artificial intelligence, such that he does not want to bear the responsibility alone. It is also known that regulation confines the private sector to certain activities that cannot be exceeded; it is therefore possible that his keenness to have a law comes from the assurance that everyone in the sector, including his competitors, would be compelled to work within a certain scope, so that no company would get ahead of another and no single company could attract investors on its own.
3- Effects of artificial intelligence technology and mitigating its risks:
The senators and all three witnesses discussed the dangers of moving forward without limits or regulation of AI. They also addressed many of the implications of technologies that preceded AI, such as the impact of social media in spreading and fueling misinformation used in US elections. One senator asked an important question related to national security when he touched on the possibility of artificial intelligence giving specific commands to direct drones to a specific place and blow it up. The CEO of OpenAI responded that they would not train AI to do that, but the answer is yes, it is possible! As for the mechanism for mitigating risks, the chief privacy and trust officer at IBM stressed the necessity of adopting the principle of transparency and disclosure of everything that artificial intelligence technology companies do, as mentioned in point (2) above. Regarding intellectual property, the CEO of OpenAI confirmed that protected materials would not be used except with the approval of their owners, and that the tools that have been released, such as generating a song that resembles a famous song, are research tools intended to display the capabilities of artificial intelligence to the world.
Details about intellectual property[5]:
One of the senators raised the Napster case, named after the defendant company Napster, which operated a platform (website) for buying and selling songs in MP3 format among its users. What became clear is that Napster was copying songs when users uploaded them, a direct violation of intellectual property rights, and not, as Napster claimed, merely providing a tool or means for transferring songs. The point of the comparison here is that artificial intelligence poses similar challenges for intellectual property law.
4- The leadership of the Kingdom of Saudi Arabia in the technology field at the international level:[6]
The phrase "start where others left off" is often repeated. In the same spirit, the Kingdom of Saudi Arabia sought to strengthen and unify efforts in the field of technology by establishing the Digital Cooperation Organization together with twelve other countries. With its establishment, the organization became an independent intergovernmental entity concerned with achieving social prosperity and economic growth. Its vision is "A world where every country, business, and person has a fair opportunity to prosper in a cross-border and sustainable digital economy." Its mission is "Achieving social prosperity and growth of the Digital Economy by unifying efforts to advance digital transformation and promote common interests." Among its most important initiatives and programs are enhancing cross-border data flows, promoting market expansion for emerging companies, empowering digital entrepreneurs, and promoting digital inclusion among women, youth, and other disadvantaged populations. In addition to its role in exchanging knowledge and best practices among members to create optimal infrastructure, policies, legislation, and educational solutions within member states, so as to rapidly build inclusive and equitable digital economies in which all people, businesses, and communities can innovate and thrive, it also seeks to advance digital transformation agendas globally. It may therefore be appropriate for the organization to begin writing legislation governing the practice of artificial intelligence activities within the physical borders of member states. If an international organization establishes such a practice, other countries will follow, either by adopting the practice or by joining the organization.
[6] https://dco.org