Governance and Artificial Intelligence: Navigating the Regulatory Landscape – An Overview

INTRODUCTION:

Artificial intelligence is terraforming our world at an unprecedented pace. From grading essays to powering self-driving cars, AI is already being used in a wide variety of ways, and the recent surge in AI applications has significantly impacted daily life for everyone from students to working professionals. Yet, while immensely useful, AI brings its own set of challenges.

AI refers to systems and products powered by advanced software that can perform computationally intensive tasks without the need for extensive dedicated hardware, handling a wide range of day-to-day tasks in the digital world. Unlike traditional products, AI is unique in its ability to evolve continuously: the more data it receives, the more it improves. This rapid evolution, however, raises several concerns. One major issue is the vast amount of data being fed into these systems. Consider ChatGPT, a popular AI chatbot that provides information up to January 2024, with premium users receiving more current data; the same pattern holds for many other AI bots. Issues such as intellectual property rights (IPR), privacy, and the lack of proper regulatory frameworks become increasingly pressing as AI continues to advance. The purpose of regulation, after all, is to maintain a minimum standard of behaviour by actors (in this case, AI) that are otherwise free to behave in countless ways.

AI did not emerge overnight; it has evolved through continuous development. The reason it has gained so much attention recently is the advancement of machine learning, which enables AI not only to perform the tasks specified in its code but also to surpass those limitations by evolving its own dataset without assistance. This progress highlights the need for regulations to ensure the technology is used safely and responsibly, and not exploited by its vast user base.

The functioning of artificial intelligence (AI) can be traced back to the early days of the calculator: it takes a query as input, processes the information using algorithms and chips, and delivers the processed data as an output, or "answer." However, not all AI operates in this manner. There are various types of AI, including deepfakes, chatbots, voice assistants, and the personalised AI features developed by the mobile phone industry. AI has the ability to learn from prior experience, gain confidence with new inputs, and execute desired activities. It was designed to reduce, if not eliminate, the need for heavy labour by developing software capable of reasoning over input and communicating output. AI aims to provide human-like interactions with software and to offer decision support for specific tasks.
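The query-in, answer-out loop described above can be sketched in a few lines of illustrative Python. The lookup table and function name here are purely hypothetical, standing in for the algorithms a real system would use:

```python
# Hypothetical sketch of the input -> processing -> output loop described above.
def answer(query: str) -> str:
    # "Processing": a toy lookup table standing in for the algorithms
    # and chips that a real AI system would use to compute a response.
    knowledge = {
        "capital of france": "Paris",
        "2 + 2": "4",
    }
    return knowledge.get(query.lower().strip(), "I don't know yet.")

print(answer("Capital of France"))  # -> Paris
print(answer("weather today"))      # -> I don't know yet.
```

A real AI system replaces the fixed table with a learned model, but the overall shape of the loop is the same.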

Future capabilities of AI:

AI is a self-learning machine with the capacity to adapt and learn new things based on the queries users submit, the amount of information it gathers, and the sources it draws that information from. It is already an advanced technology that is having a major impact across most sectors. A few examples:

  • Automotive – Tesla cars use self-driving AI that can select a route on its own based on the owner's driving behaviour.
  • Face recognition – the most common example is the LiDAR scanner in the iPhone, which uses AI to register the owner's face and unlock the device upon recognising it, drawing on deep learning and data ingestion.
  • Chatbots – the recent wave of chatbots collects data based on the queries users provide.

The future capabilities of AI could be either beneficial or unimaginably terrible, for example:

1) They become self-aware

2) AIs will take control over human lives

3) AIs will cause an apocalypse

4) Humans will become slaves to AIs

Hence arises the need to regulate AI, so that it proceeds along a beneficial path and human safety is ensured.

RESEARCH PROBLEM:

  1. The need for a proper regulatory statute for the governance of AI.
  2. Should AI be considered a product, or should it be granted the status of a legal personality?


  • The need for a proper regulatory statute for the governance of AI:

The need for proper regulations arises because AI evolves on a day-to-day basis. When an incident occurs somewhere in the world, AI assimilates that information into its servers, making it one of the most knowledgeable and up-to-date entities in existence. It is often compared to a supercomputer, capable of performing tasks with even greater efficiency. However, a significant issue arises from this unrestricted influx of information. Consider ChatGPT or Gemini: they source information from public data, whether accurate or not, or from data specifically fed to the model. This poses risks such as the potential misuse of personal data, unintended exposure of sensitive information, and the amplification of privacy disparities through algorithmic biases.

In a move that surprised many, India's Minister of Electronics, IT, and Telecom, Ashwini Vaishnaw, reiterated in April 2023 the government's decision to refrain from legislating on the growth of Artificial Intelligence (AI). This stance distinguishes India from the global trend, where many countries are rushing to establish frameworks for this rapidly evolving technology.

Since 2016, 31 nations have passed at least one AI-related bill, but India remains unyielding. The European Union recently approved the ambitious "EU Artificial Intelligence Act," while the USA and Singapore have announced plans for an "AI Bill of Rights" and a "Model AI Governance Framework," respectively. Meanwhile, several countries, including Italy, China, Russia, and Iran, have banned ChatGPT or other AI products over privacy violations.

The European AI Act aims to enhance Europe's role as a global leader in AI by promoting its industrial use while ensuring that AI technologies adhere to European values and regulations. The legislation employs a 'classification system' to assess the potential risk an AI technology may pose to the health, safety, or fundamental rights of individuals. This system categorises AI technologies into four risk levels: unacceptable, high, limited, and minimal. Accordingly, the Act imposes varying degrees of checks and balances based on the identified risk level.

For example, the Act bans AI systems that employ subliminal techniques or exploit the vulnerabilities of specific groups, as these pose an unacceptable risk [Article 5 of The Artificial Intelligence Act]. High-risk AI systems are subject to ex-ante conformity assessments and other stringent requirements [Article 6 of The Artificial Intelligence Act]. In contrast, AI systems deemed to pose a low or minimal risk are only subject to transparency obligations under the Act [Article 52 of The Artificial Intelligence Act].
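As a rough illustration of how such a tiered regime can be encoded, the sketch below maps the four risk levels named in the Act to the obligations mentioned above. The category names track the text, but the mapping itself is a simplification for illustration, not a statement of the Act's actual legal tests:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. subliminal techniques (Art. 5) - banned
    HIGH = "high"                  # ex-ante conformity assessment (Art. 6)
    LIMITED = "limited"            # transparency obligations (Art. 52)
    MINIMAL = "minimal"            # no additional obligations

# Simplified mapping of each tier to the obligations discussed in the text.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited outright",
    RiskLevel.HIGH: "ex-ante conformity assessment and strict requirements",
    RiskLevel.LIMITED: "transparency obligations",
    RiskLevel.MINIMAL: "no additional obligations",
}

def obligation_for(level: RiskLevel) -> str:
    """Look up the regulatory consequence attached to a given risk tier."""
    return OBLIGATIONS[level]

print(obligation_for(RiskLevel.HIGH))
# -> ex-ante conformity assessment and strict requirements
```

The design point of such a classification system is that the burden scales with risk: the regulator's job shifts from approving every system to assigning each system a tier.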


  • Should AI be considered a product, or should it be granted the status of a legal personality?

A product encompasses any item or service offered to fulfil a customer’s need or desire. These products can be tangible or intangible. Tangible products include durable goods (such as automobiles, furniture, and electronics) and nondurable goods (such as food and beverages). Intangible products provide services or experiences (such as education or digital media). Additionally, products can be hybrids of these categories, such as a kitchen appliance accompanied by a mobile app.

AI, like ChatGPT, fits the requirements of a product, serving as a virtual service to meet user needs. However, unlike traditional products, AI possesses a unique characteristic: it has a mind of its own, thanks to the power of machine learning. Machine learning, a subset of artificial intelligence, enables machines or systems to autonomously learn and improve from experience. Instead of relying solely on explicit programming, machine learning algorithms analyse extensive datasets, identify meaningful patterns and insights, and make informed decisions. This capability renders artificial intelligence remarkably human-like in its capacity to adapt and evolve based on experience.
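The "learning from data" idea can be shown with a deliberately tiny example: instead of hard-coding a rule, the program infers one (here, a single threshold) from labelled examples. This is an illustrative toy, with made-up data, not any particular product's algorithm:

```python
# Learn a spam threshold from labelled examples instead of hard-coding it.
# Each example pairs an exclamation-mark count with a human-provided label.
examples = [
    (0, "ham"), (1, "ham"), (2, "ham"),    # few exclamation marks -> ham
    (5, "spam"), (6, "spam"), (8, "spam"), # many exclamation marks -> spam
]

def learn_threshold(data):
    """Pick the midpoint between the highest 'ham' and lowest 'spam' count."""
    max_ham = max(x for x, label in data if label == "ham")
    min_spam = min(x for x, label in data if label == "spam")
    return (max_ham + min_spam) / 2

def classify(count, threshold):
    return "spam" if count > threshold else "ham"

t = learn_threshold(examples)  # threshold inferred from the data, not coded by hand
print(classify(7, t))          # -> spam
print(classify(1, t))          # -> ham
```

Feeding the program more labelled examples shifts the learned threshold, which is the sense in which such a system "improves with more data"; real machine-learning models do the same with millions of parameters instead of one.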

AI as a product could be:

  • AI as a product: companies are updating their offerings with AI elements. A good example is Amazon's Alexa, the virtual assistant built into many Amazon products such as the Echo, Dot, Show, and Spot.
  • AI as a service: Microsoft offers Azure Machine Learning, and Google Cloud Platform has its Cloud Machine Learning Engine.
  • Machine learning: the use of algorithms to extract insights from data.
  • Natural language processing (NLP): the process of understanding human language.
  • Computer vision: the ability of computers to detect and classify visual information.

The concept of legal personality extends to any entity, including artificial intelligence (AI), that can perform the functions typically associated with human beings under the law. Granting legal personality to AI could establish a framework for regulating its behaviour, especially as AI systems become increasingly autonomous. Without legal status, it is unclear who should be held accountable if an AI system causes harm; with it, it becomes feasible to hold the AI itself accountable for its actions. There are at least two primary reasons why AI may be recognised as a legal person in the future as such systems become more intelligent and integral to society. Firstly, it provides a clear entity to hold responsible in the event of malfunction or harm, addressing accountability gaps created by factors such as speed, autonomy, and opacity. Secondly, recognising AI as a legal person ensures that there is someone to credit for positive outcomes.

An illustration of this trend is Saudi Arabia granting "citizenship" to the humanoid robot Sophia in 2017, and an online system with the persona of a seven-year-old boy being granted "residency" in Tokyo. These instances demonstrate the evolving nature of legal personality and its application to non-human entities such as AI.

The question of whether AI should be classified as a product or a legal person is complex and may vary depending on the specific circumstances and jurisdiction. In Indian law, for instance, only a "legal person" is considered competent to enter into a valid contract; the general rule thus far has been that AI does not qualify as, and has not been recognised as, a legal person. However, as AI technologies continue to advance and play a more prominent role in society, legal frameworks may need to evolve to address the unique capabilities and challenges posed by AI. The issue of legal personality for AI raises important questions about accountability, responsibility, and liability in cases where AI systems are involved in contractual agreements or other legal matters.

CONCLUSION:

The rapid advancement of artificial intelligence (AI) presents both opportunities and significant challenges. AI has already transformed various aspects of our lives, from education to transportation, showcasing its potential to revolutionise numerous industries. However, with this transformative power comes the need for robust regulations to ensure the responsible development and deployment of AI technologies. The recent surge in AI applications has raised concerns about privacy, accountability, and the ethical implications of AI-driven decision-making. Issues such as the misuse of personal data, unintended exposure of sensitive information, and algorithmic biases highlight the urgent need for regulatory frameworks to govern AI development and usage.

Countries around the world are grappling with these challenges and are taking steps to establish regulations tailored to the unique characteristics of AI. Initiatives such as the EU Artificial Intelligence Act and proposals for an AI Bill of Rights and a Model AI Governance Framework reflect the growing recognition of the importance of regulating AI to protect individuals' rights and promote ethical practices.

Furthermore, the question of whether AI should be considered a product or granted legal personality adds another layer of complexity to the regulatory landscape. While AI exhibits human-like capabilities and autonomy, granting it legal personality raises questions about accountability and liability. In the face of these challenges, it is imperative for policymakers, industry leaders, and stakeholders to collaborate and develop comprehensive regulatory frameworks that balance innovation with ethical considerations. By promoting responsible AI development and usage, we can harness the full potential of AI while safeguarding individual rights and societal values in the digital age.

There is a saying, "cut diamond with diamond," meaning a force can only be stopped by an equal counterforce. The same approach can be applied to regulating AI in real time. Governments should design codes whose incorporation is mandatory in any work on AI; these codes would function much as ordinary regulations do in governing human activities. It is also necessary for the regulatory body to remain vigilant so that it can keep pace with the standards it enforces. The government is further advised to establish a regulatory board consisting of people with professional expertise and knowledge of the behaviour of AI.
