LAWS AND REGULATIONS - ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems: intelligence demonstrated by machines rather than the natural intelligence displayed by humans or animals. AI applications include advanced web search engines, recommendation systems (used by YouTube, Amazon and Netflix), speech understanding (such as Siri or Alexa), self-driving cars (e.g., Tesla), and systems that compete at the highest level in strategic games.

The artificial intelligence industry is growing at an incredible speed. Nations around the world are competing to win the ‘AI race’, and companies are investing billions of dollars to secure the largest market share. Simulations suggest that by 2030 about 70% of companies will have adopted some form of AI technology. Whether modelling climate change, selecting job candidates or predicting whether someone will commit a crime, AI can replace humans and make more decisions, faster and more cheaply.

We need to regulate AI for two reasons. First, governments and companies use AI to make decisions that can have a significant impact on our lives. Second, whenever someone makes a decision that affects us, they must be accountable to us. Human rights law sets out minimum standards of treatment that everyone can expect, and it gives everyone the right to a remedy where those standards are not met and harm results. Governments are supposed to ensure that those standards are upheld and that anyone who breaks them is held accountable - usually through administrative, civil or criminal law.

AI systems that produce biased results have been making headlines. One well-known example is Apple’s credit card algorithm, which has been accused of discriminating against women, triggering an investigation by New York’s Department of Financial Services. A study published in Science showed that risk prediction tools used in health care, which affect millions of people in the United States every year, exhibit significant racial bias.
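
To make the idea of a biased risk tool concrete, here is a minimal sketch of the kind of group-level check an auditor might run. The data, groups, and flagging threshold below are invented for illustration; a real audit would use the model's actual predictions and clinically meaningful outcomes.

```python
# Hypothetical audit sketch: does a risk score flag genuinely high-need
# patients at the same rate across demographic groups? Data are synthetic.

from collections import defaultdict

# Each record: (group, risk_score, needed_care) -- toy data for the sketch.
records = [
    ("A", 0.82, True), ("A", 0.40, False), ("A", 0.75, True), ("A", 0.30, False),
    ("B", 0.55, True), ("B", 0.35, True), ("B", 0.60, True), ("B", 0.25, False),
]

THRESHOLD = 0.5  # patients scoring above this are flagged for extra care

def flag_rates_by_group(rows, threshold):
    """Share of genuinely high-need patients flagged, per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, score, needed_care in rows:
        if needed_care:
            totals[group] += 1
            if score >= threshold:
                hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

rates = flag_rates_by_group(records, THRESHOLD)
print(rates)  # e.g. {'A': 1.0, 'B': 0.67} -- group B's high-need patients are under-flagged
```

A gap like the one printed above, where one group's high-need patients are systematically under-flagged, is exactly the pattern the Science study reported.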

Another study, published in the Journal of General Internal Medicine, found that the software used by leading hospitals to prioritize recipients of kidney transplants discriminated against Black patients. Applying AI to mental-health diagnosis, where the relevant factors can be behavioral, hard to define and case-specific, would probably be inappropriate; it is difficult for people to accept that machines can process highly contextual situations.

Companies with AI-based products and services should understand the changing regulatory landscape. Google, Microsoft, BMW, and Deutsche Telekom are all developing formal AI policies with commitments to safety, fairness, diversity, and privacy. To follow the more stringent AI regulations that are on the horizon (mainly in Europe and the United States), companies will need new processes and tools: system audits, documentation and data protocols (for traceability), AI monitoring, and diversity awareness training.
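
As an illustration of what a traceability protocol could look like in practice, here is a minimal sketch of per-decision audit logging. The log_decision function, its field names, and the JSON-lines file format are all assumptions made for this example, not any specific regulatory requirement.

```python
# Hypothetical sketch: append one auditable record per automated decision,
# so a later audit can reconstruct which model version decided what, when.

import json
import hashlib
import datetime

def log_decision(log_file, model_version, inputs, output, operator):
    """Append one traceability record to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs, so the log avoids retaining
        # personal data while still allowing tamper-evident cross-checks.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "operator": operator,  # which service or team ran the system
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "credit-model-1.3.0",
             {"income": 52000, "tenure_months": 18}, {"approved": False},
             operator="loan-service")
```

Hashing the inputs is one design choice among several; some audit regimes may instead require retaining full inputs under access controls.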

INDIA

Currently, India has no codified laws, statutory rules or regulations, or even government-issued guidelines, that regulate AI per se. The closest obligations are set out in Sections 43A and 72A of the Information Technology Act 2000, which protect personal data, and in the rules and regulations framed thereunder. The NITI Aayog has proposed seven principles for responsible AI: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values. These principles are expected to safeguard the public interest and promote innovation through increased trust and adoption.

Of late, the Ministry of Electronics and Information Technology (MEITY) has constituted several committees and has released a strategy for the introduction, implementation and integration of AI into the mainstream. Much like the GDPR, Sections 43A and 72A of the Information Technology Act 2000 provide a right to compensation for unauthorized disclosure of personal information. In 2017, the Honorable Supreme Court held that the right to privacy is a fundamental right protected by the Indian Constitution. The New Education Policy places a strong emphasis on teaching coding to pupils as early as Class VI. In the coming years, India is well placed to serve as a center for cutting-edge AI technologies.

ABDM Ecosystem

The Indian healthcare system is heterogeneous, and the Ayushman Bharat Digital Mission (ABDM) under the National Health Authority (NHA) was launched with the aim of creating digitized records of all doctor-patient interactions. With the recent notification of new regulations on telemedicine, it has been acknowledged that AI may be used for evidence-based decision making. The ABDM is leveraging machine learning and AI to digitize healthcare records and to help build evidence-based healthcare-delivery tools.

CHINA

In terms of advancing artificial intelligence laws and regulations past the proposal stage, China has taken the lead. In March 2022, China approved a law regulating how businesses use algorithms in online recommendation systems, mandating that these services uphold moral and ethical standards, be accountable and transparent, and “disseminate positive energy.”

According to the law, businesses must notify users when an AI algorithm is used to decide what content to display to them and give them the option to opt out of being targeted. The law also forbids the use of algorithms that present consumers with different prices based on their personal data. We anticipate that, as AI legislation spreads around the world, it will reflect similar themes.
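
The disclosure-and-opt-out pattern the law describes can be sketched in a few lines. Everything below, from the recommend function to the toy item and user records, is hypothetical; it only illustrates notifying users of algorithmic targeting and honoring an opt-out with a non-personalized fallback.

```python
# Hypothetical sketch: a recommender that discloses algorithmic targeting
# and falls back to a chronological feed for users who have opted out.

def personalized_ranking(items, profile):
    # Stand-in for an algorithmic recommender driven by personal data.
    return sorted(items, key=lambda i: profile.get(i["topic"], 0), reverse=True)

def chronological_ranking(items):
    # Non-targeted fallback: newest first, no personal data used.
    return sorted(items, key=lambda i: i["published"], reverse=True)

def recommend(items, user):
    if user.get("opted_out_of_targeting"):
        return {"notice": None, "items": chronological_ranking(items)}
    return {
        # Users must be told an algorithm is choosing what they see.
        "notice": "Recommendations are selected by an algorithm. You can opt out.",
        "items": personalized_ranking(items, user["interest_profile"]),
    }

items = [
    {"topic": "sports", "published": 2},
    {"topic": "finance", "published": 1},
]
print(recommend(items, {"interest_profile": {"finance": 0.9, "sports": 0.1}}))
print(recommend(items, {"opted_out_of_targeting": True}))
```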

EUROPE

The European Commission proposed a regulation (the EU AI Act) in 2021 to harmonize AI rules. It takes a risk-based approach to controls on the use of AI systems, depending on each system's intended purpose. The EU AI Act proposes a sliding scale of rules that would classify AI applications as posing unacceptable, high, limited or minimal risk. The proposal will become law once the Council of the EU and the European Parliament agree on a common version. Negotiations are expected to be complex, with thousands of amendments already proposed by political groups in the European Parliament. Once adopted, the regulation will apply across the EU, possibly as early as 2024.
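
A rough sketch of how that sliding scale might look as data follows. The example purposes and their tier assignments are illustrative assumptions only; the Act's annexes define the actual categories and obligations in detail.

```python
# Hypothetical sketch of the EU AI Act's four-tier, purpose-based risk scale.
# The purposes listed and their tier assignments are illustrative only.

RISK_TIERS = {
    "social scoring by public authorities": "unacceptable",  # banned outright
    "remote biometric identification": "high",     # strict obligations
    "CV screening for recruitment": "high",
    "customer service chatbot": "limited",         # transparency duties
    "spam filtering": "minimal",                   # largely unregulated
}

def obligations_for(purpose: str) -> str:
    """Map an intended purpose to an (illustrative) obligation summary."""
    tier = RISK_TIERS.get(purpose, "unclassified")
    return {
        "unacceptable": "prohibited",
        "high": "conformity assessment, registration, monitoring",
        "limited": "disclose to users that they are interacting with AI",
        "minimal": "no specific obligations",
    }.get(tier, "assess against the Act's annexes")

print(obligations_for("CV screening for recruitment"))
```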

If adopted, the regulation would have significant consequences for companies that develop, sell or use AI systems. Those consequences include the introduction of legal obligations and a monitoring and enforcement regime with hefty penalties for non-compliance. Specifically, the regulation will require companies to register stand-alone, high-risk AI systems, such as remote biometric identification systems, in an EU database. Potential fines for non-compliance range from 2-6% of a company’s annual revenue.

The regulation has striking similarities to the General Data Protection Regulation (GDPR), which already carries implications for AI: Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects for individuals, unless the data subject has explicitly consented or the AI system meets other requirements.
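
One way to picture an Article 22-style safeguard is a gate in front of any solely automated decision with legal effect. The sketch below is hypothetical, with invented names such as credit_decision and human_review_queue; it simply shows routing to human review when explicit consent (or another lawful basis) is absent.

```python
# Hypothetical sketch: only return a fully automated decision when the data
# subject has explicitly consented; otherwise route to a human reviewer.

def credit_decision(application, model_score, consented_to_automation):
    suggestion = "approve" if model_score >= 0.7 else "decline"
    if consented_to_automation:
        # A solely automated decision is permissible; record the basis used.
        return {"applicant": application["id"], "decision": suggestion,
                "basis": "explicit consent", "automated": True}
    # No lawful basis for a solely automated decision: require human review.
    return {"applicant": application["id"], "decision": None,
            "routed_to": "human_review_queue", "model_suggestion": suggestion,
            "automated": False}

print(credit_decision({"id": 42}, model_score=0.64, consented_to_automation=False))
```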

UNITED STATES

Unlike Europe, where a comprehensive framework has been proposed, the United States has seen regulatory guidelines emerge piecemeal from several federal agencies as well as from state and local governments. Here are key U.S. developments in AI regulation, along with ways companies can avoid potential regulatory pitfalls.

Department of Commerce / National Institute of Standards and Technology

A flurry of AI-related activity has emanated from the Department of Commerce, including a move towards a risk-management framework.

In September 2021, the Department of Commerce established the National Artificial Intelligence Advisory Committee to offer recommendations on the “state of U.S. AI competitiveness, the state of science around AI, issues related to the AI workforce” and how AI can enhance opportunities for underrepresented populations, among other topics.

National AI Initiative Act

In January 2021, the National AI Initiative Act (U.S. AI Act) became law. It created the National AI Initiative, which provides “an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies.” The U.S. AI Act created offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. agencies including the Federal Trade Commission (FTC), the Department of Defense, the Department of Agriculture, the Department of Education, and the Department of Health and Human Services.

Algorithmic Accountability Act of 2022

If passed, the Algorithmic Accountability Act would require large technology companies to perform a bias impact assessment of automated decision-making systems in a variety of sectors, including employment, financial services, healthcare, housing, and legal services. Introduced in February 2022, the bill defines “automated decision system” to include “any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.” The bill is an updated version of the Algorithmic Accountability Act of 2019, which was never enacted.
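
To illustrate one possible ingredient of such a bias impact assessment, the sketch below compares selection rates across groups against the “four-fifths” rule of thumb used in U.S. employment contexts. The data are synthetic, and the bill itself does not prescribe this particular test.

```python
# Hypothetical impact-assessment sketch: compare per-group selection rates
# from an automated hiring screen against the four-fifths rule of thumb.

from collections import Counter

# Each record: (group, passed_screen) -- synthetic toy data.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

selected, totals = Counter(), Counter()
for group, passed_screen in decisions:
    totals[group] += 1
    selected[group] += passed_screen  # bools count as 0/1

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "REVIEW"  # flags possible disparate impact
    print(f"group {group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```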

Federal Trade Commission

The FTC made clear in 2021 that it will pursue the use of biased algorithms. It provided a roadmap for its compliance expectations, warning that companies should “keep in mind that if you don’t hold yourself accountable, the FTC may do it.”

The White House

In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a “Bill of Rights for an Automated Society.” It could cover topics like AI’s role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system.

Food and Drug Administration

The FDA regulates Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device, meaning software intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions. An action plan outlines how the agency intends to oversee the development and use of such software.

National Security Commission on Artificial Intelligence and Government Accountability Office (GAO)

The National Security Commission on Artificial Intelligence recommended in 2021 that the government protect privacy, civil rights, and civil liberties in its AI deployment. It notes that a lack of public trust in AI from a privacy or civil rights/civil liberties standpoint would undermine the deployment of AI to promote U.S. intelligence, homeland security, and law enforcement. The commission advocates for public sector leadership to promote trustworthy AI, which will likely affect how AI is deployed and regulated in the private sector.
