India’s Attempt to Regulate AI: Stifling Innovation or Necessary Oversight?

Is the new AI advisory a step towards a fair and just AI governance model?

The latest advisory issued by the Union Ministry of Electronics and Information Technology (MeitY) on March 1, 2024, asks AI platforms to seek government approval before deploying their models for public use.

In this article, I explore the implications of this advisory for platforms, users, and India’s techno-strategic vision in the global AI race.

Salient Points of the Advisory

While it was long in the making, the trigger for the latest advisory seems to be the recent Google Gemini row, in which the platform, responding to a query, described Prime Minister Narendra Modi as a fascist. In response, the Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, remarked that Gemini’s responses violated Rule 3(1)(b) of the IT Rules, 2021, which mandates that digital intermediaries not host content that is defamatory, libelous, violates laws, or threatens the unity and integrity of India.

While the advisory’s stated intent is to safeguard electoral integrity, it’s crucial to understand it in more detail.

1. Advisory for Social Media vs. AI Platforms

The new advisory introduces a set of rules aimed at AI platforms serving Indian users. The previous advisory, dated 23rd December 2023, aimed to prevent the spread of misinformation through social media platforms. However, content on the two kinds of platforms originates differently: social media hosts user-generated content, while AI platforms produce machine-generated content. Unlike social media platforms, AI platforms don’t host or share content; they merely generate and modify it.

The question of liability

This leads to the question of who’s liable for inaccurate information generated on AI platforms. China’s internet regulator, the Cyberspace Administration of China (CAC), has deemed it “silly” to require complete accuracy in the training data used to generate low/no-risk content, because most training data comprises social media data and public image databases, which are known to contain errors. The CAC doesn’t mandate absolute accuracy but calls for “effective measures” to ensure the accuracy of the training data.

Despite mandating that AI platforms uphold core socialist values while providing their services, China appears to have prioritized speed of innovation over bureaucratic process.

Moreover, it’s generally understood that the output generated by these platforms is probabilistic and, therefore, users have to validate the information for accuracy.

Given this, it’s difficult to lay the responsibility for factuality solely at the AI platform’s door. Users are partly responsible too for the information they disseminate.

2. Preserving the integrity of the electoral process

Given how deepfake technology could be used to sway elections and defame people, the apprehensions about AI abuse are valid.

As I said before, the government/regulator has to approach governance separately for each type of output generated by AI platforms. For example, the EU has identified four tiers of risk associated with AI systems, and rules around research, development, and deployment are made based on those risk assessments. Even the more lax and market-friendly US has empowered individual regulatory bodies to devise their own methods of AI governance. This ensures that matters of national importance and high-risk sectors are governed strictly enough to prevent harm, while leaving enough room for innovation. But do disclosures and disclaimers serve as adequate safeguards in AI systems?
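To make the risk-tiered idea concrete, the EU model can be thought of as a lookup from use case to obligation. The sketch below is a simplified illustration, not the legal text of the EU AI Act; the tier assignments and example use cases are assumptions chosen for the sketch.

```python
# Illustrative sketch of a four-tier, risk-based AI governance model
# (loosely modeled on the EU approach). Tier assignments below are
# simplified assumptions for illustration, not the actual legal text.
RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation"},
    "high": {"credit scoring", "recruitment screening", "medical diagnosis"},
    "limited": {"chatbot", "deepfake generator"},
    "minimal": {"spam filter", "video game ai"},
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment before deployment",
    "limited": "transparency and disclosure obligations",
    "minimal": "no new obligations",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; unlisted uses default to 'minimal'."""
    for tier, cases in RISK_TIERS.items():
        if use_case.lower() in cases:
            return tier
    return "minimal"

for uc in ["credit scoring", "chatbot", "weather forecasting"]:
    print(f"{uc}: {classify(uc)} -> {OBLIGATIONS[classify(uc)]}")
```

The point of such a structure is that strict pre-deployment checks attach only to the high-risk tier, while minimal-risk uses (which the EU says cover the vast majority of systems) face no new approval burden.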

Importance of consent and disclosure

Full disclosure through “consent pop-ups” would give users the freedom to choose a non-AI-based digital life. However, if our easy-going approach to marketing pop-ups is any indication, it’s likely that many users do not actually read them. All players in the AI ecosystem, across all streams, should fully disclose that their output is AI-generated and hence prone to mistakes. However, it may be too much to expect voluntary disclosures. Therefore, “consent pop-ups” might not be the most effective mechanism.

Metadata Disclosure Signal

The onus of disclosure should also be on users who share AI-generated content on other platforms. In my research on best practices for disclosing AI-generated content, I found three kinds of disclosure signals: behavioral, verbal, and technical. While most are not effective for text-based output, a few would be highly potent against deepfake videos and images: metadata, technical watermarking, and cryptographic signatures, which the recent advisory also calls for. These would discourage users from disseminating false and unlawful information. Large players like Anthropic, Inflection, Amazon.com, OpenAI, Meta, Alphabet, and Microsoft have “pledged” or “announced” that they are working on foolproof watermarking mechanisms; however, their application is yet to be seen.
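As a minimal sketch of the “cryptographic signature” signal: a platform can attach a keyed signature over the content plus its provenance metadata, so any downstream platform holding the key can detect tampering. This uses only Python’s standard hmac module; the field names and key handling are illustrative assumptions, not any vendor’s actual watermarking scheme (real schemes like C2PA use public-key signatures and embedded manifests).

```python
import hmac
import hashlib
import json

SECRET_KEY = b"platform-signing-key"  # hypothetical key held by the AI platform

def sign_metadata(content: bytes, model: str) -> dict:
    """Attach a provenance record with an HMAC over content + metadata."""
    meta = {"generator": model, "ai_generated": True}
    payload = content + json.dumps(meta, sort_keys=True).encode()
    meta["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return meta

def verify_metadata(content: bytes, meta: dict) -> bool:
    """Recompute the HMAC (excluding the signature field) and compare safely."""
    claimed = meta.get("signature", "")
    unsigned = {k: v for k, v in meta.items() if k != "signature"}
    payload = content + json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

img = b"...generated image bytes..."
meta = sign_metadata(img, "example-model-v1")
print(verify_metadata(img, meta))         # True: content untampered
print(verify_metadata(img + b"x", meta))  # False: content was altered
```

Even this toy version shows why such signals are potent against deepfakes: altering either the content or its “ai_generated” label invalidates the signature.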

3. Pre-deployment Government Approvals

The advisory asks all intermediaries that use under-testing or unreliable AI models, LLMs, generative AI, software, or algorithms to seek government approval before deploying AI platforms to users on the internet. Later, in a clarification on the microblogging site X, Mr. Chandrasekhar “clarified” that the advisory is aimed at significant and large platforms, not start-ups. The DPIIT (formerly DIPP) defines a startup as a private limited company, partnership firm, or LLP with turnover of less than INR 100 crore in any previous fiscal year and up to 10 years from incorporation. But this definition is specific to India, which raises the question of whether the advisory applies only to India-based AI companies. Moreover, large entities are crucial to India’s global competence in AI; placing the burden of AI innovation on the start-up community is just too much! And what are the “guidelines or rules” for non-Indian AI start-ups less than 10 years old but with more than INR 100 crore in revenue that operate in India? Let’s take a look at what other economies are doing vis-à-vis pre-deployment approvals.

China

The CAC has awarded two licenses to incumbents and three to startups, preserving the safety protections of a license-based system without losing the spur of innovation that comes from competitive startups. China’s AI licensing requirement applies only to companies providing generative AI services to the public, not to companies engaged in research and development or using AI for internal operations. The agency has also laid down specific guidelines on pre-deployment regulatory audits, which will help reveal whether the models underlying deployed apps have been registered.

EU

The EU has a more command-and-control style of regulatory approach, in which it asserts primacy over member states. Hence, the EU has proposed a risk-based approach that cuts across all sectors, graded by the level of threat AI systems pose to the safety, livelihoods, and rights of people. The EU will assess risk through a conformity assessment of all commercially active models before approving the deployment of AI systems for public use. It also notes that “the vast majority of AI systems currently used in the EU fall into this [minimal and no risk] category.”

UK

The UK approach to AI governance is pro-innovation and pro-business, though its AI policy vision is riddled with uncertainties about integration with existing legal frameworks. No new laws or regulatory bodies are being created; instead, the UK is passing responsibility to existing regulators such as the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA). A central AI risk function has been newly established to guide the regulatory agencies; however, its effectiveness is yet to be tested. Success will depend on this central body’s communication with other agencies and on the ability of sector-specific regulators to deeply comprehend and manage the impacts of AI.

While there is no explicit requirement for pre-deployment approvals, any failure of AI systems will be dealt with under existing regulations and laws.

US

The U.S. has traditionally favored a hands-off approach to AI regulation to foster innovation. An executive order on October 30, 2023, introduced robust measures to ensure AI safety, privacy, equity, and global leadership. Before this, federal guidelines were non-binding, and regulation was primarily industry-led or based on existing laws. Per the proposed AI Bill of Rights, AI “Systems should undergo pre-deployment testing, risk identification, and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.”

Some states had already begun to legislate on AI, focusing on employment and privacy, while federal agencies like the FTC and Department of Transportation managed AI issues within their domains. The U.S. is now transitioning to a more structured legal framework for AI.

Lessons for India?

A country as diverse as India cannot have a “one law fits all” approach to AI governance. Hence, pre-deployment approval rules should be modified to include sector-specific, risk-specific, and use-case-specific approaches. The advisory must specify clear guidelines for approving the deployment of AI systems and models, identify the assessment parameters and who will conduct the assessments, and outline the expected timelines for the process; this essential information is currently missing.

4. Reporting and Compliance

Fifteen days seems like very little time for the qualifying entities to submit their action-taken-cum-status report. This deadline may well be extended, like many other government deadlines; however, smaller intermediaries must stay prepared and seek legal counsel to determine the right course of action concerning the advisory and the upcoming legislation on AI governance.

India and the AI Global Race

According to the 2021 AI Global Vibrancy Index by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), India ranks third, right behind the US and China. Yet while everything looks rosy on the HAI index, India ranks 14th on the Global AI Index by Tortoise Media. The difference stems primarily from the indicators each assessment chooses to rank nations. Since more feedback means more opportunity to improve, I have chosen to study the Tortoise Media index for this article.

Experts fear that the licensing approach to AI governance outlined in the advisory will slow innovation, thanks to never-ending bureaucratic ordeals. Some argue that if India follows in the footsteps of the EU’s command-and-control approach, it will always lag behind the current superpower, the US. However, the chart shows five EU member states above India on the list; they must be doing something right. Despite having the second-best talent pool in the world, India massively lags in infrastructure, R&D, government strategy, and AI intensity.

In image: Countries are ranked by their AI capacity at the international level. This is the fourth iteration of the Global AI Index, published on 28 June 2023.

In a time like this, the new advisory will only hamper the growth of AI innovation in the country. For now, the kind of muscle power required to top this chart can largely be provided only by the bigger intermediaries; it’s a lot to expect India to ride the AI wave on the backs of the nation’s startups.

According to Rajeev Chandrasekhar, India aims to have a thoroughly discussed and debated AI governance framework ready for formal adoption by June-July this year. The government's focus will be on three key areas: fostering economic growth, identifying and mitigating potential risks and harms, and developing talent to secure global competitiveness. The recent advisory has indicated tighter regulations for platforms, potentially restricting innovation to some extent. However, there is an opportunity for adjustments as the legislation process, involving discussion and debate among various stakeholders, is expected to yield a balanced and equitable AI regulation framework.


End Notes:

  1. Centre says approval must for under-testing AI platforms before launch. (2024, March 3). India Today. https://bit.ly/43kzjvr
  2. Gemini AI’s reply to query, ‘is Modi a fascist’, violates IT Rules: Union Minister Rajeev Chandrasekhar. (2024, February 23). The Hindu. https://bit.ly/3ToKYWk
  3. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. (n.d.). https://www.meity.gov.in/. Retrieved March 5, 2024, from https://bit.ly/4bVHOAV
  4. D. (2023, December 28). MeitY issues advisory against AI-Deepfakes on social media | SCC Blog. SCC Times. https://bit.ly/3uXaTuX
  5. MacCarthy, M. (2023, October 19). The US and its allies should engage with China on AI law and policy. Brookings. https://bit.ly/4c36yqP
  6. A. (2024, February 26). Rival campaign consultant behind deepfake Biden AI call. Hindustan Times. https://bit.ly/3Tnr6Dc
  7. Emerging best practices for disclosing AI-generated content | Kontent.ai | Kontent.ai. (n.d.). Kontent.ai. https://bit.ly/3TnqfT6
  8. DIPP: Department of Industrial Policy and Promotion; DPIIT: Department for Promotion of Industry and Internal Trade
  9. DPIIT Startup Recognition & Tax Exemption. (n.d.). Startup India. Retrieved March 5, 2024, from https://bit.ly/3Ip2Gmw
  10. Hansen, U. S. (2023, September 15). Proposed AI Regulation: EU AI Act, UK’s Pro-Innovation, US AI Bill of Rights. https://bit.ly/49ZCg6x
  11. Greene, N. (2024, February 28). UK Versus EU: Who Has A Better Policy Approach To AI? Tech Policy Press. https://bit.ly/3T6QWKt
  12. L. (2024, February 6). Legal framework for artificial intelligence: What is the approach of the European Union, the United States and China? - Langlois lawyers. Langlois Lawyers. https://bit.ly/4a2lT9t
  13. Global AI Vibrancy Tool – Artificial Intelligence Index. (n.d.). https://bit.ly/3V8xaRd
  14. The Global AI Index - Tortoise. (n.d.). Tortoise. https://bit.ly/48GrBMT
  15. Desk, T. T. (2024, February 21). Centre working on draft AI regulation framework: Three things the government is focussing on. The Times of India. https://bit.ly/4a4vqfV

