Regulating AI and the Safe Harbor for Good Samaritans
There has been a buzz in tech and legal circles in India over the last couple of weeks, with the draft Digital India Act (DIA) expected sometime this month. India's Ministry of Electronics and Information Technology (MeitY) released a document in March 2023 to kick off consultative dialogues with industry on the proposed DIA. It is a fairly high-level document that provides context for the Act, the need to replace the IT Act 2000 and (hopefully) to create a nimble framework for governance of this space in the future. It also aims to cover the Personal Data Protection legislation and the National Data Governance Policy, and to provide penal code amendments for cybercrimes. Clearly, a broad-spectrum legislation is being envisaged.
The document reflects the government's concern about big tech's predilection for retaining its power by leveraging all means at its disposal. In talking about online safety and trust, the document refers to "user harm" and the need to provide safeguards against it. It mentions:
"Definition and Regulation of high-risk AI systems through legal, institutional quality testing framework to examine regulatory models, algorithmic accountability, zero-day threat & vulnerability assessment, examine AI based ad-targeting, content moderation etc."
The thinking around regulations (or "guardrails") for AI platforms continues to be fluid. Industry voices, notably those of Sam Altman (CEO of OpenAI) and Sundar Pichai (CEO, Alphabet), favour increased government involvement in regulating AI as it continues to evolve. Altman's blogpost talks about the need for greater coordination among those leading development efforts. He also makes a pitch for an international authority to be set up that can monitor developments in AI (governance of superintelligence), track the usage of resources, conduct safety audits and so on, something along the lines of the IAEA (International Atomic Energy Agency, the intergovernmental nuclear energy watchdog set up within the UN). Pichai also states in his article in the Financial Times that "..AI is not something one company can do alone". He makes the further strong point that "..AI is too important not to regulate, and too important not to regulate well".
Altman testified last month before the members of a US Senate subcommittee, where he made a fervent pitch for the regulation of AI. He made similar comments earlier this week speaking at Tel Aviv University, adding that "..it would be a mistake to go put heavy regulation on the field right now or to try to slow down the incredible innovation..". He has been clear that he would obey regulations, unlike some social media companies (emphasis mine).
It appears that some of the creators and proponents of AI platforms are trying to create a positive pitch to enable further investment in and development of their platforms: highlighting the technology's tremendous potential, simultaneously voicing concern about its ability to do harm if not used with the right intent, and therefore voluntarily seeking regulation, while warning at the same time that over-regulation can damage innovation! All of this seems a bit muddled to me at the moment. Altman is currently on a whirlwind global tour, presumably to further propagate his message (he is visiting Jordan, Qatar, the UAE and South Korea this week and is speaking at a sold-out event in New Delhi later today).
Earlier this year, Meta made its LLaMA (Large Language Model Meta AI) available to researchers under a non-commercial license, effectively making it open source. This move clearly seeks a departure from the more centralized approach taken by OpenAI, Google and Microsoft. It also ensures that it would be near impossible to have a coordinated approach to developing or regulating AI platforms.
The Indian government has indicated that a whole chapter in the new Digital India Act will be devoted to AI and other emerging technologies. The Minister of State for IT and Electronics also stated that "..if there is eventually a 'United Nations of AI' as Sam Altman wants, more power to it but that does not stop us from doing what is right to protect our digital nagriks (citizens) and keeping internet safe and trusted".
Meanwhile, a number of jurisdictions have already announced preparations for new legislation to govern or regulate AI platforms. Key among these is the EU (the EU AI Act). The US government has also taken steps by publishing an AI Bill of Rights and an AI Risk Management Framework, along with a roadmap to create a National AI Research Resource. The UK is reportedly considering setting up a London-based AI monitoring authority (presumably picking up on Altman's idea).
I believe that we will see a rash of litigation in this space in the next few years, with a triangular tug of war (if such a thing is possible) between regulators/governments, tech companies and users at the three corners.
It will also be interesting to see how the proposed regulatory frameworks will be created in different countries. Traditionally, some of the most active regulators (in my view) are in the fields of banking and finance, insurance, pharmaceuticals and aviation. Banking in particular has seen regulators evolve with the recognition that they are dealing with entities, and customers of those entities, that operate globally. This has led to remarkable collaboration across jurisdictions (for the most part) and an effort to harmonize the standards that global financial institutions have to adhere to while also meeting specific local requirements (regulators oversee global banks using supervisory colleges across jurisdictions).
Technology products cannot be contained by borders, which will perhaps be an argument for globally coordinated regulation. There will certainly be a large amount of local flavoring adopted in each country. I do not, for instance, expect that we will see anything close to Section 230 of the Communications Decency Act, 1996 (enacted by the US Congress) in many countries. This is also known as the "Protection for 'Good Samaritan' Blocking and Screening of Offensive Material". It is the section of the legislation that has provided a shield particularly for social media companies in the US, holding that they are not responsible for accurately screening content posted by users of their platforms. Recent cases in the US Supreme Court (Twitter, Inc. v. Taamneh and Gonzalez v. Google LLC) have demonstrated the might of the Section 230 shield.
The 26 words that created the internet: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Section 79 of India's IT Act 2000 provides a similar safe harbor for social media companies. This was modified in 2021 with amendments requiring social media platforms to identify the originator of a flagged message within 36 hours, conduct additional due diligence, and appoint designated officers for compliance, grievance handling and so on. Failure to comply with these rules results in the indemnity of Section 79 being taken away. The new DIA will presumably further refine this safe harbor clause.
The tech giants also have a long track record of successfully defending themselves from becoming subservient to regulators. They also drive a very large part of global economic activity, and a large part of their influence is derived from this.
In the list of the top 10 companies in the world by market cap, just five (Apple, Microsoft, Alphabet, Nvidia and Facebook) account for ~USD 8.6T, or 62% of the market cap of the top 10. To put that in perspective, in 2022 only the US and China had a GDP of over USD 10T (USD 20.89T and USD 14.72T respectively). Japan, at number 3 on the list, had a GDP of USD 5.06T.
The largest banks by market cap, in contrast? All of them together amount to USD 7.3T.
I expect the next 6-12 months will see a lot of progress on AI regulation globally, while industry continues in parallel to grow and launch new products and services. There will be conflicting forces at work to ensure that regulation is meaningful and prevents user harm. Whether this will stifle innovation remains to be seen. I also believe tech companies (and those that incorporate AI tech into their offerings) will be sensitive to user protection and the possible backlash they can face for crossing the line. Ultimately, it will not be regulations and laws alone that ensure corporations do the right thing for their users.
I hope that you enjoyed The Lateral View!
Look forward to connecting with you!
Every fortnight, I'll share my perspective on topics relating to technology, banking, insurance, capital markets, financial services, leadership etc. To make sure you don't miss an issue, if you haven't subscribed yet, just click the "Subscribe" button in the upper right corner above.
-Shrinath
Shrinath Bolloju is an independent management advisor. He has spent over 30 years in the banking and securities services industry, where he has worked in technology and operations and run businesses in various geographies across Asia, Europe and the Americas.