AI Policy Making: The Challenges

While conversations around AI regulation are heating up in all the major countries of the world, I would like to take this opportunity to list some of the major challenges in drafting and enacting AI legislation.

Regulatory Lag: By the time a regulation is enacted, AI capabilities have advanced by light years, making the regulation outdated. We saw this firsthand when the EU had to publish fresh amendments to the EU AI Act after Generative AI came into the picture. When AGI and super-AGI arrive, regulations will need updating yet again. And when a new computing paradigm such as quantum computing is adopted, AI may become even more powerful and widely adopted.

Example: In enacting the Stored Communications Act in 1986, the US Congress provided substantially less privacy protection for emails more than 180 days old. At that time, storage was very costly and data retention was a challenge, so protection for communication data beyond 180 days was not considered essential. Now storage is not an issue, and we get 15 GB of space free just for creating a Gmail account.

One new AI law vs. existing laws with AI caveats: In some scenarios, extending and modifying existing industry- and sector-specific laws with AI provisions would be more effective than a single AI regulation. For example, to tackle recruitment bias we can update existing labour laws, and to tackle discriminatory lending we can augment the Fair Housing Act to cover the outcomes of AI-powered decision making. These factors need to be assessed thoroughly, and regulators need to chart out the right course of action.

Difference between standards and regulations: Policies and standards are different, and we should demarcate clearly what falls under the purview of each. Industry standards such as ISO, IEC and CMMI describe best practices and provide a benchmark that organizations can be audited and certified against. These can go into depth and give detailed guidelines on AI quality: how to prepare datasets, what documentation to maintain, how data and models should be tested, what properties AI models should have, and so on. They are not enforced, but organizations adhere to them because they serve as a baseline to build and improve on. Regulations, by contrast, should focus on the hard do's and don'ts, and should be enforced strictly. This distinction is hard for a technology like AI, because it is not just one technology but a constellation of capabilities being applied to diverse domains.
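To make the distinction concrete, here is a minimal sketch, in Python, of the kind of model documentation record a standard might ask an organization to maintain and be audited against. The field names and values are my own illustration, not drawn from any specific ISO, IEC or CMMI document:

```python
# A minimal sketch of standards-style model documentation.
# Field names are illustrative, not taken from any real standard.
from dataclasses import dataclass, field


@dataclass
class ModelDocumentation:
    model_name: str
    intended_use: str                     # what the model is approved to do
    training_data_summary: str            # provenance and known gaps in the data
    evaluation_metrics: dict[str, float]  # e.g. accuracy, fairness scores
    known_limitations: list[str] = field(default_factory=list)
    last_audit_date: str = "unknown"      # when the record was last reviewed


doc = ModelDocumentation(
    model_name="credit-scoring-v2",
    intended_use="Rank loan applications for manual review only",
    training_data_summary="2018-2023 loan outcomes; under-represents new borrowers",
    evaluation_metrics={"auc": 0.81, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for small-business loans"],
)
print(doc.model_name, doc.last_audit_date)
```

Even this small example shows why standards can afford this level of detail while regulations cannot: the useful fields vary enormously by model and domain.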

Global coordination and the rising cost of compliance for companies: AI technologies operate on a global scale, making it challenging to regulate them within the confines of national borders. International collaboration is necessary, but achieving global consensus on AI policies is difficult due to varying cultural, ethical, and economic perspectives. Companies like Infosys that are spread across borders and serve customers across borders will have to comply with multiple laws across geographies. Unless those policies are in sync with each other, the result will be a huge cost of compliance.

Defining and Understanding AI: AI encompasses a wide range of technologies and applications, from simple algorithms to complex machine learning models. Defining what constitutes AI for regulatory purposes is challenging, and the lack of a standardized definition can lead to loopholes and inconsistencies in regulations.

The bespoke nature of AI use-cases makes it hard to regulate: Different industries use the same AI models differently, depending on their business needs and the kind of data they have. A financial services company, a retail company, and a healthcare company each use AI for their own specific purposes. The model might be the same, but the way it is applied and built can be very different. For example, a retail company could use an AI model to recommend which chocolates a customer should buy, while the same type of model could recommend stocks and financial products, or even treatments in healthcare. The last two use-cases are far more critical than the first. While the EU AI Act does a nice job of tackling this by segregating use-cases into different risk buckets, it does not cover all possible risks for all industries, and sometimes falls short of defining and classifying risk clearly. A policy cannot address all the nuances of every industry, business function and domain, so it is better to define the outcomes that are expected and not expected, and leave enforcement to the enterprises. The policy should not be highly prescriptive of detailed technical and governance requirements at a granular level.
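As a rough illustration of risk bucketing, the sketch below maps the chocolate, stocks and treatment examples onto tiers in the spirit of the EU AI Act's risk categories. The tier assignments are my own illustration, not a legal reading of the Act:

```python
# A sketch of use-case risk tiering: the same recommender architecture
# lands in different tiers depending on the domain it is applied to.
# Tier assignments below are illustrative only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, human oversight"
    LIMITED = "transparency obligations, e.g. disclose AI involvement"
    MINIMAL = "no specific obligations"


USE_CASE_TIERS = {
    "retail: chocolate recommendations": RiskTier.MINIMAL,
    "finance: stock and product recommendations": RiskTier.HIGH,
    "healthcare: treatment recommendations": RiskTier.HIGH,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case} -> {tier.name}: {tier.value}")
```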

The complex nature of the AI value chain makes it hard to regulate: Policy makers should keep in mind that AI projects are developed by multiple agencies; seldom is a project done end to end by one company alone. Some of the entities involved are:

  • the original provider of the model (open source or commercial)
  • the compute provider and SDK provider
  • the deployer and system integrators
  • the data provider, which might be the organization that uses it

A one-size-fits-all policy will not work; obligations need to be framed for each role and entity separately. A proper accountability matrix is needed covering all stakeholders. Any policy that is framed must be clear on accountability and compliance requirements for each of these roles, instead of bucketing all the personas under a single umbrella.
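A minimal sketch of what such an accountability matrix could look like in code, with roles and obligations that are purely illustrative placeholders:

```python
# A sketch of an accountability matrix: each role in the AI value chain
# maps to its own obligations instead of one undifferentiated bucket.
# Roles and obligations are illustrative placeholders.
ACCOUNTABILITY_MATRIX = {
    "model_provider":   ["document training data", "publish known limitations"],
    "compute_provider": ["secure the infrastructure", "log access"],
    "deployer":         ["run pre-deployment risk assessment", "monitor outputs"],
    "data_provider":    ["ensure lawful basis for data", "maintain data quality"],
}


def obligations_for(role: str) -> list[str]:
    """Look up the compliance obligations attached to a single role."""
    return ACCOUNTABILITY_MATRIX.get(role, [])


print(obligations_for("deployer"))
```

The point is not the code itself but the shape: compliance questions become answerable per role, rather than per "AI project".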

The concepts of fair use and intellectual property can be hard to interpret: Complete clarity is needed on the regulatory obligations of entities that use IP-protected data to train their models: how they should disclose it, what kind of IP they can use, whether the permission of the entity owning the IP is needed, and so on. In particular:

  • Clarity is needed on who owns the IP for an AI model that is built when a foundation model is fine-tuned on enterprise data. Our view is that the organization that fine-tunes it should own the IP.
  • Clarity is needed on who owns the IP for the content generated by a Gen AI system. Our view is that when an organization generates the content, it should own the IP, regardless of what went into the training phase.
  • If a generative model is trained on GPL-licensed code, it is not clear whether the output generated by the model would also need to be licensed under the GPL.

Lack of availability of an array of technical guardrails: For many AI models, adequate technology for enforcing transparency and explainability is simply not available in the market. The same is true for other principles such as fairness, security and harm prevention in LLMs. Entities using AI, particularly smaller enterprises and startups, may lack the technical means and governance mechanisms to enforce Responsible AI. A strict regulation might hamper their AI adoption and throttle innovation.
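Where guardrails do exist, they are often narrow. As one illustration, here is a minimal sketch of a single fairness check, the demographic parity gap, computed on fabricated decision data; the function and data are illustrative only, and production tooling (for LLMs especially) needs far more than this:

```python
# A sketch of one narrow technical guardrail: the demographic parity gap,
# i.e. the largest difference in favourable-outcome rates across groups.
# The sample data below is fabricated purely for illustration.
from collections import defaultdict


def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """decisions: (group_label, outcome) pairs, where outcome 1 = favourable."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(demographic_parity_gap(sample))  # ~0.33: group A is favoured more often
```

A single number like this is easy to compute but hard to act on without context, which is exactly why smaller enterprises struggle to turn Responsible AI principles into enforceable practice.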
