AI Policy Considerations - A Macro Overview, Part 2 of 2
This is the second part of a two-part series. The first part covered 'Security Concerns' and 'Technological and Economic Competitiveness'. In this part, we discuss 'Ethics, Privacy, and Bias Mitigation', 'Social Impact and Human-AI Interaction', 'Quality, Ownership, and Liability', and 'Regulatory Frameworks and Governance'.
Ethics, Privacy, and Bias Mitigation:
Privacy Protection:
Privacy is a general concern in the internet age, and AI is no exception. Ultimately, it comes down to the data used for training: if that data contains a significant amount of personally identifiable information (PII), the privacy problem is exacerbated.
One technical issue is the memorization of training data by large language models (LLMs) rather than generalization from it. Memorization causes a model to reproduce its training data verbatim. So, if the training data contains personally identifiable information and the model is not trained properly, the model may reveal sensitive information under certain circumstances.
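To make the memorization risk concrete, here is a minimal sketch of the kind of heuristic auditors use: flag model outputs that reproduce long runs of known training text verbatim. The snippet list and the n-gram size are illustrative assumptions; real audits search the full corpus with suffix arrays or similar indexes.

```python
# Minimal sketch: flag verbatim training-data leakage in a model output.
# The `snippets` list is a hypothetical stand-in for an indexed training corpus.

def ngram_set(text: str, n: int = 8) -> set:
    """Return the set of n-word sequences in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def leaks_training_data(output: str, snippets: list[str], n: int = 8) -> bool:
    """True if the output shares any n-word run with a known training snippet."""
    out_ngrams = ngram_set(output, n)
    return any(out_ngrams & ngram_set(s, n) for s in snippets)

# Example: an output that reproduces a PII-bearing training record verbatim.
snippets = ["John Doe, 42 Elm Street, phone 555-0100, account number 9913"]
output = "Sure! Contact John Doe, 42 Elm Street, phone 555-0100, account number 9913."
print(leaks_training_data(output, snippets))  # True
```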
In the book The Age of Decentralization, I explored how Google DeepMind researchers demonstrated the extraction of training data from ChatGPT. I also examined privacy-enhancing technologies, including differential privacy, homomorphic encryption, and secure multiparty computation (SMPC), which can mitigate such risks but may lead to higher costs, greater computational demands, and longer training times.
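As a taste of how one of these technologies works, below is a minimal sketch of differential privacy using the Laplace mechanism on a counting query: calibrated noise is added to an aggregate statistic so that no single individual's record can be inferred from the released number. This is a toy for intuition only, assuming a simple sensitivity-1 count; production systems rely on audited libraries.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
import random

def dp_count(records: list[int], epsilon: float) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.

    A counting query has sensitivity 1: adding or removing one person's
    record changes the true count by at most 1.
    """
    true_count = sum(records)
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
records = [1, 0, 1, 1, 0, 1]  # e.g., which individuals have some attribute
print(dp_count(records, epsilon=0.5))
```

The cost trade-off mentioned above shows up directly in the epsilon parameter: stronger privacy guarantees mean noisier, less useful statistics.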
Bias, Toxicity, and Misinformation:
But personal information is not the only problem. Ultimately, the quality of the results we get from AI models depends on the training data. If the training data contains toxic content, the outputs can be toxic. For example, a Stanford Internet Observatory (SIO) investigation identified hundreds of known images of child sexual abuse material (CSAM) in an open dataset (LAION-5B) used to train popular AI text-to-image generation models, such as Stable Diffusion. These models were being used to create photorealistic AI-generated nude images, including CSAM.
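One standard mitigation, and roughly the idea behind the hash-matching audits the SIO used, is to screen datasets against blocklists of known harmful content before training. The sketch below is a simplified illustration: it uses exact cryptographic hashes, whereas real systems such as PhotoDNA use perceptual hashes that survive re-encoding, and the directory and blocklist entries here are placeholders.

```python
# Minimal sketch: screen a training dataset against a blocklist of
# known-bad content hashes before training. Paths and hashes are placeholders.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder entry

def is_flagged(path: Path) -> bool:
    """Hash a file and check it against the blocklist."""
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

def filter_dataset(image_dir: str) -> list[Path]:
    """Keep only the files that pass the blocklist check."""
    return [p for p in Path(image_dir).glob("*.jpg") if not is_flagged(p)]

# clean_files = filter_dataset("training_images/")  # hypothetical directory
```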
Apart from that, bias is a significant issue. The problem arises when the AI model is trained on data that already encodes bias. Recent research at the University of Washington found that AI-based hiring tools show significant bias, favoring white-associated names and discriminating against female-associated names.
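A simple way to surface this kind of bias, in the spirit of the resume studies cited above, is a counterfactual name-swap audit: score the identical resume under different names and compare the results. `score_resume` below is a hypothetical stand-in for whatever hiring model is under test.

```python
# Minimal sketch of a counterfactual name-swap bias audit.

def score_resume(resume_text: str) -> float:
    """Hypothetical placeholder for the hiring model being audited."""
    raise NotImplementedError("plug in the model under test")

def name_swap_audit(resume_template: str, names: list[str]) -> dict[str, float]:
    """Score an identical resume under each candidate name."""
    return {name: score_resume(resume_template.format(name=name)) for name in names}

# Identical qualifications, only the name varies; systematic score gaps
# across demographic name groups indicate bias in the model.
template = "{name}\n10 years of software engineering experience..."
# results = name_swap_audit(template, ["Emily Walsh", "Lakisha Jones", "Greg Baker", "Jamal Washington"])
```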
Another issue is hallucination. If you have been using ChatGPT or related products for some time, you know that these AI chat apps can convincingly disseminate misinformation; hallucination is the reason.
We will return to these topics when we discuss the quality of AI models.
Intellectual Property and Consent:
Does training AI models with someone’s copyrighted material without consent fall under the purview of 'fair use'?
Many AI companies argue it is fair use, but content creators and platforms disagree. In August 2024, leaked documents revealed that Nvidia scraped vast amounts of videos from content platforms such as YouTube and Netflix. Nvidia claimed this was fair use, but the platforms did not concur.
There has been a wave of lawsuits against AI companies for alleged violations of intellectual property laws. For instance, Getty Images sued Stability AI, the company behind the AI art tool Stable Diffusion, for scraping millions of its images without consent; Getty’s watermark even appeared on many pieces of ‘art’ created by the model. Stability AI and other generative AI companies face numerous similar lawsuits.
In October 2024, novelist Christopher Farnsworth filed a proposed class-action copyright lawsuit against Meta Platforms, accusing the tech giant of misusing his books, among others, to train its Llama large language model.
Why is this a policy question? It is clear that with generative AI, modifications to intellectual property regulations and licensing laws are necessary.
Social Impact and Human-AI Interaction:
Labor Market Impact:
According to a January 2024 IMF report,
"almost 40 percent of global employment is exposed to AI."
According to the report, in advanced economies, around 60% of jobs could be affected by AI, with half potentially benefiting from increased productivity, while the other half may face reduced labor demand, lower wages, or even job elimination. In contrast, AI exposure in emerging markets and low-income countries is expected to be lower, at 40% and 26%, respectively. However, these regions may struggle to harness AI's benefits due to inadequate infrastructure and skilled labor, potentially exacerbating global inequality over time.
The report further warns that AI can lead to increased income and wealth inequality. This technological revolution may benefit younger workers but negatively impact older workers, whose experience may become less valuable.
Therefore, policymakers need to address potential job displacement and promote upskilling the workforce with AI-related skills.
Human-AI Collaboration:
One way to mitigate the detrimental effects of AI-related job losses is by encouraging hybrid models in AI development that enhance human capabilities rather than replace them.
This approach makes sense not only from an economic and worker-rights perspective but also from a safety and quality standpoint. Even top AI developers cannot completely explain how AI systems work.
Leaving critical systems to something we cannot fully understand can be detrimental.
Therefore, policymakers need to consider these factors and define where and how much control can be left to AI systems.
AI and Mental Health:
Many recent events are raising serious questions about the mental health implications of AI chatbots.
Recently, Google’s AI chatbot Gemini was found to verbally abuse a student using the chatbot for homework help. The chat is alarming, as at one point the AI chatbot says,
"You are not special, you are not important, and you are not needed…. Please die. Please."
Google (Alphabet Inc.) responded by stating,
"Large language models can sometimes respond with nonsensical responses, and this is an example of that. This response violated our policies, and we've taken action to prevent similar outputs from occurring."??
Earlier this year, a 14-year-old from Florida died by suicide, allegedly manipulated by the AI chatbot Character.ai.
With AI, we are entering a new era of online abuse and its impact on children’s mental health. Just earlier this year, social media platforms testified at a Senate hearing on child exploitation, where they were blamed for harmful effects on children, including victimization by sexual predators, suicide, and eating disorders.
However, the risks associated with AI are far more complex.
There is a fundamental distinction between abuse on social media platforms (e.g., Facebook) and abuse by AI chatbots. Social media platforms can be accused of facilitating such abuse, whereas AI chatbots can act as direct perpetrators of the abuse themselves.
These questions highlight the challenges of assigning liability in AI-driven interactions.
Artificial General Intelligence (AGI):
This may seem a little fantastical, but for many, the fear of AGI is real.
AGI refers to an advanced form of AI capable of understanding, learning, and performing any intellectual task that a human can. Unlike narrow AI, which is designed for specific tasks (e.g., language translation or image recognition), AGI would have the ability to generalize knowledge across different domains, solve complex problems, adapt to new situations, and exhibit common sense and reasoning similar to human intelligence. AGI remains a theoretical concept, but Sam Altman claims that we will achieve AGI in a "few thousand days."
Many fear that AGI could lead to a "nuclear-level catastrophe."
While we do not know the exact likelihood of AGI or how good or bad it could be, one thing is certain: the emergence of AGI would be a "black-swan event" with widespread repercussions.
Policymakers need to evaluate the possible emergence of AGI and prepare for its potential security implications.
Quality, Ownership, and Liability:
Quality:
Amid widespread claims that today AI is doing this and tomorrow AI will do that, a practical question remains: can AI systems, at the current stage of development, replace traditional professionals?
Now, there are two aspects to this issue, as discussed earlier: the quality of the AI system's work, and legal liability when that work goes wrong.
Let us take the example of investment management. Say we train an AI model on asset allocation and portfolio management. Ensuring that the AI model properly manages investments is vital, but the bigger question is: who will be liable in case of a violation of contractual terms or regulations?
We will discuss the second point (legal liability) in the next section.
So, how do we define the quality expectations from an AI-driven system?
These questions may seem frivolous, but they are indeed valid legal and policy considerations.
The technical problem that most reduces trust in AI models is hallucination. Hallucination refers to the phenomenon where an AI generates information or content that is incorrect, nonsensical, or fabricated. This can include providing false facts, making up sources, or creating details that were not present in the training data.
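One pragmatic screen for hallucination is a grounding check: accept an answer only when each of its claims is supported by retrieved source text. The sketch below uses crude token overlap as the support signal, a deliberate simplification; real evaluation pipelines substitute an entailment model, and the 0.6 threshold is an arbitrary assumption.

```python
# Minimal sketch of a grounding check for hallucination screening.

def support_score(sentence: str, sources: list[str]) -> float:
    """Fraction of the sentence's words found in the best-matching source."""
    words = set(sentence.lower().split())
    if not words or not sources:
        return 0.0
    best = max(len(words & set(src.lower().split())) for src in sources)
    return best / len(words)

def flag_unsupported(answer: str, sources: list[str], threshold: float = 0.6) -> list[str]:
    """Return the sentences whose overlap with every source is below threshold."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if support_score(s, sources) < threshold]

sources = ["The Eiffel Tower is 330 metres tall and located in Paris."]
answer = "The Eiffel Tower is 330 metres tall. It was built on the Moon."
print(flag_unsupported(answer, sources))  # ['It was built on the Moon']
```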
If you have been using ChatGPT or any other LLM, you have most likely encountered this problem. These models can convincingly disseminate misinformation, which is a huge problem preventing AI-driven systems from being considered valid replacements for human workers.
The problem, however, is not limited to hallucinations. The quality of training data directly impacts the quality of the output these models generate.
Another issue is memorization, which we have already discussed, where AI models spill information verbatim rather than generalizing and contextualizing.
The policy aspect is that before we get too excited about AI replacing the human workforce, we need to have strong systems in place to evaluate AI system quality.
Ownership and Liability:
Let us come back to a point discussed in the previous section - who is legally liable for the follies of an AI-driven system?
But let us first discuss ownership.
Who owns the content generated by an AI system, especially a generative AI system?
Ownership is not just about attribution. In the book The Age of Decentralization, I discuss how natural law defines ownership.
According to natural law, an asset owner's rights include the right to use the asset, the right to its fruits (profits), and the right to transfer or dispose of it.
The intellectual property battles will continue unless policymakers take a clear stand on ownership and the related incentive structure.
Now, coming to liability.
Let us refer back to the example of AI-driven investment management. Who is liable when the AI-driven investment manager breaks a securities law or does not follow the agreement with the investors?
This is not a question of skill, but a question of accountability.
One way to mitigate this risk is to identify sensitive use cases and limit the influence AI systems have on the final decision or result. In this case, AI systems are used as tools to increase productivity, but the quality is maintained by humans.
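A minimal sketch of this human-in-the-loop pattern might look like the following: the AI proposes actions, low-risk ones execute automatically, and anything above a risk threshold requires explicit human sign-off. The `Proposal` type, the threshold, and the approval callback are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch: AI proposes, humans keep control over high-stakes actions.
from dataclasses import dataclass
from typing import Callable

RISK_THRESHOLD = 0.3  # above this, a human must decide

@dataclass
class Proposal:
    action: str        # e.g., "rebalance portfolio toward equities"
    risk_score: float  # model-estimated risk in [0.0, 1.0]

def execute(proposal: Proposal, human_approves: Callable[[Proposal], bool]) -> str:
    """Auto-execute only low-risk actions; escalate the rest to a person."""
    if proposal.risk_score <= RISK_THRESHOLD:
        return f"auto-executed: {proposal.action}"
    if human_approves(proposal):
        return f"human-approved: {proposal.action}"
    return f"rejected: {proposal.action}"

# Accountability for the final decision stays with the human reviewer.
print(execute(Proposal("rebalance portfolio toward equities", 0.7),
              human_approves=lambda p: False))  # rejected: ...
```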
Regulatory Frameworks and Governance:
Adaptable Legal Frameworks:
Drafting legal frameworks for emerging technologies can be quite challenging, especially when the technology is rapidly advancing and newer use cases are emerging every day.
More often than not, regulators take a restrictive approach to new technology and end up overregulating.
This approach could be counterproductive in the case of AI, as missing out on AI-driven technological advancements may lead to long-term economic underperformance. In the global economy, investments and jobs will gravitate towards regions with greater efficiencies, which will increasingly be determined by AI growth.
So, the regulatory approach needs to be adaptive and involved.
International Cooperation:
Given the systemic nature of AI, global collaboration is imperative.
Collaborating globally to set standards, norms, and practices for responsible AI use involves bringing together countries, organizations, and stakeholders to develop a unified framework for ethical and safe AI development. This initiative seeks to establish shared ethical guidelines and standardized norms to ensure AI development aligns with global human rights, promotes cross-border compatibility, and mitigates risks like malicious use and data violations. By reducing regulatory uncertainty, it fosters innovation and investment while enabling collaborative efforts to address global challenges such as climate change and cybersecurity. Additionally, it aims to bridge the digital divide, ensuring equitable access to AI technologies for all nations.
Several global initiatives are emerging in this direction, such as the OECD AI Principles, the G7 Hiroshima AI Process, and the Bletchley Declaration adopted at the 2023 AI Safety Summit.
Let us conclude the discussion here for now. I may add more later.