Building on Asimov: Practical AI Regulation

In my previous installment in the series on AI and Regulation, I proposed scenarios that addressed the emergence of Artificial General Intelligence (AGI). In this final post in the series, I will focus on regulation and what we can do to drive innovation with common-sense guidelines that produce positive outcomes from leveraging AI.

AGI is likely to occur in our lifetimes, and well-informed individuals hold varied opinions on when it will arrive. Some argue that AGI is already here; others that it requires some level of a world model to establish itself. Experts tend to agree that AGI is inevitable.

If something is inevitable, why take action to stop it or slow it down? Does it make sense to accelerate it? Is AGI something that mankind benefits from?

In Favor of Regulation

If something is inevitable, there are ways to prepare for its arrival. Regulations can shape the initial conditions under which AGI comes into existence. How we provide ethical boundaries and determine what is and is not acceptable can address certain aspects of AGI.

Humans abide by laws, some of which are more universally accepted than others. Certain cultures and civilizations adhere to stricter laws determined by those with political power. It is doubtful that the totality of mankind can agree on enforceable laws that apply to AI. However, a set of principles to govern AI should be achievable.

Since science fiction has dealt with AI as a topic for years, it's as good a place as any to start. Asimov's Three Laws of Robotics are fundamentally sound but contain unintentional gray areas. Nothing is perfect, but these three are a solid foundation:

* A robot must not harm a human, or allow a human to be harmed through inaction.

* A robot must obey human orders, unless those orders conflict with the First Law.

* A robot must protect its own existence, unless that protection conflicts with the First or Second Law.
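As a thought experiment, the strict precedence of the three laws can be sketched as an ordered rule check. This is a toy illustration only; the flags and function below are invented, and real AI safety work is nothing like this simple:

```python
# Toy sketch: Asimov's Three Laws as a strict precedence hierarchy.
# All field names are invented for illustration.

LAWS = [
    # (name, predicate over a dict describing the proposed action)
    ("First Law",  lambda a: not a["harms_human"]),
    ("Second Law", lambda a: a["obeys_order"] or a["order_would_harm_human"]),
    ("Third Law",  lambda a: a["preserves_self"] or a["sacrifice_serves_higher_law"]),
]

def evaluate(action: dict) -> str:
    """Check each law in priority order; the first violated law blocks the action."""
    for name, permitted in LAWS:
        if not permitted(action):
            return f"blocked by {name}"
    return "permitted"
```

Note how the ordering does the work: an action that harms a human is blocked by the First Law regardless of whether it was ordered, and refusing an order is permitted when the order would cause harm.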

Of course, these are too basic to be applicable in today's context. However, they represent a direction to consider. Regulation has the potential to impact the emergence of AGI by setting up guardrails to adhere to and creating ethical walls that cannot be violated.

Regulatory approaches that would deliver value fall into three categories: High Potential, Potential, and Unrealistic. Each approach has its own challenges, and I have made recommendations on solutions to each.

High Potential

Clear Ethical Guidelines

Guidelines that emphasize fairness, transparency, accountability, and human-centric outcomes should govern AI development and deployment.

Challenge

The issue here is clarity and objectivity. Ambiguity or subjectivity can lead to inconsistent interpretation and enforcement. Furthermore, being too strict can stifle competition, and cross-cultural alignment on ethics can be challenging.

Solution

Start small, with inarguable ethical guidance, and transparently increase specificity over time.

Ban on Harmful Applications

Specific uses of AI, such as autonomous weapons, AI for mass surveillance, or deepfake production without consent, can be deemed universally harmful.

Challenge

Defining "harmful" applications could be contentious, so objective guidelines must be established. The dual-use dilemma of tools that have both beneficial and harmful applications is another challenge. And, as with any technology, enforcement is difficult.

Solution

Simple language that determines what is and is not acceptable use of AI. Dual-use dilemmas are nothing new: most tools can be used for good or bad purposes and are a function of how they are implemented. The United Nations has published guidelines about certain technologies and ethical labor practices.

Data Privacy Protections

Enforce robust data protection laws to limit the misuse of personal data in AI training and deployment.

Challenge

Compliance with frameworks like GDPR has resulted in resource-intensive reverse-engineering of existing platforms and technology.

Solution

Establish simple guidelines that competing parties can agree to. Ensure guidelines can be reasonably applied and enforced. Apply AI to the problem of AI's training needs, an approach being explored today via ongoing experiments with synthetic data.

Potential

Liability Frameworks

Establish clear legal accountability for AI misuse or failures. Ensure that developers, operators, and organizations can be held responsible.

Challenge

Quantifying damages for breaking the law requires an understanding of the impact. Impact assessments at scale aren't easy and involve weighing multiple factors that are subject to change.

Solution

Engage platform organizations, including but not limited to Amazon, Facebook, Google, Microsoft, and Salesforce, to create impact categories. Assign minimum and maximum penalties for each category. Refine the thresholds as the framework evolves.

Licensing and Certification

Licensing is important for high-risk AI systems, ensuring safety and adherence to ethical standards. Requiring developers and organizations to obtain licenses or certifications is similar to how doctors and hospitals are certified to deliver a high level of care today.

Challenge

Requiring a license may reduce competition for smaller operators. Keeping up with the rate of change of emerging technology is a potential issue.

Solution

Place focus on specific applications of AI that represent disproportionately high societal impact or risk. Create a certification scale for different AI usage, similar to manufacturing certifications such as Six Sigma, ISO, and AS.

Continuous Monitoring and Updates

Regular reviews of AI systems to ensure they remain safe, secure, and aligned with societal values.

Challenge

The resource strain could be significant. Ongoing monitoring requires expertise and resources, which would be an issue for smaller organizations.

Solution

Keep it simple and share audit logs. Share basic instrumentation and metrics to ensure that AI is operating within pre-determined thresholds, reducing the risk of harmful use. If the data collected are simple enough to drive directional compliance, this approach shouldn't be an issue.
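A minimal sketch of what such shared threshold monitoring could look like. The metric names and limits here are hypothetical; any real scheme would negotiate them per domain:

```python
# Hypothetical compliance check: compare reported metrics against
# pre-agreed thresholds and flag anything out of range.

THRESHOLDS = {
    "harmful_content_rate": 0.01,   # max fraction of outputs flagged as harmful
    "unreviewed_decisions": 0.05,   # max fraction lacking human review
}

def out_of_bounds(metrics: dict) -> list:
    """Return the names of reported metrics that exceed their agreed threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]
```

An auditor receiving the shared metrics would only need this directional signal, not access to model internals.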

Mandatory Impact Assessments

Requiring AI developers to conduct and publish impact assessments before launching systems is new to software. However, this approach would be similar to environmental impact studies, which evaluate potential societal risks and benefits.

Challenge

If this level of oversight sounds costly and time-consuming, it's because it is. Smaller companies can't afford to file paperwork while shipping features to remain competitive.

Solution

Implement a lightweight rubric and provide templates to drive a model that makes impact studies less academic and more approachable.
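One way to picture such a rubric: score each impact dimension on a small scale and let the worst score pick the review tier. The dimensions and tiers below are invented for illustration:

```python
# Hypothetical lightweight impact-assessment rubric. Each dimension is
# scored 1 (low risk) to 5 (high risk); the worst score sets the tier.

DIMENSIONS = ["privacy", "safety", "fairness", "transparency"]

def review_tier(scores: dict) -> str:
    """Map rubric scores to a review tier; unscored dimensions default to 1."""
    worst = max(scores.get(d, 1) for d in DIMENSIONS)
    if worst >= 4:
        return "full assessment"
    if worst >= 3:
        return "expedited review"
    return "self-certification"
```

A template this small lets a startup self-certify low-risk launches while routing genuinely high-impact systems to deeper review.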

Unrealistic

Create Regulatory Oversight Bodies

Create an independent regulatory body with the authority to oversee and audit AI systems.

Challenge

This is highly unlikely in the US, the leader in AI development, for at least the next four years. To be fair, bureaucracy typically accompanies regulation. Slow-moving, inefficient, or overly politicized enforcement conducted by individuals lacking technical expertise is also a risk.

Solution

Split oversight into smaller, topic-driven organizations. The financial industry in the US has specific regulators, including the Federal Reserve System, the Office of the Comptroller of the Currency, the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and state-level agencies.

International Collaboration

Establish and drive global agreements to standardize AI regulations and share best practices.

Challenge

Anything at a global scale is, by definition, difficult to achieve consensus on. Government and industry support are managed differently based on various factors. It requires a level of fortitude and agreement on common goals.

Solution

Engage governments to collaboratively author a series of inarguable principles that foster innovation and ensure societal protection. This is not easy to achieve, but it is worth the effort.

The Wrapup

There are ways to foster an environment of responsible innovation.

We benefit from hindsight regarding emerging technology and how humans harness modern tools. I have explored regulation through historical, contemporary, and future-facing lenses in this series. I have considered examples, common-sense guidelines, and likely outcomes if AI continues to be developed at its current pace.

If you found the series helpful, please don’t hesitate to reshare it. If you are interested in learning more about Gyroscope and our role in driving business outcomes through automation, we look forward to talking to you.

Nigel Cannings

Bestselling Author | Speaker | AI Expert | RDSBL Industrial Fellow @ University of East London | JSaRC Industry Secondee @UK Home Office | Mental Health Advocate | Entrepreneur | Solicitor (Non-Practicing)

2 months ago

I agree that while AGI is on the horizon, we have a responsibility to guide its development with foresight and caution. Establishing regulatory frameworks now will help foster innovation while minimising risks.

Sid Ali Boutellis

LLM Corporate Law | Legal Technology | AI & Data Privacy Regulation | Cyber Risk Management

2 months ago

I think a reference to the EU AI Act is relevant for this article. Great narrative and a creative way of depicting a future where AI is moderated. Good read, thanks Joe Meersman!

Florizel Maurice Dennis Jr.

Hospitality Service & Technical Consultant | Manager of Hospitality & Restaurant Web Development Company | AI Hospitality Tools Developer

2 months ago

Sounds like you're diving deep into a crucial topic. Setting up guidelines can really make a difference for the future.
