What AI Regulations Should Go on the Napkin?
A simple framework based on an acronym, PILOT, is proposed for US regulation of AI using an oversight board.

So, there has been plenty of discussion lately, including some recent testimony, on the topic of regulating Artificial Intelligence (AI) in the United States, and this seems sensible. We should stay focused on creating legislation that helps avoid the bad stuff (like ending humanity) and supports the good stuff (like helping humanity). This is clear.

But I spent some time this week reading through various papers, reports, and write-ups by experts, mostly legal, who are opining on this topic. And I think they are making this too complicated. I understand that this is a complex subject, but KISS (keep it simple, stupid) seems a more suitable first step. We should start simple and let things get more complex later.

To that end, I took a stab at developing something simple and easy to remember. It even has an acronym, PILOT, that can be written on a napkin. And this is something an AI will never, ever do. Napkins are for us organic beings. Napkins are where God decided that We-the-People should sketch our ideas. Below are the regulations I recommend for Your Napkin:

Purpose (P)

Our regulations should provide clear guidance on what purposes are considered acceptable (and unacceptable) for AI development. Weapons designed to kill would be an example of a purpose that should be specifically illegal, even though our Defense Department will probably build a new wing on the Pentagon to do this sort of work. But for researchers and commercial AI developers, this should be a no-no.

A process could be put in place to review the purpose of any serious AI research and development effort. This sounds bureaucratic, but it is done now in many other areas: the FDA does this for food and drugs, the FCC for communications, the NIH for medical research, and so on. We'd need to set up a National AI Oversight Board (NAOB) to review purpose statements.

International (I)

International involvement would need to be a major component of the regulations, but this does not mean setting up cooperation with global organizations, even though that is an important government function for AI. Rather, it refers to the degree to which AI work can be diverted to other countries that have more lax AI regulations.

It will be essential to prevent the offshoring of AI research, or the establishment of subsidiary companies operating in some country that hangs out a big Welcome Sign for irresponsible AI work. Just ask US tax professionals about this challenge: if it can happen for corporate tax inversions, it can happen for AI. We'd need to regulate this, probably through the NAOB.

Learning (L)

It is better to focus on the AI learning process than to demand some sort of avoidance of bias, whatever that means. The problem is that all algorithms are biased toward something; that's how algorithms work. Instead, the AI should be explained in terms of how, where, and what data is used to construct the large data stores that support its analysis.

That said, regulations can certainly identify example cases of bias that would be obviously illegal to create. (I'll let you create some examples in your mind. I'll wait here a moment while you think.) The best review process here would be for the NAOB to provide clarity on acceptable and unacceptable sources of learning for AI systems. This is a tough one, by the way.

Ownership (O)

This one is for We-the-People (organic humans) to protect their creative, sentient work from the clutching paws of AI systems. You've all seen AI-generated images that include embedded components bearing watermarks from Getty Images or the like. This is not cool. It takes money to create these images, and AI systems should have to pay up.

The intellectual property (IP) rights here can pull from existing law to drive AI regulatory frameworks. It shouldn't be too tough to impose regulations that more or less follow the idea that if you take something from another owner, you will need to follow a procedure (usually paying) to reuse the IP. This will keep lawyers busy. They will like this work.

Transparency (T)

I saved the most important regulatory control for last (and also so that I could use the PILOT acronym for your napkin). AI researchers and developers should have to provide full clarity on the design, development, and operation of their AI systems, presumably to the NAOB. I cannot see this being open to the public unless the company decides to make it so.

The transparency should cover how the company created the software, how it is updated, what the decision-making process is, and so on. The NAOB will need to include experienced computer scientists who can try to make sense of the AI. This will not be easy, and meta-research-within-the-research will be needed to use AI to explain the AI. (Ugh.)

Action Plan

The US should begin now to develop a PILOT NAOB infrastructure, including membership, charter, website, outreach, and the like. I'd put it inside NIST for now, and let the scientists there start to develop the approach. (They already have an Advisory Committee.)

The NAOB can become its own thing after it gets some experience working through practical use cases. It should be an honor for a computer scientist to be appointed to this board, and membership should be rotated. As always, let our TAG Cyber team know what you think.

Joel Caminer

Cybersecurity and Risk Management Executive | CISO/BISO/BIRO | Cyber NYC

1 yr

Seems like a reasonable start, Edward Amoroso. Let's not let “perfect” be the enemy of “good” and get bogged down in the details quite yet.

Dan Solero

Assistant Vice President @ AT&T | Certified Information Security Manager

1 yr

Great start on a necessary regulatory framework. I hope the government is listening. By priority I’d probably do TOPLI, but that isn’t very catchy. For enterprise security practitioners there are a pretty impressive number of challenges to be addressed outside of regulatory controls.

Alton Drake

Drake family investments

1 yr

Ed, in my experience working in your organization, you promoted keeping things simple. How does that quote go? “Design it to help users understand the threat so that they can better protect themselves.” Right, how simple is that?
