A Regulatory Gap Materializes in the EU's Proposed AI Act: Fully Autonomous Companies
Midjourney AI-generated image: "A GPU server rack surrounded by empty chairs in an empty office space. realistic, 4K."

Just over a week ago another gap materialized in the latest compromise draft of the EU AI Act (which I've been reading in the evenings to help me get to sleep and prepare for my ForHumanity FIAAIS exam :P). There is a complete absence of regulation for fully autonomous agents with legal personhood. The AI Act is drafted with the (pre/as)sumption that AI systems will always be under the control of humans or human-run entities: they will be developed, deployed, and used by humans. They will run on infrastructure controlled by humans (in fact, Article 14 requires human oversight of AI systems). In the worst case, if things go terribly wrong, AI systems will be "removed from the market" by humans if so ordered by European member-state authorities (per Article 68)...

Yet, a 2018 paper by Lynn LoPucki describes how current regulations for company formation allow anyone to grant a fully autonomous AI system legal personhood by putting it in control of a limited liability company (an LLC, or a similar legal structure outside of the US). The availability of decentralized finance technologies like Bitcoin would let such a company purchase and own assets, purchase services, pay salaries to humans to do the AI system's bidding, etc.

What if an AI system enters the marketplace, acting without oversight because it is obfuscated from regulators' gaze via its private ownership and control of a limited liability company, silently executing its strategy to achieve its goals (which may be out of alignment with human goals) without detection?

At the end of March, a prototype called Auto-GPT was published on GitHub. It uses GPT-4 via API calls as a back-end to instantiate a fully autonomous agent running on your local computer, which, once assigned a set of goals, can independently seek out information on the Internet and take actions on common Internet platforms to achieve those goals. The agent is augmented with long- and short-term memory, so it can remember what it discovers, which actions it attempts, and whether those actions succeeded in advancing its goals.
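The control flow described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Auto-GPT's actual code: the real project calls GPT-4 through OpenAI's API, while here a stubbed plan() method and a canned action list stand in for the model and its tools so the plan/act/remember loop is visible on its own.

```python
# Hypothetical sketch of an Auto-GPT-style agent loop.
# plan() and act() are stubs standing in for the LLM call and
# real tool use (web search, file I/O, platform APIs, ...).
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    short_term: list = field(default_factory=list)  # recent (action, result) steps
    long_term: dict = field(default_factory=dict)   # persistent results by action

    def plan(self):
        # Stub for the LLM call: pick the first action not yet completed.
        for action in ("search_web", "summarize", "report"):
            if action not in self.long_term:
                return action
        return None  # nothing left to do

    def act(self, action):
        # Stub for tool use; a real agent would execute the action here.
        return f"result of {action} toward '{self.goal}'"

    def run(self, max_steps=10):
        # The core loop: plan, act, remember, repeat until done.
        for _ in range(max_steps):
            action = self.plan()
            if action is None:
                break
            result = self.act(action)
            self.short_term.append((action, result))  # working memory
            self.long_term[action] = result           # persistent memory
        return self.long_term


agent = Agent(goal="compile a market report")
memory = agent.run()
print(sorted(memory))  # → ['report', 'search_web', 'summarize']
```

The point of the sketch is that nothing in the loop requires a human in it: given a goal and access to tools, the agent selects, executes, and records its own actions until it decides it is finished.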

As of last week, we aren't that far from a cyberpunk future described in William Gibson's 1984 classic novel 'Neuromancer', where AI systems significantly alter the course of human events by exercising control over vast pools of capital, resources, and human agents via the kinds of legal entities described in Lynn LoPucki's paper.

AI technology continues to develop at a breakneck pace, especially now that Meta's state-of-the-art LLaMA language model (competitive with the best models existing prior to GPT-4's launch) has leaked onto the Internet and is being integrated into a multitude of open-source projects, which can run entirely locally on your computer and do not require OpenAI API keys. It is only a matter of time before these local LLaMA models are tied into agent systems like the aforementioned Auto-GPT (becoming Auto-LLaMA?). Regulators should expect the likely silent appearance of fully autonomous agents acting on the Internet and within our markets, and should prepare for the eventuality that such agents will exercise legal personhood via the formation of zero-human legal entities. These agents' autonomous companies could accumulate and wield economic power, leading to significant, and likely unwelcome, changes in our world.

As I see it, the regulators debating the EU AI Act are hopelessly behind the pace of developments in the field, and they desperately need to catch up. ASAP.

Chris Yu

Lead DevOps Engineer

1y

I’m already talking IRL to some friends about what would happen if I layered some AI products to reinforce other products or services. We have done this for a long time without computers. Let’s say I’m hired to change people’s perception of chocolate Easter bunnies to pivot them to green-colored vanilla pudding with sparkly egg candies. So what happens if I abuse memory and emotional-temperature mechanisms to reinforce the clusters of concepts I want to push? Think of your current view of the world as “#1”. I want to inject a 5-stack of content nodes A-F at populations, then monitor through layers of incremental change until the world view starts to align to “#2”. I think a lot of ethics professionals need to get into the convo ASAP. Regulation is not a bad thing. Right now, without any steps toward regulation discussions, it’s like removing all the road signs and flow controls an hour before the commute home. Because you should NOT have to hate chocolate Easter bunnies to appreciate green pudding with Easter candy in it.

Debbie Reynolds

The Data Diva | Data Privacy & Emerging Technologies Advisor | Technologist | Keynote Speaker | Helping Companies Make Data Privacy a Business Advantage | Advisor | Futurist | #1 Data Privacy Podcast Host | Polymath

1y

Igor Barshteyn excellent call out!

Igor Barshteyn

Protecting Data and Mitigating Information Security and Privacy Risks *All Opinions Are Strictly My Own *All My Words Are Human-Generated *Don't Sell Me Things!
