Containment
"Pandora's Box" by OpenAI's ChatGPT 4o


Where we are now

According to a recent article in "The Information," an unnamed employee at Meta has stated that the company plans to release, in the near future, a 405-billion-parameter version of its large language model, Llama. It is hard to know, of course, what this really means: who is the employee, what exactly is being released, how capable is the model actually? But in my mind it is a wake-up call for all of us about where the line should be drawn, and what civil society can do to influence, the containment or proliferation of powerful technologies. And I think we should be asking: do we want criminal hackers or rogue nations to have uncontrolled access to the most powerful artificial intelligence tools on the planet?

On the one hand, there is the long-honored tradition that academic research should be broadly shared in order to inform all scientists, limit unnecessary repetition of expensive research, advance fields more rapidly, and thereby expedite benefits for all of humanity; "open source" falls somewhere within that tradition.

Contrast this with the convicted spies Julius and Ethel Rosenberg, who shared science related to a wide range of US military programs, not just the Manhattan Project for which they were executed. This espionage arguably advanced the Soviet Union's development of its own war machine (including nuclear weapons), making the world less safe. Stealing military secrets clearly does not fall within the academic tradition of sharing research.

Some historians have argued, though, that by providing the Soviet Union with this information and allowing it to build its own bombs, the resulting "détente" may have reduced the chances of the US authorizing the use of nuclear weapons during the Korean War. Regardless of where you stand on that question, nations of all kinds have today largely agreed that allowing these weapons to fall into truly criminal hands is in no one's interest, and thus we have so far succeeded in containing nuclear technologies.

Turning back to AI, it is worth asking: in a world in which numerous criminal hacking gangs and rogue states are already using advanced computer technology to undermine Western democracies, is it a good idea to provide them with the most powerful artificial intelligence technologies? Whether through espionage or through the "free speech" alternative of open source, should munitions-grade technologies be shared equally with all, good actors and bad actors alike?

There has to be a balance in a democratic society between freedom and controls. As Mustafa Suleyman wrote in his book "The Coming Wave":

"The likelihood of an AI war or conflict increases as more people have access to the technology, just like the risk of nuclear war increases when more countries develop nuclear weapons."

Mustafa Suleyman has shared his views on the containment of AI in various articles and interviews. Some of the key points include:

  1. Suleyman emphasizes the need for containment mechanisms that combine advanced engineering with ethical values to guide government regulations.
  2. He advocates for the creation of technical, cultural, legal, and political mechanisms to maintain societal control over AI.
  3. Suleyman also highlights the importance of holding technologists accountable for the impacts of their work.
  4. He suggests that powerful AI systems should be licensed by governments to ensure that they are used responsibly and to mitigate potential risks.

I personally agree with Suleyman's views on AI containment and his focus on the need for a multi-faceted approach that includes technical, regulatory, and societal measures to ensure that AI is developed and used ethically and responsibly. A release of powerful AI to open source is, in my view, precisely the wrong thing for achieving any reasonable controls on the proliferation and misuse of this technology.

Kim Jacobson

Fractional executive, JW Design Partners

3 months ago

We are Icarus

Michael C Bond

Content Marketing | Business Content Powered by AI. More Content for More Sales

4 months ago

The lightning ain't going back in the bottle. After a lot of thinking, I pray that "superman AI" gets here and is stronger than "skynet ai"

Emad Hasan

Product Builder | Former Founder | Board Member | Vice-Chair YPO AI/ML Subnetwork | Speaker

4 months ago

Ted Shelton's recent discussion of this on the Stratechery podcast (after Meta's Llama launch) was really good. I think the Unix analogy is both right and wrong at the same time here. Unix-based Linux was open source and did a lot of good, and we saw closed versions of operating systems come out of it as well. AI can be similar, but its power is somewhat unknown, and thus the case for containment is relevant.

Jennifer McDonald

North America Area Vice President at UiPath Strategic Portfolio High-Tech|Retail|Telco|Media|Logistics|Airlines

4 months ago

Great points as always! And especially the super computer add on!

Bronwyn Kunhardt

Co-Founder Polecat | Trustee 42 London

4 months ago

"A release of powerful AI to open source is, in my view, precisely the wrong thing for achieving any reasonable controls on the proliferation and misuse of this technology." Amanda Brock (OpenUK) and Emma Thwaites (Open Data Institute), I'm really interested in your perspective. My view is that open source/open data is what enables this general-purpose technology to drive new innovations and efficiencies for the public good, by placing as few restrictions as possible on good actors. Big AI (open or otherwise) has to take this into account in its business models and governance. I do sometimes think we are only drawing on what we already know about control and power dynamics with new tech, which might not be enough to imagine a more open world, with collective governance, and lessons learned by those who work every day to protect, at the individual level, those most vulnerable to the bad actors. Refuge, Emma Pickering, Niamh Kilalea
