Will Regulation Kill Open-Source LLMs?
Alp Arhan U.
Product and Solutions Engineering | Future of Work, Artificial Intelligence (AI), Intelligent Process Automation
Grow Your Perspective Weekly: Do regulations put the open source community at risk?
Reading Time: 3 min 10 sec
Why is this important?
Misinformation has historically spread through word of mouth.
As Yann LeCun and Andrew Ng have said, it’s important to distinguish between regulating the technology (such as a foundation model trained by a team of engineers) and regulating applications (such as a website that uses a foundation model to offer a chat service, or a medical device that uses a foundation model to interact with patients). We need good regulations to govern AI applications, but ill-advised proposals to regulate the technology itself would slow down AI development unnecessarily. While the EU’s AI Act thoughtfully addresses a number of AI applications — such as ones that sort job applications or predict crime — assesses their risks, and mandates mitigations, it also imposes onerous reporting requirements on companies that develop foundation models, including organizations that aim to release open-source code.
How can we protect open source while regulating to control bad actors?
In the U.S., a faction is worried about the nation’s perceived adversaries using open source technology for military or economic advantage, and is willing to slow the availability of open source to deny those adversaries access. I, too, would hate to see open source used to wage unjust wars. But the price of slowing down AI progress is too high. AI is a general-purpose technology, and its beneficial uses — like those of other general-purpose technologies such as electricity — far outstrip the nefarious ones. Slowing it down would be a loss for humanity.
What happens if the government controls open source tighter?
Many nations and corporations are coming to realize they will be left behind if regulation stifles open source. After all, the U.S. has a significant concentration of generative AI talent and technology. If we raise the barriers to open source and slow down the dissemination of AI software, it will only become harder for other nations to catch up. Thus, while some might argue that the U.S. should slow down the dissemination of AI, that certainly would not be in the interest of most nations.
Never place your trust in us. We’re only human. Inevitably, we will disappoint you.
— Westworld
The New Accord for Actors, Explained
If you’ve wondered what happens when the faces of actors or actresses are used to generate movies, this one is for you:
This groundbreaking agreement ensures that actors’ consent and compensation are central when their digital likenesses are used in film production.
Behind the Agreement
The deal follows intense negotiations, reflecting the complex interplay between technology and traditional acting roles. It's not just a contract; it's a framework for future collaborations between humans and AI in the creative process.
For the curious minds of the day
With that said, here is what’s new in the world of AI and automation:
Up next in our series: The AI-powered revolution in Synthetic Biology. Explore its transformative impact, from food production to advancing longevity.
Stay Curious. Stay Informed. Join us every week as we delve deeper into the challenges and triumphs of automation in the modern age.
New Episode Alert!
I had the absolute pleasure of sitting down with Jason Rosoff, co-founder and CEO of Radical Candor, the workplace-culture methodology that shaped the cultures at Google, Apple, and many other Silicon Valley companies.
From Steve Jobs and Eric Schmidt to Sergey Brin, Jason and his co-founder, Kim Scott, have helped many companies build collaborative cultures that foster innovation and trust.
How was the newsletter?