What does generative AI mean for copyright and privacy laws?

Another day, another lawsuit… it seems you can’t open your news feed these days without stumbling across some technology-related litigation, whether real or threatened.

This is not exactly a new thing. From Apple’s 1988 copyright infringement case against Microsoft to its own legal battles over anticompetitive behavior, the Federal Trade Commission suing Amazon for supposedly underhand tactics around its Prime subscription scheme, and ongoing spats between certain tech billionaires (or with their own company’s previous lawyers), the sector has always been ripe for legal disputes and counterattacks.

I guess we shouldn’t be too surprised by this. After all, when an industry is built around innovation and finding novel ways of doing things, that means breaking norms and new ground in equal measure. And so monopolies are able to rise up relatively quickly, existing arrangements and agreements are disrupted, and it becomes clear that new boundaries and frameworks are going to be needed to protect people’s rights and prevent wrongdoing.

As generative AI continues its unyielding ascent, it is making its presence felt here, as well as in the workplace. Several high-profile cases are currently in progress, including one brought by the American comedian and author Sarah Silverman, who has joined two other writers to sue Meta and OpenAI, the creator of ChatGPT, for unauthorized use of copyrighted material. The claims are that AI models developed by both firms were trained on the authors’ original work without their permission.

In another, Microsoft, GitHub and OpenAI are embroiled in an ongoing dispute over their alleged violation of a range of laws by using publicly available source code from GitHub to create the Copilot programming assistant and OpenAI’s Codex machine learning model. Elsewhere, Google is facing a class action based on claims that it scraped data from millions of users without their consent, also to train its AI products.

What’s really interesting to me is where exactly the new boundaries lie. Take the data scraping issue. This is something that has been happening for years: essentially, it’s gathering and analyzing information from websites and using it for a specific purpose. Examples include everything from monitoring internet usage and social media sentiment to gathering customer information for lead generation and searching for specific product information. In other words, it’s a standard and (generally) accepted part of the online experience.
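To make the mechanics concrete, here is a minimal, hypothetical sketch of what basic scraping looks like in Python; the URL and CSS selector are placeholders I’ve invented for illustration, and any real scraper should respect a site’s terms of service and robots.txt:

# Minimal scraping sketch: fetch a page and extract its headlines.
# The URL and selector below are hypothetical, for illustration only.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/news"  # placeholder address
response = requests.get(url, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
headlines = [tag.get_text(strip=True) for tag in soup.select("h2.headline")]

for headline in headlines:
    print(headline)

Scale that loop across millions of pages and you have the kind of dataset an AI training pipeline can ingest, which is exactly where the question of consent comes in.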

Where it gets tricky is around the issue of consent. We might all be quite happy to post our photos and updates online, or share our contact details and preferences with the brands we want to interact with, but what happens when that information is hoovered up by an artificial intelligence training model without our knowledge or approval?

This is where sites like haveibeentrained.com will start to come into their own, as checking whether your information has been used to feed an AI becomes the new Googling yourself. And yes, I have tried it (see below), but I have no idea who some of those guys are. I also searched for my dog, but she seems to have escaped the bots so far.

Pictures of Olaf Swantee found during a search on haveibeentrained.com
Source: haveibeentrained.com

Because everything is moving so quickly with generative AI, some commentators have been moved to compare it to the Wild West, with its similar frontier spirit, sense of unbounded possibility and outright lawlessness. The operational and ethical concerns around the technology will only continue to ramp up as it outpaces legal and commercial frameworks.

This makes it really important to get on top of the various issues – copyright and otherwise – that tools like ChatGPT are now throwing up. Some firms are already developing a position on the use of AI-generated material – although these will probably need to be relatively fluid as laws and regulations evolve.

For example, the stock photography company Shutterstock has said it will not allow content that has been generated by an AI to be submitted for licensing on its platform, while Valve, which runs the Steam games store, is understandably cautious about accepting new games containing AI-generated assets if the right to use the relevant data isn’t clear.

These are not simple problems to solve, and it could be tempting to put it all in the “too difficult” bucket, but that’s not really an option. Before we all get too depressed, however, we should remember that when Napster burst onto the scene in 1999 and enabled free music sharing, it not only broke just about every copyright law going, it also had the knock-on effect of ushering in streaming licensing deals – and changing the music industry forever.

Who knows what the solutions for generative AI will be? But one thing is for sure: the lawyers are going to be kept very busy as this new frontier matures. In fact, law could actually turn out to be the tech industry’s most lucrative profession. Is it too late for me to retrain?


Photo credit: Gerd Altmann, Pixabay

Jolie Kramer

GM - Recruitment Sales

1 yr

Hello, re: your posting on LinkedIn. We’re a management consulting and global search recruitment firm. Smartbridge helps companies find the right candidates in the shortest time possible. Please PM me or share your email address if you’re open to hiring candidates through a recruitment firm (we work on a success model and also a monthly retainer) to meet the recruitment needs of our customers. Thanks, Jolie | Smartbridge. Skype/email: [email protected]

Marc R. Esser

Business & IT Transformation | Your Partner for Change & Project Success

1 yr

Great post! It's amazing to see how quickly generative AI is changing the way we work and live. It's definitely raising some interesting questions about copyright and privacy laws. I'm curious to see how governments and businesses will respond to these changes and how they will shape the future of work.

Marc Vontobel

Bringing people together through technology | Founder & CEO at Starmind | MIT Technology Review Global Panel

1 yr

Olaf, you've spotlighted a crucial issue that's often overshadowed in the current hype. The main hurdle lies in the inherent nature of large language models, which don't track the origins of the information they generate, making proper attribution challenging. Yet, if we can somehow overcome this, a whole new vista of opportunity opens up. Consider the imminent disruption of the advertising industry. The era of banners and website ads is nearing its end, soon to be replaced by ultra-personalized in-text or conversational messages. This shift will potentially bring in huge revenue. If we can attribute a portion of these profits to the original content creators, it would create a more equitable system than what we currently have. Although it may seem like an unrealistic aspiration at the moment, it's a vision that excites me.
