Copyright uncertainty? Not sure about that...

The UK government's consultation on AI and copyright has reawakened my inner blogger. Longer rants over on Medium and my trusty old Copyright Blog (links below)... but I'm putting this one here too...

So this consultation. They want to make a new exception to copyright law to allow AI companies to crawl and copy content without a licence. It's prefaced by a minister saying the law on copyright and AI is "uncertain". So they want to replace an imaginary uncertainty with an unnecessary and value-destroying inefficiency. Grrr...

  • There's no uncertainty. The consultation itself makes that clear (see the quotes below). What's uncertain is what AI companies should do about the fact that they have infringed billions of copyrights. That's their problem, not the government's.
  • The proposed exception, which would give AI companies the right to copy content for training unless they have been explicitly told not to, depends on every creator using as-yet non-existent technology to assert rights they already hold by law, every time they produce something or someone publishes it. That's far less efficient than the current regime, where they don't have to do anything at all, and it can't even begin until the technology to do it exists.
  • Anyone who can't afford to do this, lacks the capability, or simply doesn't know about it will have their rights removed. That's regressive: copyright is a form of property, and removing it from those least able to defend it is unfair and illiberal.
  • AI companies will have to check these “rights reservations” every time they find something they want to copy (that’s everything on the entire internet). Where rights are reserved, and assuming they have decided they want to comply with the law, they’ll have to either not copy the work or seek permission — just as the law demands they do right now.
  • This means that a licensing marketplace will need to develop whatever happens — assuming of course that AI companies don’t decide to just keep ignoring the law, as they have done to date. If they do that, courts will have to decide — the lawsuits are happening already and this law won’t stop them.
  • The government hopes that making our copyright more permissive will attract more AI companies to the UK. But places like Singapore have already created a much more permissive regime so the UK has already lost that particular race-to-the-bottom.
  • In any event, there's no sign that copyright is the reason AI companies choose where to invest. Other factors matter too. For example, where they're based. Or energy costs. Which are four times higher in the UK than in the USA.
  • Meanwhile, AI is moving at 100x the pace of legislative processes. What Deepseek has shown is that AI technology isn't the most valuable component of an AI company. The content they use to train their systems is much more significant. A market in this content will be a huge economic opportunity, especially for the UK, whose creative industries outpace most of the rest of the world and already contribute £125bn to the economy every year.
  • The proposed exception will get us nowhere. It will create huge amounts of cost and huge inefficiencies, but won't deliver any material benefit. Even if it succeeds in attracting AI companies to the UK to conduct training, it will do so at the cost of every creator, who will either have to carry the cost of asserting their rights or be forced to abandon them.
  • In fact, creators will have to carry that cost anyway because the exception will apply to AI companies wherever they may be. We’ll have made UK content less valuable to the UK with no guarantee that the country will benefit in any way at all.
  • It also won’t do anything to address the issue of the huge infringements already done. These matter, because they were largely done stealthily and they involved all the content on the internet. Applying new rules to future copying really does feel like shutting the stable door after the horses have not only bolted but stampeded back and trampled the stable to dust.

More detailed thoughts...

Here’s what the minister said:

“At present, the application of UK copyright law to the training of AI models is disputed. Rights holders are finding it difficult to control the use of their works in training AI models and seek to be remunerated for its use. AI developers are similarly finding it difficult to navigate copyright law in the UK, and this legal uncertainty is undermining investment in and adoption of AI technology.” (emphasis added)

Now… read on to clause 5 of the consultation itself:

“The copyright framework provides right holders with economic and moral rights which mean they can control how their works are used. This means that copying works to train AI models requires a licence from the relevant right holders unless an exception applies.”

Does that seem uncertain to you? In case you aren’t sure, carry on to clause 41:

“The use of automated techniques to analyse large amounts of information (for AI training or other purposes) is often referred to as “data mining”. … If this process involves a reproduction of the copyright work, under copyright law, permission is needed from a copyright owner, unless a relevant exception applies”

Still not quite sure? Seems pretty clear, to the government at least.

Copyright law is crystal clear, as they helpfully explain.

But what can AI companies do? They have ignored the law, and so they face consequences. If they had just copied one or two things, copyright owners might just turn a blind eye, or the AI companies might get their wrists slapped in court.

But they didn't just copy a few things. They copied everything they could find on the entire internet. Billions and billions of works, none of which the law allowed them to copy. As well as ignoring the law, they ignored all the various ways copyright owners have of explicitly saying that this sort of copying is not allowed: they didn't seek permission from anyone.

Which looks like a whole heap of trouble. In the USA, where most AI training has been happening, statutory damages for “wilful” copyright infringement can go as high as $150,000. Per work copied. Even to the biggest of Silicon Valley money machines, that’s a lot.
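
To put a purely illustrative number on it: if even one million of those works attracted the maximum award, that would be $150 billion. A billion works at the maximum would be $150 trillion, more than the annual GDP of the entire planet. Nobody expects awards on that scale, but the theoretical exposure is staggering.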

That might leave them with a dilemma, but they don't seem to be unsure of what to do about it. They're not doing anything at all; in fact, some AI companies are doubling down and developing technical tricks to evade attempts by publishers to stop them stealing stuff.

They seem to be betting that instead of them needing to change to comply with the law, the law will change to retrospectively wave a magic wand and make everything legal.

Step forward the UK government and their proposed exception. It will allow AI companies to do something — train their systems without asking permission — which the law has hitherto not allowed.

Kind-of.

It will only allow them to do it if the copyright owner hasn't specifically asked them not to. Which, obviously, nearly every copyright owner will do if they can.

But HOW will rights owners do this?

Nobody knows. The press release announcing the consultation says as much:

“Before these measures could come into effect, further work with both sectors would be needed to ensure any standards and requirements for rights reservation and transparency are effective, accessible, and widely adopted.”

So, everyone who wants to still have their current rights, or who wants to licence or restrict the use of their work by AI companies for any reason, will have to go through some as-yet unknown process, every time they create something.

When they have done so, they’ll be exactly where they are today: right now the law says “don’t copy this without permission” and in future everyone will have to attach some kind of digital sign to every single thing they produce saying the same thing. Doesn’t sound very efficient.

Not much more efficient for AI companies either: every time they want to copy something they’ll need to check whether this digital no-entry sign exists. If it does, they’ll have to either not copy it or try to get permission to copy it — exactly what they are supposed to do today.
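
To make the inefficiency concrete, here is a minimal sketch of what that check might look like from the crawler's side. Everything about it is hypothetical: the header name and its values are invented for illustration, because no agreed standard for machine-readable rights reservation exists yet.

```python
# A hypothetical rights-reservation check. The header name and its values are
# invented for illustration; no agreed standard for this signal exists yet.
import urllib.request

HYPOTHETICAL_HEADER = "x-ai-training-reservation"  # not a real standard

def training_allowed(url: str) -> bool:
    """Return False if the publisher has (hypothetically) reserved their rights."""
    with urllib.request.urlopen(url) as response:
        reservation = response.headers.get(HYPOTHETICAL_HEADER, "")
    # Under the proposed exception, copying is allowed *unless* rights are
    # reserved, so silence means the work can be used for training.
    return reservation.strip().lower() not in ("1", "true", "reserved")
```

An AI company would have to run something like this against every single URL it wants to copy, and whenever the answer is "no" it is back to seeking a licence, which is precisely the position the law puts it in today.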

Assuming they decide to start trying to comply with the law, which they have not done so far, there will need to be some sort of system to help them get what they want on terms they can live with. It’s called a marketplace and what it will sell are licences. These marketplaces exist today for all sorts of rights, even AI training rights. If AI companies become willing buyers of rights, we can be sure it will quickly develop, become larger and more efficient.

Again, this is exactly the same as today. The market is small because most AI companies have decided to ignore it, not because it doesn't exist or because rights holders are unwilling to participate.

So let’s say we have this new exception, we have a system for “rights reservation” which is widely adopted, content owners have absorbed the cost of using it to say they don’t want their work copied without permission and AI companies have all decided to start complying with the law and start participating in a market for rights… what have we gained?

AI companies will have a new right to exploit the work and property of creators who are unable, unaware or can't afford to reserve their rights — plus some who are happy to give them up.

For everything else, which will include substantially everything produced by anyone whose creativity is their living, and by anyone who would simply prefer not to have their work fed into AI systems for unknown purposes, they'll still need what they need today: permission.

All of which sounds like what Bono might call “running to stand still”. A huge amount of energy and effort being expended to go exactly nowhere.

The biggest irony, though, is that it’s completely irrelevant.

Very few AI companies, and none of the giants, are training their systems in the UK. The government has heard that training AI is very expensive, though, and fantasises that they might start doing that expensive thing here, if our copyright law is permissive enough. Imagine the growth!

Thing is, they won’t.

If they’re looking for the most permissive copyright regime, other countries have beaten us to the punch and gone even further, so far without the giants of AI relocating there to take advantage.

But AI companies seem content to play chicken with copyright for the time being; those battles are going to be fought, primarily in the US, over the next few years.

Other factors might weigh more heavily against the UK. For example, a large part of the cost of training AI is the energy needed by data centres. Energy in the UK is among the most expensive in the world.

If AI companies start to invest in the UK, which we should all hope they do, it won’t be because of our newly permissive but very clunky copyright regime.

Turns out that for the moment, AI companies prefer to stay close to home.

Also, Deepseek have just up-ended the whole hypothesis by training their AI for, they claim, about 5% of what it cost OpenAI to do the same thing. Fair to expect that the investment needed to train AIs will come down, quite dramatically. Perhaps the opportunity isn’t quite as big as it was thought to be when this consultation kicked off, long long ago (it’s a 10 week process, which is a long time in AI-land).

All of which means that this exception is a kind of giant Rube Goldberg machine, proposing to create immense complexity and cost which will achieve, even in the best cases, virtually nothing. Other than giving away the property of people who can’t afford to defend it, to any AI company anywhere in the world which wishes to use it with impunity.

Hopefully the consultation will highlight that the path the government is considering is a huge waste of time and will only harm the creative industries, with no benefit guaranteed. They can do better by defending our IP and looking to establish a leading position in the licensing market which will inevitably develop.

Otherwise, creators, you’d better start thinking about how to get your work off the “open” internet. It’s not safe there.


More from me at https://copyrightblog.co.uk and https://medium.com/@dominic_young

Ross McCaul

Head of Licensing at News Corp

2 weeks

Great piece Dominic. Nailed it. Copyright legislation is a fair framework for users and creators alike. Seeking to change it so creators & publishers only have 'opt-in protections' because an all-powerful user group prefers it just stinks. Remember the growth of the internet/digital was the first pretend excuse for copyright not being innovative enough? Now it's AI.


My light-hearted but serious prank about this very issue 18 years ago. https://www.theregister.com/2007/06/04/book_publisher_google_laptop/

Dids Macdonald OBE., FRSA

Chairman and Co-founder of Anti Copying in Design, Past Master of the Furniture Makers' Company

3 weeks

Thanks for sharing Dominic. Excellent. But we will be branded as holding back the sunlight of AI. Copyright has always puzzled. Pointing out the £126bn in value added to the economy, all of which rests on copyright, normally gets the rejoinder "yeah but it's not a real industry is it? No smoke stacks, no factory floors." How do we get the creative tribes (music, writing, TV/film) together and increase collective bargaining power? Just saying no seldom succeeds. When the UK cable and satellite industry started, it went to Hollywood and said: we want to buy your movies for our channels. The answer was "Given you are at the start of this industry you cannot pay any amount of money that is worth what our movies will build for you; however, we will sell to you, but selling to you is to invest and we will want more than money, and some ownership in your services, so we participate in your future growth." The Music Industry had a chance to get some ownership in MPEG3, the Studios had a chance to get ownership in HBO, and then Netflix, but they didn't. Any copyright material produced by AI should create royalty revenues, but there is a chance here to get participation beyond that.

Richard Mollet

Head of European Government Affairs at RELX plc

3 weeks

Great post Dominic. I had the pleasure of debating this with Jimmy Wales today (….name drop!….) and he - like our friends at MSFT - is convinced that the temporary copyright exception makes this all legal. It’s barmy obviously but a lot of people believe it. It is one example of where crystal clear clarity from HMG would be useful.
