Generative AI: is it a threat to the creative economy?
Aleksandr Tiulkanov
Upskill in the EU AI Act - Link in profile | LL.M., CIPP/E, AI Governance Advisor, implementing ISO 42001, promoting AI Literacy
It’s 2023 and generative AI is trending. So are the discussions on policy changes some say we urgently need. No wonder, given that ChatGPT, Midjourney, Lensa, DALL-E 2, Stable Diffusion, and the like are all over the blogosphere and the news. Compared to their predecessors, these tools are based on more sophisticated machine learning models and deliver textual and visual outputs that are significantly more human-like.
So what are the legal and policy folks currently talking about?
Three things primarily:
1) Training data which make generative AI possible. Whether AI developers are respecting or infringing intellectual property rights, given that machine learning model training necessarily involves copying protected works. And whether future policy should be more permissive or more restrictive in that regard.
2) Copyrightability of AI outputs. Whether they are protectable de lege lata (under the current law). And if not, whether they should be protectable de lege ferenda (under the new law we might desire), and to what extent, in terms of duration and scope of rights.
3) Competition between AI “prompt engineers” and traditional artists. Whether the works of the former are likely to displace the works of the latter, at least on some markets, and whether it’s fair competition.
Alongside these, there is a related discussion on whether AI developers should be required to watermark “pure AI” outputs (so-called “computer-generated works”) to tell them apart from more traditional works.
We need to act now… or do we?
I understand the sense of urgency some have, but I also believe this sense is often misplaced. Artistic trends and fashion move quickly; the law moves slowly. We may be starting a policy discussion which will, in 10 years, end up in a law that is completely inadequate for the future reality. And if we move 10 times as fast, we may just end up with a hastily crafted law that blocks beneficial innovation.
In the 1990s, Heather Meeker was pondering whether the fair use doctrine (a US invention to restrain some rights owners’ tendency to abuse copyright law) should be extended to protect Appropriationism (taking the works of others as a form of artistic expression). Even then, she argued that it is not appropriate for the law to concern itself with particular artistic trends. Notably, Appropriationism as an artistic movement was becoming less and less fashionable at the time.
Now, generative AI has brought us a new form of Appropriationism. ChatGPT, Midjourney, Lensa and the like have provided new opportunities for imitating the work of others. Unlike before, it is often not the concrete works that are being copied or recontextualised, but the manner of expression, the style: the features and essence of what it is like to paint like Picasso or to speak like Jordan Peterson.
It is therefore understandable that some people may want to rush in with proposals to urgently deal with what they perceive as looming iniquities.
Artistic works, they may persuasively argue, are a continuation of the creator’s personality. So an artist may feel violated if someone commands an AI to make a work in that artist’s manner, his or her particular style, and then tries to market this work, potentially capturing an audience which may have been that artist’s.
Imitation, so what?
Is this really something new? For centuries, artists have been imitating one another. It was, and still is, a thing to belong to a particular fine arts school where you cannot readily tell whether a given work is by one particular painter or by someone else from the same artistic group.
Moreover, innovation is often based on repetition with alteration. Take, for example, Van Gogh’s paintings after Millet.
Would you say that Van Gogh should have been, under the current copyright regime, judged as infringing on Millet? For the benefit of the public and of progress in the arts, we had better answer in the negative here. And if the prevailing interpretation of the law were to prevent this, we would be better off reaching a new interpretation under which Van Gogh would have been within his rights to borrow from Millet as much as he needed to become Van Gogh.
Yes, it took Van Gogh many years to achieve mastery in what, in today’s digital content world, might as well be achieved by applying a few filters over the original work in image editing software or by cleverly wording a prompt for an AI image generator. But OK, progress happens.
Anyway, from a legal standpoint it might not matter that much how you managed to copy from another – by spending years in an art studio or by clicking a few buttons, as we’re assessing the end result here.
What about markets?
If the end result is such that the works of an artist are displaced from a market now flooded by AI-generated imitations, it is definitely bad for that particular artist. But is it necessarily bad for the market and for society at large?
Actually, one may argue that true works of art are not easy to imitate. As Leslie Kurtz has put it:
Indeed, the greater a work of art, the more stubbornly it resists simple explanation and the more difficult it is to abstract from it that which makes it unique.
While copyright is traditionally said to protect only expression, true works of art may be said to be valued not only for their particular expression but equally for the underlying original ideas. In a masterpiece, you value the combination of expression and idea and cannot readily divide the two.
And so, if an unsophisticated consumer sees no difference between an AI-generated imitation and the original work from which a style, or an idea, or both have been borrowed, this consumer might never have been in the market for the original work in the first place. He might have presumed he was. But in fact, from the start, he might have been in the market for cheap imitations.
Likewise, someone is unlikely to commission a new incarnation of Beethoven’s Ninth Symphony if his needs are simple enough to be satisfied by licensing some classical-sounding Muzak to entertain customers in his convenience store.
Here’s another, more personal example. At one point, I had an avatar here on LinkedIn which cost me a photo shoot and a reasonably high fee paid to Dmitry Ternovoy, a photo artist who has also shot covers for Forbes magazine. I still like the work, and I will order another photo shoot from him when I happen to be in the same city as he is.
But for the time being, as the old photo was becoming, well, old, I chose to update the avatar with one created by Lensa. I paid a few euros for a set of 100 AI-generated avatars based on 10 amateur selfies of myself, and there were maybe a couple of decent outputs which I liked, including the one you now see on my LinkedIn profile (as of January 2023). They don’t approach the quality of that original Forbes-level photo I had before. But I just wanted something quick, for a time, and I got it.
I still have a longer-term interest in commissioning a new photo shoot, and I will pay another reasonably high fee for the real quality and, more importantly, for the particular skill and style of that photo artist. I have developed a bit of a personal connection with him, and I want to support him financially when I can. I want him to continue creating and to benefit society (and my LinkedIn profile, of course).
So his work is not likely to be displaced by AI avatars. They circulate in two different markets.
The same is true for contemporary painters. I follow several of them quite closely, and one in particular: Alexei Chaykasov. In late 2020, I was impressed enough by one of his works to purchase it just a few hours after he posted it on Instagram. I had, again, developed a personal connection with him and a sympathy for his works after meeting him at an art exhibition earlier that year. I enjoy his style, his narrative, his story, and I want to motivate him financially to continue creating.
His works, again, in my eyes, are never going to be displaced by AI-generated images. They are not in the same market.
And what about monetisation?
We lawyers are so used to copyright… well, existing, that we sometimes tend to be blind to other possible, and sometimes preferable, ways to “promote the progress… of useful arts”.
Yes, copyright was instrumental in securing that progress, and capitalism favoured the commodification of art and a monetisation scheme in which the exploitation of a commoditised art object became the primary source of an artist’s revenues. As Peter Jaszi has noted:
In capitalist economies, one function of the market is to assure the public distribution of commodities, but the discipline of the market extends to the private attributes of individuals (personality, emotion, sexuality, artistic self-expression) only to the extent that these can be effectively commodified.
But nowadays not all creative market players stick to commoditisation and copyright as the preferred monetisation vehicle.
Some creators do monetise intellectual property rights, but others depend primarily on income from live performances, merchandise, donations and subscriptions. They may think about selling tickets and increasing their subscriber count on Patreon, OnlyFans, and what have you. But they may not necessarily think about the Copyright Office or about the derivative works which may appear almost instantly after a live performance ends or the content is published.
What might be more valued by consumers who actually pay for something in these new creative markets is verifiable and traceable originality and a connection with the creator’s personality. Copyright and royalties might be of lesser relevance, or of none at all, as means of talent monetisation there.
So, regardless of whether copyright exists and is enforced, an increasing share of artists secure and earn commissions and subscription revenues.
No hasty policy changes, then?
All of the above suggests that we might not want to rush with policy proposals to address what we could see, in the first instance, as an AI-induced threat to the creative economy.
Proposals to consider granting sui generis rights to AI users such as prompt engineers (in the style of neighbouring rights) may not be that relevant, considering the rising trend of creators becoming less and less dependent on intellectual property rights for talent monetisation.
Maybe leaving purely AI-generated outputs uncopyrightable is a good thing, and maybe the interests of the businesses which choose to exploit such works could still be well protected, regardless of copyright, by means of statutes and case law prohibiting unfair competition, such as the French tort of parasitism and the English tort of passing off. Maybe mandatory watermarking of AI outputs would also not be that relevant.
Or maybe not entirely. Let’s wait and see, perhaps as early as this year.
And what if we ask ChatGPT?
In the meantime, what I definitely would not recommend to anyone is relying on generative AI for legal and policy advice on how we should do things.
Because, as it seems, generative AI may produce falsehoods, even if unintentionally, in defence of its developers. To test this, I undertook a quick experiment.
I asked ChatGPT to suggest some case law on style imitation: the thing which might, theoretically, although normally shouldn’t, get you into copyright trouble if you ask an AI generation tool to produce a work “in the manner of” someone.
ChatGPT was happy to oblige and provided a couple of citations, with very neat summaries of what was supposedly the crux of the adjudicated cases and of the courts’ legal reasoning.
The summaries were so persuasive that at first I thought: Wow! Is it really the case that I can now skip the drudgery of sifting through case law, navigating Westlaw, Lexis and what have you, to find out exactly what the state of jurisprudence is on a particular issue?
But as I, being a proper lawyer, delved into the cited cases, I realised that ChatGPT was just, well, lying.
The second cited case didn’t concern the topic of style imitation at all. The first one did, but the verdict reached was the opposite of what ChatGPT suggested: there was a finding of infringement based on the copying of substantial elements of the work in question (not to mention that the infringing work was a sculpture, not a painting, as ChatGPT wrongly stated).
So, as of today at least, I would not recommend trusting the summaries ChatGPT produces for you on legal matters. You can take its outputs as a basis for further research, but never at face value, because the tool seems to be a proficient producer of persuasive prose without regard for the truth value of what is being said.
So what's the takeaway?
Generative AI has arrived, but it is unlikely to be a threat to the creative economy. It may be disrupting a market, but not necessarily the market where successful modern creators and performers with solid fanbases operate. If you create and use generative AI outputs, make sure you don’t run afoul of applicable copyright laws and of rules prohibiting unfair competition, parasitism and passing off. And if you’re not a lawyer, don’t trust generative AI to give you correct legal information: double-check everything or, better yet, ask a legal professional.
If you like my work, please support me on Patreon.