Cultural Policy and Generative AI: The Classic Copyright Dilemma
Hamidreza Sheshjavani, PhD
Researcher and Advisor in Cultural Economics and Cultural Policymaking
Although the first wave of broad public attention to the artificial intelligence (AI) revolution came from the cultural sector, as with the DeepMind Challenge Match in 2016, the prevailing opinion was that AI would not affect culture and the arts anytime soon. That assumption proved incorrect. This is probably why, about a month ago, the Cultural Policy Hub at OCAD U hosted a significant policy roundtable on critical issues in developing policy and regulatory frameworks in response to the rapid adoption of generative AI (GAI) tools in the cultural and creative industries (CCIs). One of the topics discussed was how to shape regulatory frameworks that ensure adequate copyright protection and safeguard the future of labourers in the CCIs. Following this roundtable, I would like to share some ideas on the topic.
The more training content, the better the generative results.
AI's capabilities are built on massive collections of images and texts, traditionally gathered by web crawlers. Several well-known AI companies use the LAION-5B dataset, which contains almost six billion captioned images compiled through indiscriminate web scraping and includes a substantial number of copyrighted works. The clash between technology and copyright law is not new. In 2005, the American Authors Guild sued Google for digitizing tens of millions of books and scraping their texts to build Google Books; in 2015, the court concluded that Google's actions constituted "fair use."
It appears that large AI companies today use copyrighted works, without permission or payment, as training data for their models, relying on the principle of fair use. This practice has opponents even among senior employees within the AI industry. In late 2023, for instance, Ed Newton-Rex, the VP of Audio at Stability AI, resigned, stating that "companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works." He argued that "in a society that has set up the economics of the creative arts such that creators rely on copyright," this cannot be considered fair use.
Several lawsuits now before the courts turn on whether copyright infringement has occurred: The New York Times v. OpenAI and Microsoft, Tremblay v. OpenAI, Inc., Chabon v. Meta Platforms, Inc., J.L. v. Alphabet Inc., Andersen v. Stability AI Ltd., and Getty Images v. Stability AI.
Navigating the New Frontier
AI tools have been rapidly embraced within the cultural and creative industries. Agnieszka Kurant, a Polish artist, collected signatures from the Massachusetts Institute of Technology community and the surrounding city and used them to train a generative AI to create signature-like neon installations, one of which was projected onto the Guggenheim Museum's facade at nightfall. A few months ago, Canadian singer Grimes tweeted: "I'll split 50% royalties on any successful AI-generated song that uses my voice. Feel free to use my voice without penalty." Despite such embraces, concerns are rising for "AI-exposed" professions such as writers, designers, and composers. You may have heard about last year's five-month strike by Hollywood writers protesting the implications of AI for their jobs, widely seen as a canary in the coal mine. They argued that although existing AI tools appear to be in their early stages, studios producing long-running drama series with established characters could use AI to replace writers, or force them to polish and rewrite AI-generated content for much lower wages.
Singers worry that GAI will diminish the uniqueness and value of their work by mass-producing pieces that convincingly mimic their style, a feat few humans can achieve. The entertainment industry relies heavily on the attention economy, in which content viewed on algorithmically driven social media platforms is easily monetized. A few months ago, a college student garnered more than 13 million views on TikTok for an AI-generated rendition of Drake, Kendrick Lamar, and Kanye West singing "Fukashigi no Carte," the theme song of a well-known anime series. Indeed, the power to manipulate creative content is astonishing even now: features such as de-mixing music tracks, deploying them on streaming services, and collecting and analyzing listener feedback can achieve results that surpass human capabilities.
Studies show that digitization has done little to change copyright revenue patterns: a few creators earn high royalties, while most earn little from their copyrighted works. Given the unique characteristics of the artists' labour market, even a small drop in income can be critical. AI tools enable broader, faster, and more sophisticated copyright infringement, exacerbating the issue. Consider, too, the quandary of identifying the author or rights holder of AI-generated works.
Beyond the Monkey Selfie
In 2018, a U.S. court issued a final ruling in the case concerning the copyright of the famous "monkey selfie" photographs. The court upheld the United States Copyright Office's position that works created by non-humans are not eligible for copyright protection. The decision seems straightforward, but outcomes may vary with the level of human involvement in the creative process. Agnieszka Kurant's work, created with the direct participation of generative AI, was not only celebrated but exhibited at one of the world's most prestigious art museums, and few in the art community would likely question its copyright. However, following the controversial Kashtanova case last year, the Copyright Office updated its registration guidelines to state that when AI "determines the expressive elements of its output, the generated material is not the product of human authorship."
If the human contribution goes beyond a mere idea, can copyright protection be claimed for that contribution? And assuming an AI output is copyrightable, to whom does the right belong: the user, the developer, or the company providing the AI tools?
Are AI-generated outputs derivative works? If so, what about the rights of the original artists, and how are those rights calculated?
What happens in case of a violation? A recent study showed that widely used models such as Stable Diffusion sometimes reproduce their training data. If infringement is proven, who should be held accountable? Under current doctrines, both the AI user and the AI company could potentially be liable, and statutory damages can reach $150,000 per work. But how could an AI user know that the response to their prompt reproduces copyrighted material?
Shaping the Future: A Few Suggestions
· First and foremost, we must recognize that in an emerging world where each sector is part of an integrated socio-technical system, AI will drastically change the fields of culture and art. Interestingly, artists have embraced these changes even more readily than policymakers. The Mauritshuis Museum's display of an AI-generated work, the Museum of Modern Art's epic AI-generated installation, and the AI-assisted revival of John Lennon's voice for the release of "Now and Then" are notable examples of this shift. From a cultural economics perspective, some of GAI's effects on the artists' labour market are highlighted here.
1. This market has historically faced an excess supply of artists. With AI tools lowering barriers to entry, the problem is expected to intensify in the short term. Consequently: a) the average income of artists will decrease; b) multiple job-holding among trained artists will rise; c) the income penalty for working as an artist will increase; and d) artists will face longer periods of unemployment.
2. The distinguishing economic feature of art products is high fixed cost and low marginal cost. With AI automation and job displacement in roles such as translation, voice acting, and animation, however, time to market and production costs fall significantly. This results in: a) increased competition among existing cultural labourers; b) pressure on artists to adapt and reskill to stay relevant; c) a shift in the nature of artistic work toward oversight and curation of AI-generated content; d) intensified competition for stardom, with gatekeepers becoming more important; and e) strong growth in the market for personalized goods amid the flood of AI-generated content.
3. Web scraping and free-riding on other people's content to train GAI models are rapidly reducing the amount of content freely available on the open web. Content producers lose the motivation to share work they feel will be used by GAI to compete against them, so they lock their data behind logins or paywalls to keep it out of training sets. This creates two problems: it limits public access to content, and it slows the progress of GAI models.
4. Another economic characteristic of art is the "cost disease," or Baumol effect, which arises from the inability to substitute technology for labour in production, particularly in real-time, in-person contexts. With the development of AI tools, the cost disease in the arts may be alleviated. Because the cost disease has long been a central argument for subsidizing the arts, governments may in the future reduce their funding for them.
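Baumol's logic can be sketched in a standard two-sector form (a textbook formulation added here for illustration; the notation is mine, not from the roundtable). Unit cost in sector $i$ is the wage bill per unit of output, so its growth rate is wage growth minus productivity growth:

```latex
c_i = \frac{w L_i}{Q_i}
\qquad\Longrightarrow\qquad
\hat{c}_i = \hat{w} - r_i
% hats denote growth rates; r_i is productivity growth in sector i

% If wages economy-wide track productivity growth r_p in the "progressive"
% sector (\hat{w} = r_p), the stagnant (live-arts) sector's unit cost
% rises without limit:
\hat{c}_s = r_p - r_s > 0 \quad \text{whenever } r_s < r_p
```

In this framing, AI tools raise $r_s$, shrinking the gap $r_p - r_s$; as the gap closes, the relative cost of producing art stops rising, and with it the classic cost-disease rationale for subsidy weakens.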
· The next issue is who controls artificial intelligence and what the consequences are for the culture sector and copyright. Investing in disruptive technologies is generally high risk; governments and large companies pay for the research and development and are the first to benefit from it. Given AI's proven capabilities in healthcare, finance, education, manufacturing, and other sectors, competition to capture its gains is intense. Since Putin said in 2017 that "the nation that becomes the leader in artificial intelligence will rule the world," the competition has only intensified. Elon Musk's response to this statement was also noteworthy: "It begins..."
The European Parliament's report on IPR for the development of AI technologies states that it "considers that IPR for the development of AI technologies should be distinguished from IPR for content generated by AI; stresses the need to remove unnecessary legal barriers to AI development in order to unlock the potential of such technologies in culture and education."
Now that artificial intelligence has become a pervasive tool for advancing knowledge in physics, economics, genetics, the social sciences, and the military industries, governments are unlikely to let copyright become an obstacle to its development. One need only look at the World Intellectual Property Organization (WIPO) reports on the share of copyright industries in GDP to see how the stakes compare.
Given that AI development depends on the content used to train it, governments will try to balance copyright interests against technological innovation. It is evident, however, which way the balance will tilt.
· The sociologist William F. Ogburn coined the term "cultural lag," which helps clarify the complexity of the copyright issue in the age of AI. The term refers to the notion that non-material culture takes time to catch up with material culture, such as technological innovation. Problems arise when material culture changes so rapidly that society cannot prepare for or adjust to it: non-material culture (laws, customs, and norms) is the slower-moving part of society and cannot change quickly, sometimes resulting in maladjustment.
AI has disrupted concepts, business models, and relations of production, but the rules governing these changes lag behind. In societies with heavier bureaucracies, such as Canada, the laws governing the culture sector experience even greater delays. According to Ogburn's theory, this lag can mean lost benefits and confusion. For example, where interests conflict between the culture sector and faster-adapting parts of society, the culture sector may lose out; the faster a sector adapts, the better prepared it is for future developments. This is why one of the guests at the Hub meeting highlighted the significant lobbying power of the tech sector.
To overcome this delay, proactive measures are necessary, including:
1. Forming a working group of associations, companies, artists, and AI activists in the cultural industries.
2. Establishing international contacts with scientific, legal, policy, and academic centres.
3. Forming research groups and engaging local and national policymakers with the results of these groups' efforts.
4. Having AI stakeholders attend meetings with local or national governments, with a written program of proposed actions and decisions.