The music publishers who are suing Anthropic for copyright infringement have asked the judge overseeing the case to reject the AI company’s fair use defence and rule in their favour. 

They back up their arguments by citing the judgement in another legal battle between copyright owners and a tech company - the one involving a group of book authors and Meta - even though the judge in that case ultimately sided with Meta and said its AI training was fair use. 

Many tech companies argue that making copies of existing works to train generative AI models constitutes 'fair use' under US copyright law, meaning they don’t need permission from, or to pay any licensing fees to, copyright owners. Both Meta and Anthropic have employed the fair use defence. 

However, in the book authors v Meta case, while Judge Vince Chhabria accepted Meta’s fair use defence, he also provided a toolkit for how other copyright owners could go about successfully defeating a similar fair use claim by another AI company. 

That mainly involves demonstrating that AI-generated content is negatively impacting the value of human-created content through ‘market dilution’ - because market dilution is a key factor when assessing fair use claims.  

The Universal, Concord and ABKCO music publishing companies are now employing Chhabria’s toolkit. Anthropic trained its Claude AI model “to produce ‘new’ AI-generated lyrics that compete with and dilute the market for publishers’ human-authored works”, the three music publishers claim in a new court filing. 

“This market harm is not hypothetical”, they add. “Claude is being used to proliferate AI-generated song lyrics in huge numbers, and such AI songs created with tools like Claude are saturating streaming platforms and even climbing the Billboard charts”. As a result, “Anthropic’s actions are quintessential infringement - not fair use”. 

The publishers claim that Anthropic infringed their copyrights twice, firstly by making unlicensed copies when training Claude, and then again by having Claude output existing lyrics owned by the music companies when prompted to do so by users. 

When it comes to Claude outputting existing lyrics controlled by one of the three music companies, it’s much harder to make a fair use defence. However, Anthropic argues that Claude spitting out existing lyrics is a bug in the system rather than part of the design, and that it has put in place so-called ‘guardrails’ to stop existing lyrics from being outputted in the future. 

In their new filing, the publishers are dismissive of that defence too. “Anthropic claims that it trained Claude on publishers’ lyrics merely to ‘teach AI models to recognise language patterns’ and did not intend for the model to output those lyrics”, they write. 

But, they add, “the contemporaneous evidence from Anthropic’s records proves the opposite”. In fact “Anthropic trained Claude using publishers’ lyrics precisely so the model could respond to queries for those lyrics” and “Claude has repeatedly been put to that very use”. 

Meanwhile, Anthropic’s guardrails weren’t stopping the outputting of the publishers’ lyrics before they filed their lawsuit, and even since - with new guardrails in place - “they have still failed to comprehensively prevent output reproducing publishers’ lyrics”. 

As for the copying of the publishers’ lyrics on the input side, that’s where AI companies like Anthropic have most heavily relied on the fair use defence. And it’s where Chhabria's comments on market dilution in the Meta case become relevant. 

In his ruling, Chhabria was willing to entertain a wider interpretation of ‘market dilution’. Rather than a copyright owner having to demonstrate how the unlicensed copying of a single work had negatively impacted the market value of that specific work, the negative impact could instead be on a whole category of works that the original content belonged to - which, in this case, would be lyrics in general. 

“AI-created songs are flooding the streaming platforms and climbing the music sales charts”, the publishers say. “Streaming platforms, like Spotify, distribute revenue to music rightsholders in pro-rata shares from a fixed royalty pool”, they then explain. Therefore, “adding AI-generated works to the platform thus diverts royalties from the pool that would otherwise have been paid to human songwriters”. 

Plus, they insist, “in addition to competing with publishers and existing songwriters, the flood of AI-generated music will also ‘crowd out’ emerging songwriters and disincentivise future potential songwriters from creating new works”. 

The authors v Meta case wasn’t the only AI copyright judgement that accepted a fair use defence from a tech company, but then also helped copyright owners to defeat other fair use claims in the future.  

Anthropic was also involved in a case with a group of book authors. In that case, the judge said AI training was fair use, but only if the AI company’s original copy of any one work came from a legitimate source. Anthropic had used millions of pirated e-books. 

The music publishers also hope to use that judgement to their advantage, because Anthropic also used pirated copies of their lyrics. However, they weren’t able to add those piracy claims to this existing litigation, so instead filed a second lawsuit against the AI company back in January. 
