Jul 18, 2025 4 min read

Anthropic’s ‘fair use’ win in AI copyright case could turn into a trillion dollar loss

Anthropic had a big win in court last month when a US judge said AI training was fair use. But that ruling only applies when an AI company copies legitimately sourced works, and Anthropic copied millions of pirated books. Now the lawsuit has become a class action, meaning the authors of all those books could be due damages.

AI company Anthropic recently scored a big win in a copyright battle with three authors when - in a landmark ruling - US judge William Alsup said that training a generative AI model by making copies of legitimately sourced books was ‘fair use’. However, that big win could yet turn into a trillion dollar loss, because Anthropic also copied millions of illegitimately sourced books when training its Claude AI model.

Not only did Alsup say that the fair use defence likely doesn’t apply to the pirated books - meaning that element of the litigation is proceeding - but he has also just upgraded this legal battle to a class action lawsuit. Which could mean that Anthropic ends up having to pay damages to the authors of every one of those illegitimately sourced books, of which there were about seven million.  

That seven million might include multiple copies of the same book - and damages would only be due once per book - plus there are other criteria authors will have to meet to participate in this class action. But, under US law, copyright owners can claim statutory damages of up to $150,000 per infringement. If there were seven million infringements, that’s potential damages in excess of one trillion dollars. 
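As a rough sketch of that arithmetic - assuming, purely for illustration, that all seven million works qualified and each attracted the statutory maximum, which real litigation almost certainly would not produce:

```python
# Back-of-envelope estimate of the maximum theoretical exposure described above.
# Both figures are illustrative upper bounds, not a prediction of actual damages.
WORKS = 7_000_000                         # approximate number of pirated books
MAX_STATUTORY_DAMAGES_PER_WORK = 150_000  # USD, US statutory ceiling for willful infringement

potential_exposure = WORKS * MAX_STATUTORY_DAMAGES_PER_WORK
print(f"${potential_exposure:,}")  # $1,050,000,000,000 - just over one trillion dollars
```

In practice statutory damages are awarded per work within a wide range, duplicates are stripped out, and not every author will meet the class criteria, so the real figure would be far lower - but the ceiling is what creates the negotiating pressure the article describes.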

Alsup says that this lawsuit “exemplifies the classic litigation” that deserves class action status because, while just three authors were involved in the original lawsuit, it will be pretty “straightforward” to prove that millions of authors were negatively impacted when Anthropic “violated the Copyright Act by doing Napster-style downloading of millions of works”. So those millions of authors should also be able to benefit from any positive outcome for the rightsholders in this case.

Needless to say, Anthropic “respectfully disagrees”. It would obviously prefer to deal with just the three original authors involved in the lawsuit. A spokesperson told Law360 that the judge failed to consider the “significant challenges” involved in confirming who actually controls the rights in each of the millions of books the AI company pirated, and that challenge alone should have stopped him from granting this lawsuit class action status. “We are exploring all avenues for review”, the spokesperson added. 

The legal battle between Anthropic and the authors is one of numerous lawsuits currently working their way through the US courts in which an AI company is accused of copyright infringement for using existing works to train generative AI models without getting rightsholder permission. A group of music publishers have also sued Anthropic, while the record companies have gone after Suno and Udio.

In all these cases the technology companies claim that AI training is fair use under US law, which means they can make use of existing works without getting permission from any rightsholders. Meanwhile copyright owners argue that AI training is never fair use. Which means all these cases swing on how fair use is defined in the context of AI.

The judgement in the authors v Anthropic case was one of the first big rulings in this domain, with Alsup concluding that AI training is fair use, because that use is “spectacularly transformative” - in that the content generated by Claude is nothing like the content used during the training process. 

Whether or not a use is transformative is a key factor when assessing the fair use defence in any copyright infringement action under US law. 

That was a big win for Anthropic, but there was a big proviso. The AI company had bought and copied physical books as part of its training processes, but before that it downloaded millions of ebooks from unlicensed sources. Alsup said that fair use only applied if the books had been sourced legitimately. So the authors’ lawsuit in relation to the pirated books is proceeding. Now as a class action. 

Anthropic presented two arguments for why making this a class action would be impractical: first, that it would be hard to identify which specific books had been used; and second, that it would be even harder to confirm who owned the copyright in those books.

But Alsup disagrees in relation to all but one of the sources of pirated books that Anthropic utilised. He noted that the AI company had separately sourced metadata to help it identify what books it had pirated, which included the unique identifier for books - the ISBN - or Amazon’s own book identifier - ASIN. 

Meanwhile in the US there is a copyright registration process that should help identify who wrote and owns the rights in each pirated book. Therefore, while identifying all the books and all the rightsholders will be a big task, it’s not an impossible or unreasonable task. 

As a result, the authors and rightsholders of any books that Anthropic pirated via the LibGen or PiLiMi platforms can be part of the class in this case, providing their books have an ISBN or ASIN, and their works were registered with the US Copyright Office. 

Obviously creators and copyright owners - including those in the music industry - would prefer it if the US courts ruled in no uncertain terms that AI training is never fair use, meaning any AI company making use of existing content to train their models would have to get rightsholder permission and pay licensing fees. 

However, if judges rule that AI training is sometimes fair use but, in some scenarios, it is not - and if there is the prospect of scarily high damages when the fair use defence fails - that might be enough to empower rightsholders to force AI companies into negotiating licensing deals. 

And while last month’s ruling in this Anthropic case in theory favoured the AI company, in reality, it is currently providing enough uncertainty and risk - both for Anthropic and other AI companies fighting similar cases influenced by this one - that creators and copyright owners might just get their way.
