Jun 26, 2025 4 min read

First Anthropic, now Meta: another judge accepts fair use defence in AI training - but there’s a sting

When it was sued by a group of authors for using their books in AI training without permission, Meta used the fair use defence. A judge has accepted that defence but says that in many cases AI training probably isn’t fair use and he provides some arguments rightsholders could employ in similar cases


After Anthropic’s fair use victory in an AI copyright legal battle earlier this week, another judge has now ruled in favour of Meta, which also employed the fair use defence in response to a lawsuit filed against it by a group of authors - including Sarah Silverman and Richard Kadrey - over its unlicensed use of their books when training generative AI model Llama. 

However, this ruling comes with a word of caution for AI companies more generally, with Judge Vince Chhabria warning that - in most cases - AI training probably isn’t fair use, before going on to provide a guide for how copyright owners might go about winning lawsuits against AI companies in the future. 

It all comes down to market dilution, he says - that is, the negative impact of generative AI on the ability of human creators to make money from their work. 

There are various criteria under US law for assessing whether or not something is fair use - the principle that allows third parties to make use of copyright protected works without getting rightsholder permission. 

That includes the extent to which the ‘use’ is transformative, so that the output is significantly different to the original work. Generative AI is very transformative. Or, in the words of that Anthropic judgement earlier this week, it’s “spectacularly transformative”. 

Chhabria agrees with that, but stresses that there are other criteria to consider, and just because your use is transformative, that doesn’t mean you are “automatically inoculated” from a claim of copyright infringement. Indeed, when considering fair use, the potential “harm to market” caused by the use is “more important” than whether or not the use was transformative, he states. 

And that “harm to market”, or market dilution, might be because AI-generated works create more competition in the marketplace for things like books in general, rather than by directly competing with specific existing works that were included in a training dataset. 

In other words, while a single work may or may not directly suffer market dilution from any one AI-generated work, if the ‘transformative works’ created using AI have a negative effect on the overall market for books, then that may torpedo AI companies’ fair use defence. In essence, market dilution can be assessed on a macro level rather than a micro level. 

“What copyright law cares about above all else is preserving the incentive for human beings to create artistic and scientific works”, Chhabria writes in his ruling. And while the fair use principle means people can sometimes make use of copyright protected works without getting permission, it shouldn’t hinder that fundamental objective. 

As a result, Chhabria continues, “typically” fair use doesn’t apply to copying “that will significantly diminish the ability of copyright holders to make money from their works”, because if that happens it “significantly diminishes the incentive to create in the future”.

Because generative AI models will likely flood the market with new content, the judge concludes, when AI companies use existing works to train those models they are doing so in order to create something that will “dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way”.

Which means that the fair use defence in copyright disputes between AI companies and rightsholders can be defeated by solid market dilution arguments. But that didn’t happen in this case, because the arguments the authors who sued Meta put forward on the market dilution point weren’t strong enough.

They focused on the fact that Llama might output short extracts of the original books, and on the claim that Meta’s use of their works without securing a licence is preventing them from entering into licensing deals with other AI companies, which are less likely to want to enter into commercial agreements with the authors when a major rival like Meta is getting a free ride. 

But Chhabria says these are not good arguments. Instead the market dilution angle that is “most promising” is that Llama “can generate works that are similar enough”, whether in subject matter or genre, “that they will compete with the originals and thereby indirectly substitute for them”. Which is to say, focus on market dilution in the macro way outlined above. 

Some legal experts would argue that this is an over-reach of the market dilution factor. Which is to say, for Llama to be negatively impacting the market for the suing authors’ books, the AI would have to be outputting works that directly compete with those books, basically covering the same topics or telling the same story. Which it does not. Which might be why the authors didn’t dwell on this point. 

However, Chhabria believes that the market dilution factor can be considered more widely when considering fair use claims. “Similar outputs, such as books on the same topics or in the same genres, can still compete for sales with the books in the training data”, he writes. 

“And by taking sales from those books”, he continues, “or by flooding stores and online marketplaces so that some of those books don’t get noticed and purchased, those outputs would reduce the incentive for authors to create - the harm that copyright aims to prevent”. 

The judge’s strong suggestion is that, if the authors had focused on demonstrating that kind of market dilution, Meta’s fair use defence could have failed, however transformative its AI model may be. 

There are numerous copyright lawsuits against AI companies working their way through the US courts that hinge on the fair use defence, including the major labels’ litigation against Suno and Udio, and the lawsuit filed by a group of music publishers against Anthropic. 

Copyright owners would obviously prefer it if judges ruled that AI training is not fair use in any circumstances. However, judgements like this one are still valuable for copyright owners, partly by suggesting legal arguments that could work for rightsholders in court, and also by ensuring enough uncertainty that AI companies might decide it’s easier and safer to just start negotiating licensing deals. 

Chhabria seems to think that is likely to happen - while adding that any doom and gloom claims that forcing AI companies to secure licences from rightsholders will scupper the evolution of generative AI technologies simply aren’t credible. After all, this is a sector led by billion dollar companies chasing a trillion dollar opportunity. 

“The suggestion that adverse copyright rulings would stop this technology in its tracks is ridiculous”, he writes. “These products are expected to generate billions, even trillions, of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it”. 
