A bunch of tech sector trade bodies, campaign groups and law professors have piled in behind AI company Anthropic in its legal battle with the music industry.
In various court filings they insist that AI training definitely constitutes fair use under US copyright law, and therefore the judge overseeing this dispute should throw out the copyright infringement lawsuit filed against Anthropic by music publishers ABKCO, Concord and Universal Music Publishing.
The fair use doctrine “exists to ensure that copyright’s limited monopoly does not stifle the very creativity which that law is designed to foster”, says one of the filings, submitted by the Computer And Communications Industry Association, among others. And training a generative AI model like Anthropic’s Claude is “a paradigmatic application” of that principle. Which sounds rather grand.
If AI training is fair use, AI companies do not need to get permission from copyright owners like music publishers to make use of existing works when training their models.
The music industry, and the wider copyright industries, have long insisted that AI training is not fair use, though their specific legal arguments to back up that claim have evolved over the last year, and these filings tackle the revised arguments. Which makes what happens next in this case important - because it will give us an indication as to whether the music industry’s latest arguments against fair use will work.
In its own filing, the Electronic Frontier Foundation cautions the court against accepting novel legal arguments against Anthropic’s fair use defence backed up by doom and gloom predictions from copyright owners - ie “AI will kill human creativity” - insisting we’ve been here before.
In the 1970s, Hollywood studios made novel copyright arguments in court when they were panicking about the impact home video recorders would have on the film and TV industries. But, the EFF explains, the US Supreme Court found against the studios, “noting that where copyright law has not kept up with technological developments, courts should be careful not to expand copyright protections on their own”.
That ruling - in what became known as the ‘Betamax case’ - “unleashed decades of technological innovation”, the EFF says, including “the emergence of a vibrant video rental market” that opened up a whole new mega revenue stream for the film studios. And what the Betamax ruling definitely did not do was destroy the movie and TV industry, as the studios had predicted.
The music publishers’ fair use analysis in this case, the EFF claims, does precisely what the Betamax judges “counseled against”, in that it “embraces a brand-new theory of copyright, based in substantial part on hyperbole and speculation about the displacement of ‘human artistic creativity’”.
Law professors Rebecca Tushnet and Edward Lee - from Harvard Law School and Santa Clara University School Of Law respectively - agree with the EFF. In arguing against Anthropic’s fair use defence, they say, the music publishers now seek to “radically expand their copyrights” with a “new theory of ‘dilution’ to encompass entire genres of songs”.
That theory basically says that Anthropic’s AI training - which involved copying the publishers’ lyrics - is not fair use because Claude outputs new lyrics that compete with the publishers’ existing lyrics, not directly but generally. As more AI-generated songs enter the system and start to pull money out of the streaming royalty pool, there will be less money for the human-created songs used to train the model.
The risk of market dilution is one of the key considerations when deciding whether or not the use of a copyright protected work is fair use. For example, if an artist incorporates an existing artwork into their new artwork, will that negatively impact the market value of the original work? If it does, that would weigh against a fair use defence and the artist likely needs a licence to use the existing work.
Although Claude has been known to output lyrics that are exactly or substantially similar to existing lyrics, Anthropic argues that this is both rare and not meant to happen. However, Claude can output entirely new lyrics that compete with the publishers’ existing lyrics - not directly, but generally, in that one group of lyrics - AI-generated - competes with another group - human-created.
Is that sufficient market dilution to weigh against Anthropic’s fair use defence? A judgment last year in another AI copyright case involving Meta suggested that it could be, even though that ruling actually favoured Meta and its fair use arguments.
But Anthropic strongly disagrees, as do Tushnet and Lee, who see this ‘general’ market dilution argument as the publishers seeking protection for whole ‘genres’. And “stretching copyright to encompass genres via ‘dilution’ is unconstitutional”, they write, and is fundamentally flawed as an argument because “copyright law favours competition from new, non-infringing works that copy no protected elements”.
Trade groups representing the music industry - and the publishers and authors of books, journals and newspapers - previously submitted filings in support of Universal, Concord and ABKCO.
They told the judge that the real question in this case is “whether a for-profit multibillion-dollar company should be able to systematically copy human-authored works without permission and use those works to train AI models to generate content that displaces the works so taken”. The answer, they said, is “no”.
Backed by those so-called amicus briefs from their supporters, both Anthropic and the music publishers now want the judge to make summary judgements in their favour.
The judge’s decision should give us a steer on where the big ‘fair use and AI debate’ is heading next, and whether that direction of travel favours AI companies or the copyright industries.
There are dozens of lawsuits between copyright owners and AI companies in the US, most of which centre on the fair use debate, and all of which could be subject to appeals that could take years. However, the direction of travel in the courts is important even if early judgements can be challenged on appeal.
Because the more likely it looks that rulings will ultimately swing against AI companies - resulting in them having to pay billions if not trillions in damages to copyright owners - the more likely those companies are to enter into licensing negotiations, like those we’ve seen between music AI companies and the major labels and publishers in the last year.
But if it looks like the direction of travel is more likely to favour big tech and AI, then those companies may be more willing to take the risk of allowing lawsuits to proceed in court and walk away from licensing talks without any deal in place. Which is why so many organisations look to intervene in cases like this one.