AI company Anthropic has again urged the judge overseeing its legal battle with a group of music publishers to throw out allegations of copyright infringement relating to the outputs of its Claude chatbot.
The publishers submitted an amended lawsuit last month expanding on their argument that Anthropic should be held liable for contributory infringement because its users could prompt Claude to spit out lyrics they control. The judge previously rejected that argument back in March.
According to Anthropic, “despite over a year of discovery and a second chance to plead their claims”, the publishers’ argument “remains incurably defective”.
In its own court filing last week, responding to that “second chance” filing by the publishers, Anthropic says that the music companies try to argue that it “was or should have been aware of Claude’s general capacity to reproduce copyrighted lyrics” because of its “knowledge of Claude’s design and training”. Even if that was true, says Anthropic, “generalised knowledge” of the “possibility of infringement” is not enough to be held liable for the infringement.
Not only that, says Anthropic, the publishers also attempt to argue that its “automated guardrails - designed specifically to prevent infringement - somehow create actual knowledge of infringement”. But that’s not a valid argument either.
For their part, the publishers say that Anthropic’s latest filing just rehashes old arguments that were made earlier in the dispute, and fails to address new evidence they have provided.
That’s not the only back-and-forth between the music publishers and Anthropic in recent days. There is also a dispute over discovery in this case and how much data Anthropic should hand over as part of that process.
During a court hearing yesterday about those discovery issues, the publishers accused one of Anthropic’s expert witnesses of citing a non-existent academic paper in a statement she submitted to the court. Lawyers working for the publishers say that probably happened because the expert used AI to help prepare her statement.
According to Law360, attorney Matthew J Oppenheim told the court that a paper cited by expert Olivia Chen “does not exist, it was never written, it was never published”. He doesn’t think Chen “intentionally fabricated fake authority”, he went on to clarify, but rather that she used an AI that ‘hallucinated’ the non-existent paper.
Universal Music Publishing, Concord and ABKCO are all involved in the lawsuit against Anthropic. They accuse the AI company of copyright infringement on two levels. First, by copying lyrics they own into the dataset used to train Claude without getting permission first. And second, by allowing Claude to spit out those lyrics when prompted in a certain way.
Anthropic wants the second claim dismissed, so that the litigation is focused entirely on its use of lyrics in its training data, where it will employ the usual fair use defence.
It successfully convinced the judge to dismiss the claims relating to Claude’s outputs in March, on the basis that - although the publishers themselves had managed to prompt Claude in such a way that it outputted their lyrics - there wasn’t evidence that third parties were using Claude in that way.
The publishers’ most recent filing tries to provide that evidence. “Countless users unaffiliated with the publishers have prompted Claude for lyrics”, it says, and “Claude has responded by generating output that likewise copies verbatim or near-verbatim the copyrighted lyrics to publishers’ works”.
In “just a nine day period” prior to the publishers filing their lawsuit in 2023, it claims, “the term ‘lyric’ appeared in more than 170,000 Claude prompt and output records - nearly 20,000 every day”. Many of these, it adds, “are prompts by third-party Claude users seeking lyrics to publishers’ works and output by Anthropic’s AI models copying those lyrics”.
Not only that, but “key Anthropic employees explicitly contemplated prompting Claude for the lyrics to publishers’ works and discussed various lyric-related prompts”, they allege, adding that “Anthropic’s own co-founder and chief compute officer Tom Brown queried ‘what are the lyrics to ‘Desolation Row’ by Dylan?’, one of publishers’ works”.
Meanwhile, “on forums frequented by Anthropic employees”, including the Claude subreddit, “Claude users openly discuss their infringing uses of Claude”, including ways to circumvent the guardrails put in place by Anthropic to stop its AI outputting copyright protected works.
But all these new claims, Anthropic insists, are too general and non-specific to demonstrate the AI company is liable for contributory infringement. While the publishers cite a specific post on Reddit, they “do not allege that any Anthropic employee actually saw that post”. And as for Brown’s alleged Dylan-related query of Claude, the publishers “do not allege that Claude returned an output that was infringing”.
Which is why Anthropic’s legal team reckon the judge should again dismiss the copyright infringement claims in relation to Claude’s outputs.
As for the claims that Claude - or another AI - put a fake citation into one of the AI company’s expert statements in the dispute over discovery in the case, Anthropic’s lawyers criticised the other side for bringing that issue up in front of the judge without any warning.
According to Oppenheim, the academic paper cited by Chen supposedly comes from a journal called The American Statistician, and was written by Julien Dutant and Julia Staffel. Those two academics are real: one is an epistemology lecturer in the UK, the other a philosophy professor in the US. But they have never collaborated on any study, and the journal never published that paper.
Responding in court, Anthropic lawyer Sarang V Damle said, “I wish Mr Oppenheim had brought this to our attention before sandbagging us in this court”. The judge gave Anthropic’s team until tomorrow to figure out what happened, after which she’ll decide whether to “strike” the expert statement from the proceedings.