AI company Anthropic has asked the judge overseeing its copyright legal battle with a group of music publishers to dismiss everything except one core issue: whether Anthropic committed copyright infringement by using lyrics to train its AI model without permission. In a filing that asks for what it calls ‘ancillary claims’ in the publishers’ lawsuit to be dismissed, it says that it will address that core copyright dispute in “due course”.
Those ancillary claims include Anthropic’s alleged liability for contributory and vicarious infringement. This, argue the publishers, comes about when Anthropic users output lyrics from its chatbot Claude that are very similar to those controlled by the publishers. There are also alleged violations of US law relating to the removal of copyright management information. In its latest legal filing, Anthropic says it wants to “prune away” these ancillary claims “which are facially implausible and supported by threadbare and conclusory allegations”.
What the court needs to focus on, says Anthropic, is “whether the use of copyrighted materials to extract statistical and factual context for the purpose of training generative AI models like Claude is a transformative fair use under copyright law”.
The current batch of copyright lawsuits filed against AI companies largely focuses on the allegation that unlicensed content was used in training generative AI models, and that this is direct copyright infringement. Anthropic is accused of using lyrics controlled by Universal Music Publishing, Concord and ABKCO. The AI companies counter that AI training is fair use under American copyright law, meaning no permission is required. The first of these cases to get to trial in the US will put that argument to the test.
Most AI companies are specifically relying on the defence that training AI models is a ‘transformative use’ of existing works. Guidance from the US Copyright Office notes that “transformative uses are more likely to be considered fair”, and defines transformative uses as those “that add something new, with a further purpose or different character, and do not substitute for the original use of the work”.
The transformative use defence was unsuccessfully employed in the big copyright dispute between the Andy Warhol Foundation and photographer Lynn Goldsmith over Warhol’s unapproved use of a photo she had taken of Prince in a series of artworks. The music industry hopes that the Supreme Court rejecting that defence in that case - and in doing so narrowing the definition of transformative use - will prove useful when the first AI copyright cases get to trial.
Most AI copyright cases also make other legal claims, and AI companies usually begin by trying to get those dismissed, often with some success. Anthropic has hit back at some of the ancillary claims in the music publishers’ lawsuit before, although initially its primary objective was getting the litigation moved from the courts in Tennessee - the publishers’ preferred forum - to the courts in California, which it successfully did in June.
In a new filing with the Californian court, Anthropic sets out why the publishers’ claims of contributory and vicarious infringement should be dismissed. Contributory infringement is when an entity in some way facilitates the direct infringement of another party. Vicarious infringement is when the entity profits from the other party’s infringement.
In the Anthropic case, the contributory and vicarious infringement claims relate to third parties using Claude to output lyrics owned by the publishers - or at least, lyrics very similar to those owned by the publishers. In their lawsuit, the publishers described how they were able to generate lyrics very similar to some of their most famous songs, including Don McLean’s ‘American Pie’, by providing a small number of prompts to Claude.
Previously Anthropic said that, if Claude ever did output the publishers’ lyrics, that was a bug and a newer version of the model includes ‘guardrails’ to stop that from happening. In the new filing, it also argues that there is no evidence of anyone other than the publishers ever outputting lyrics to existing songs, and when the publishers did it, that wasn’t copyright infringement, because they own the copyright in those works. If you can’t prove direct infringement has occurred, there is no case for contributory infringement.
Actually, CMU was previously able to replicate what the publishers had done in relation to ‘American Pie’. However, Anthropic says that the music companies are yet to demonstrate to the court that anyone other than them and their agents have output existing lyrics from Claude - let alone that it had knowledge of or induced that infringement, which is also required to prove contributory infringement. And without proof of underlying direct infringement, there is no case for vicarious infringement either.
The other ancillary claim is that Anthropic removed or altered copyright management information, which is metadata that identifies a work and its creator and owner. In its new filing, the AI company says the publishers have failed to demonstrate that it “intentionally” or “knowingly” removed or altered any such information.
These claims, therefore, it says, should be dismissed, “just as several virtually identical claims against other AI companies have been”. Claims in relation to copyright management information were removed from a lawsuit filed by a group of visual artists against Stability AI and others just last week.
Anthropic’s filing concludes that dismissing these “facially deficient” ancillary claims “will streamline the case and allow the parties and the court to focus their resources on a significant issue of first impression: whether it is fair use to make unseen intermediate copies of copyrighted works for the transformative purpose of training generative AI models like Claude”.