A US judge has reinstated all the copyright claims against Anthropic in the lawsuit filed by the Universal, Concord and ABKCO music publishing companies, ruling that the AI business’s own safety features prove it knew its users were generating copyright-protected lyrics via its Claude chatbot. 

Judge Eumi K Lee ruled yesterday that revised arguments submitted by the publishers are strong enough to allow all of their copyright claims against Anthropic to proceed, despite having previously dismissed all but one of those claims. This time she knocked back Anthropic’s motion for dismissal.  

In a new ruling, Lee writes that “the court has carefully reviewed the parties’ briefs, the operative complaint and the relevant authority”, and concluded that the music companies’ revised claims are both “plausible” and “sufficient” and can therefore proceed. “As a result, the court denies Anthropic’s motion to dismiss”. 

The music companies are presumably hoping they can force Anthropic into a lucrative settlement deal in this dispute, the AI business having raised $13 billion in investment last month, and then agreeing to a $1.5 billion deal to settle a separate copyright lawsuit filed against it by a group of book authors. 

But in the meantime, the publishers will be pleased that all their claims against Anthropic are back on the table, after their lawsuit was previously trimmed down by Judge Lee back in March. They accuse the AI company of direct, contributory and vicarious copyright infringement, and of illegally removing copyright management information from digital files during its AI training processes. 

The direct infringement allegation relates to Anthropic including unlicensed lyrics in its AI training dataset. As with most of the copyright lawsuits against AI companies, that allegation centres on whether or not copying existing content for the purposes of AI training constitutes fair use under US law. 

But before it even got into any arguments with the publishers about fair use, Anthropic asked the court to dismiss the contributory and vicarious infringement claims, and the allegations relating to copyright management information, on the basis those allegations were flawed. The judge did just that back in March, but gave the publishers the option to revise and resubmit those claims, to make them less flawed. 

The contributory and vicarious infringement claims relate to content outputted by Anthropic’s Claude chatbot. The publishers say that - when prompted in a certain way - Claude will output their copyright-protected lyrics, so there is copyright infringement at the output stage. 

Technically it would be the Claude user directly infringing the copyright in those lyrics, but, they argue, Anthropic was knowingly facilitating that activity - which is the contributory infringement - and profiting from it - which is the vicarious infringement.  

Earlier this year, Lee ruled that the publishers had failed to show who was prompting Claude to output the publishers’ lyrics or that the AI company knew - or should have known - that was happening. 

In their revised lawsuit, the publishers point out that - even before they launched their legal action - Anthropic developed ‘guardrails’ to restrict Claude from being used to output existing copyright-protected lyrics, which implies the company was aware of the copyright issues on the output side. 

The judge summarises that claim in her ruling, stating that the “publishers argue that the court can infer that Anthropic had ‘actual or constructive knowledge’ that Claude users infringed specific lyrics based on Anthropic’s implementation and development of ‘guardrails’”. That claim, Lee adds, is plausible. 

Regarding the claims around copyright management information, or CMI, the publishers needed to plausibly allege that Anthropic not only removed that information from digital files used in its training, but that it did so knowing that CMI removal would help conceal its infringement of copyright. 

Based on the publishers’ arguments this time round, and having reviewed the relevant legal precedent, Lee concludes that the publishers have now “plausibly alleged that Anthropic had the requisite scienter”, which is legal speak for the required level of knowledge that an action was probably illegal. 
