Nov 16, 2023 3 min read

Stability AI's VP Of Audio resigns over its position that training AI with copyright works is 'fair use'

The VP Of Audio at Stability AI, Ed Newton-Rex, has resigned in protest at the company's position that training generative AI models on existing content is fair use under US copyright law, meaning it does not need the permission of copyright owners.

Stability AI's VP Of Audio Ed Newton-Rex has resigned in protest at the tech company's position that the training of generative AI models constitutes fair use under American copyright law.

Explaining his decision on X, he wrote: "Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright".

The copyright industries are adamant that consent must be sought before AI models are trained on existing content, which would of course require AI companies to negotiate licensing deals. However, many AI companies argue that certain copyright exceptions apply to the training of AI models, at least in some countries, so consent is not required.

In the context of US copyright law, AI companies are relying on the concept of fair use. Copyright owners insist fair use does not apply in this context, and there are now a number of lawsuits working their way through the US courts that will put all this to the test.

Meanwhile, the US Copyright Office is undertaking a review of how AI interacts with copyright, and earlier this month it published the thousands of submissions that have been made to that review. That included submissions from various big players in generative AI who very much put it on the record that they consider AI training fair use.

Stability AI wrote in its submission: "We believe that training AI models is an acceptable, transformative and socially beneficial use of existing content that is protected by the fair use doctrine and furthers the objectives of copyright law, including to ‘promote the progress of science and useful arts’”.

Newton-Rex has been working in the generative AI domain for years, having previously founded the music AI company Jukedeck, which was ultimately acquired by TikTok. Stability AI launched its first commercial product for music and sound generation, Stable Audio, back in September, trained on licensed music.

In his statement yesterday he said he has resigned "because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’".

He noted that "there are lots of people at Stability who are deeply thoughtful about these issues", and added that he is "proud that we were able to launch a state-of-the-art AI music generation product trained on licensed training data, sharing the revenue from the model with rights-holders".

However, he confirmed, he was unable "to change the prevailing opinion on fair use at the company". And that opinion, he reckons, is wrong.

"One of the factors affecting whether the act of copying is fair use, according to Congress, is 'the effect of the use upon the potential market for or value of the copyrighted work'", he observed. "Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use".

Confirming that he still believes in the potential positive impact of generative AI in music, he added that he "can only support generative AI that doesn't exploit creators by training models - which may replace them - on their work without permission".

Concluding, he said that he doesn’t believe that he is the only person who thinks this way in the burgeoning AI industry and hopes that “others will speak up, either internally or in public, so that companies realise that exploiting creators can’t be the long-term solution in generative AI".
