The deadline for making submissions to the US Copyright Office's consultation on all things AI was earlier this week and those submissions have now been made public. Unsurprisingly, while copyright owners talk a lot about “consent” in their submissions, the tech companies talk a lot about “fair use”.
"We believe”, writes Stability AI, “that training AI models is an acceptable, transformative and socially beneficial use of existing content that is protected by the fair use doctrine and furthers the objectives of copyright law, including to ‘promote the progress of science and useful arts’”.
Generative AI models pose a number of copyright questions. The big one is whether technology companies training AI models with existing content need to get permission from whoever owns the copyright in that content. The copyright industries, including the music industry, say yes, consent must be sought. But many AI companies reckon not.
That position is based on the exceptions that exist in many copyright systems, scenarios in which copyright-protected works can be used without a licence. Under US law, the relevant concept is the wider and more ambiguous one of fair use. We already knew AI firms were claiming that fair use covers the training of their models, but the new submissions to the Copyright Office set that out in much clearer terms.
Echoing the comments of Stability, OpenAI says in its submission that it "believes that the training of AI models qualifies as a fair use, falling squarely in line with established precedents recognising that the use of copyrighted materials by technology innovators in transformative ways is entirely consistent with copyright law".
And Google states that "the doctrine of fair use provides that copying for a new and different purpose is permitted without authorisation where - as with training AI systems - the secondary use is transformative and does not substitute for the copyrighted work".
Interestingly, Stability does concede that "the improper use of a person's physical or voice likeness can be problematic if it wrongfully implies a person’s endorsement of, affiliation with, or promotion of a work or idea. The improper use of personal likeness should be governed by clear rules that specify impermissible use".
The music industry has also been calling for stronger protection in this domain in the context of AI. In the US that mainly means putting in place, at the federal level, some kind of publicity right similar to those that already exist in some US states.
However, Google expresses concerns about that proposal too. It acknowledges creator concerns, but says: "Congress should be extremely cautious before enacting a federal right of publicity or anti-impersonation law ... at their core, right of publicity laws restrict speech rights and can be justified only when they are narrowly tailored to serve compelling state interests".
Submissions from the music industry present strong counterarguments regarding any reliance on fair use and any proposed restriction of publicity rights. With both sides’ positions now clearly set out, it will be interesting to see how lawmakers in the US and elsewhere respond.