OpenAI has called on Donald Trump’s administration to “weigh in where fundamental, pro-innovation principles are at risk”, by which it really means the principle of fair use under American copyright law. It believes that fair use should apply to AI training, meaning AI companies can use existing copyright protected works when training their models without getting permission from creators or copyright owners.
As well as calling on Trump to protect fair use in the US, OpenAI says that “AI creates prosperity and freedom worth fighting for” and, to that end, the US President should prevent “less innovative countries” from slowing down the development of AI by imposing their own restrictive copyright regimes on American tech companies.
The message is clear: OpenAI does not want to wait for the numerous lawsuits that centre on whether or not AI training is fair use to work their way through the courts.
Nor, it argues, should American AI companies have to deal with foreign governments restricting their operations through copyright law. And, in case you wondered, for OpenAI, the supposedly pro-AI copyright reforms currently being proposed by the UK government will still result in too restrictive a copyright regime.
We know the American copyright industries - including the music industry - will strongly oppose this position, insisting that AI training is simply not fair use, and that giving AI companies a free ride not only makes a mockery of copyright but will also cause significant damage to the US’s hugely successful media and entertainment industries.
But, insists OpenAI, Trump weighing in on the fair use debate, and putting pressure on other countries when it comes to the copyright obligations of AI companies, will deliver “heightened prosperity and greater freedom” and allow American AI businesses to out-compete the Chinese. Both are objectives that it knows will tick big boxes in Trump’s White House.
The AI company makes its copyright requests in a submission to the team currently working on an ‘AI Action Plan’ that Trump commissioned shortly after returning to the Presidency.
The submission talks a lot about the Chinese government’s ambitions in the AI domain and, while trying to talk down the abilities of the R1 AI model launched by China’s DeepSeek in January, it says the recent DeepSeek developments nevertheless demonstrate the seriousness of the threat from China.
“In advancing democratic AI, America is competing with a Chinese Communist Party determined to become the global leader by 2030”, it writes. “That’s why the recent release of DeepSeek’s R1 model is so noteworthy. Not because of its capabilities - R1’s reasoning capabilities, albeit impressive, are at best on par with several US models - but as a gauge of the state of this competition”.
Many AI companies argue that they should be able to make use of copyright protected works when training their models without getting permission from the copyright owner, even though doing so usually involves making copies of those works.
Legally speaking, the easiest way to justify that argument is to say that AI training is or should be covered by a copyright exception, probably a data mining exception, meaning permission is not required. Or, under US law, that AI training constitutes fair use.
Through the fair use principle, OpenAI’s submission claims, US copyright law “protects the transformative uses of existing works”, which is to say transformative uses of copyright works do not require copyright owner permission. This, it adds, has ensured “that innovators have a balanced and predictable framework for experimentation and entrepreneurship”.
“This approach has underpinned American success through earlier phases of technological progress and is even more critical to continued American leadership on AI in the wake of recent events in the People’s Republic of China”, it goes on.
Expanding on its argument for why AI training is fair use, OpenAI says that its models “are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights”.
This means, it adds, “our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works”.
“America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development”, the submission then says, claiming that copyright rules in Europe are hindering “AI innovation, particularly for smaller, newer entrants with limited budgets”.
However, the copyright industries are adamant that AI training does not constitute fair use. There are now numerous lawsuits working their way through the US courts where copyright owners accuse AI companies of copyright infringement by using existing content to train their models without getting permission.
OpenAI is a defendant in a number of those cases. The music industry is yet to sue OpenAI in the US, but has filed lawsuits against Anthropic, Suno and Udio. German collecting society GEMA is suing OpenAI under European law.
As the US cases get to court, the AI companies will rely on the fair use defence, making arguments very much like those set out in OpenAI’s submission.
Both sides in those cases - the copyright industries and the AI companies - are currently bullish that their own interpretation of fair use is correct and that they will prevail in court. That said, a judge recently rejected the fair use defence in a lawsuit filed by Thomson Reuters against an AI-powered legal-research search engine, albeit in a rather different context, and in a case that didn’t focus on generative AI like OpenAI’s ChatGPT.
What we do know about the current AI vs copyright legal battles is that they’ll take years to get through the system. Even once a case is decided in court, with the stakes so high, there will inevitably be a round of appeals, and it’s near certain that the key test cases will ultimately end up being decided by the Supreme Court.
To avoid that long, drawn-out process, both sides are keen for governments to get involved and issue statements or amend copyright laws in a way that provides some clarity on copyright and AI more quickly.
While OpenAI doesn’t go quite that far in its submission to Trump’s AI consultation, it does say that the President should “monitor domestic policy debates and ongoing litigation” and “weigh in where fundamental, pro-innovation principles are at risk”. That reads very much like OpenAI wants presidential intervention if it looks as though the courts are going to conclude that AI training is not fair use.
The AI firm’s request for Trump’s global intervention is also important. Recognising that AI companies could base their actual training in countries that have the most favourable copyright rules and then make the resulting models available worldwide, the copyright industries have been calling on governments to introduce market access requirements into law.
So, for example, no matter where an AI model is trained, that training must be compliant with UK copyright law if the model is being commercialised in the UK.
Therefore it’s not enough for OpenAI and other AI companies to win the copyright debate only in the US. Which is why OpenAI calls on Trump to “shape international policy discussions around copyright and AI” and “work to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress”.
Interestingly, among the foreign legal regimes criticised by OpenAI is that of the European Union, which provides a copyright exception for AI companies, but also an opt-out that allows copyright owners to reserve their rights, meaning permission is still required to use those works. That approach is what the UK government is now considering as part of its supposedly pro-AI copyright law reforms.
But OpenAI doesn’t seem to think of the EU exception with opt-out as being particularly pro-AI. Because, it writes, that approach means “access to important AI inputs is less predictable and likely to become more difficult as the EU’s regulations take shape. Unpredictable availability of inputs hinders AI innovation, particularly for smaller, newer entrants with limited budgets”.
“The UK government is currently considering changes to its copyright regime”, it goes on. “It has indicated that it prefers creating a data mining exception that allows rightsholders to ‘reserve their rights’, creating the same regulatory barriers to AI development that we see in the EU”.
All of which means, if Team Trump does end up producing an AI Action Plan in line with OpenAI’s requests, UK ministers are likely to come under pressure from the US government to abandon their current ‘pro-AI’ copyright proposals and opt for something more radical.
Given the current apparent disparity between how the UK views its relationship with Trump, and how Trump regards the UK - as evidenced by the recent steel tariffs imposed on the UK, which seem to have blindsided the government - this may result in a situation where the US uses its burgeoning trade war to strong-arm its position on AI onto the UK government.
The outcome of that could be that the previous UK government’s proposed AI copyright exception without opt outs, which was dropped amid a massive backlash from the creative and copyright industries, is implemented after all, under pressure from the US.