Jan 15, 2024

No copyright exception for AI, reiterates UK government - but tech companies still lobbying for more change

The UK government has again confirmed it is no longer planning to introduce a new data mining exception in copyright law to benefit the AI sector, but it remains under intense lobbying from tech companies that want easy access to copyright protected works to train AI models.


The UK government has reiterated that it will not proceed with its previous plan to introduce a wide-ranging text and data mining exception into copyright law that could be utilised by AI companies. It is instead seeking to develop a code of practice around AI and copyright, although the tech sector continues to lobby for more flexibility in law. 

Copyright industries, including the music industry, are adamant that if tech companies train generative AI models with existing content then they must first get permission from relevant copyright owners. However, many tech companies argue that training AI models in that way should be covered by a copyright exception, meaning no permission would be required. 

In the UK, the government proposed a new data mining exception to specifically benefit AI companies but backtracked after a considerable backlash from the copyright industries. It confirmed last week, in a response to a report by Parliament's Culture, Media & Sport Select Committee, that those proposals are definitely off the table.

It also confirmed that the Intellectual Property Office is still working with copyright owners and tech companies in a bid to develop a code of practice, which was originally expected before the end of last year. However, finding a compromise between the two sides will be a challenge. 

A select committee in the House of Lords is also considering these issues and has continued to publish submissions to its inquiry this month. That includes two that set out the tech sector's position that, for AI models and the UK artificial intelligence industry to achieve their full potential, there needs to be more flexibility when it comes to using third party data in training.

One of those submissions comes from OpenAI. Although not as bold as the company's recent blog post about its legal dispute with the New York Times, it argues that AI tools need to be exposed to "the full diversity and breadth of human intelligence and experience", which inevitably means using copyright protected works because "copyright today covers virtually every sort of human expression".

In its submission, OpenUK - an organisation that says it seeks to "empower the open technology community with a cohesive voice" - raised concerns that, with the new data mining exception now not going ahead, development of AI in the UK - and tech innovation more generally - "will be further stifled by a new code of conduct restricting … legitimate use".

Noting that "many nations have specifically enacted exceptions to copyright law to allow for … training whilst the US has a fair use provision, all allowing LLMs to be trained", it stated that, with no exception in UK law, "it is understood that no large language models are being trained in the UK due to confusion around the ability to use publicly available data”.

The copyright industries will continue to argue that technology companies are exaggerating the doom and gloom that will ensue if they don't get their way on copyright matters. Insisting there are licensing solutions for those developing generative AI models, they will continue to call on lawmakers to resist the demands of tech companies to reduce their copyright obligations.
