AI remained a big talking point across the music industry in 2025.
Big disputes continued over the copyright obligations of AI companies, resulting in plenty of lobbying and litigation. Though towards the end of the year there were also licensing deals, as the majors successfully turned previously confrontational AI businesses into partners and collaborators.
We all know AI will be - and already is - a powerful tool for the music industry, assisting in music-making, music marketing, music rights management and the effective running of music businesses.
However, the commercial opportunity presented by AI - and especially generative AI - depends on the answers to some crucial legal questions. Some of those questions impact on the relationship between the music industry and AI companies, while others relate to how different stakeholders within the music community will participate in and benefit from the AI opportunity.
Pretty much all the legal questions remain unanswered as we head into 2026 - though it’s the latter set of questions that could dominate the music and AI debate in the year ahead.
Legal questions on the input
To train a generative AI model that makes music, you usually need access to millions of music files, which together make up a ‘training dataset’. Assembling that dataset involves copying all those files onto a server, which should be a licensing opportunity for the music industry.
Under copyright law, a copyright owner has control over the copying of their music, meaning third parties need to secure permission from the rights owner before making any copies. The music industry usually charges for granting this permission, which is how music copyright makes money.
If AI companies need access to music, they need permission from the music industry. So they should negotiate licensing deals with the companies that control music rights - record labels, music distributors, music publishers and collecting societies - and then pay fees for those licences.
However, many AI companies have made use of music without getting permission, arguing that they don’t need permission because of copyright exceptions and/or the concept of ‘fair use’.
Under most copyright systems, there are certain uses of copyright protected works where a third party does not need to get permission. These are copyright exceptions and cover things like critical analysis, parody and news reporting. In the US, there is a related principle called fair use, which simply says if the use of a copyright protected work is fair, rights owner permission is not required.
Many AI companies argue that AI training is - or should be - covered by a copyright exception, most likely a ‘text and data mining’ - or TDM - exception, which is already available in some copyright systems. Or, if they are based in the US, they argue that AI training is fair use.
The music industry, and all the other creative and copyright industries, do not agree. This dispute has resulted in dozens of lawsuits between copyright owners and AI companies, mostly in the US, but some in other countries too.
Some of those lawsuits have been filed by music companies. In the US, a group of music publishers sued Anthropic, while the majors and a group of independent artists sued Suno and Udio. In Europe, German collecting society GEMA has sued OpenAI and Suno, and Danish society Koda has sued Suno.
Meanwhile, both the copyright industries and the tech sector have been lobbying governments to amend or clarify copyright law, either to clearly state that AI companies need to secure licences before using copyright protected works, or to add a new copyright exception that clearly applies to AI training.
Legal questions on the output
There are separate legal questions relating to the output - ie the new content that an AI generates.
There are actually two output scenarios where many AI companies would concede that they do need to secure consent from copyright owners, and/or creators and performers.
First, if the AI basically outputs an existing copyright protected work - or what is clearly an adaptation of an existing work - such as a remix or mash-up, or a cover version in another style or genre.
And second, if the output clearly imitates the voice or likeness of a specific performer.
However, when this happens, an AI company may argue that it was by mistake rather than by design. When music publishers got Anthropic’s Claude AI to output the lyrics to ‘American Pie’, Anthropic insisted that was the result of a bug in the system. When a dance track was released with Suno-generated vocals that sounded like Jorja Smith, the producer of the track said that was just a coincidence.
In these scenarios the AI company will usually claim they have put ‘guardrails’ into their model to stop any future outputs from replicating existing works or imitating a specific performer’s voice or likeness.
However, for some AI companies, allowing users to rework existing content - or to utilise a specific person’s voice or likeness - is part of the design. And in those scenarios, many AI companies have been more cautious about putting services live without securing the necessary consents.
Although, when it comes to voice and likeness, we actually move beyond copyright. To develop an AI that can imitate an artist’s voice, you will need that artist’s music in your training dataset, which has copyright ramifications. But there isn’t copyright in the voice itself.
It is assumed that in most countries voice and likeness can be protected through so-called ‘personality rights’ or ‘publicity rights’, although quite how that works isn’t entirely clear, and those rights don’t currently exist under UK law at all. As a result, many argue that more specific ‘digital replica’ rights are required.
In the US, that has already happened in the state of Tennessee, which passed the ELVIS Act in 2024. Meanwhile the No FAKES Act - which would introduce a US-wide digital replica right - was reintroduced in Congress in 2025. And in Denmark, copyright law reforms are set to introduce new digital replica rights there as well, perhaps setting a framework that could be adopted across the rest of Europe.
What happened in the UK?
Under UK copyright law, there is a TDM exception, but only for “non-commercial research”. Which means it’s pretty clear that commercial AI training requires permission from rights owners.
However, twice in the last five years the UK government has proposed extending the TDM exception to cover commercial use. The previous Conservative government first proposed this in 2022, but quickly abandoned the plan following a major backlash from the creative and copyright industries.
Then, in late 2024, the current Labour government also proposed a new commercial TDM exception, but with an opt-out for rightsholders. That would allow copyright owners to exclude their works from the exception, meaning AI companies would still need permission to make use of the excluded works.
A TDM exception with opt-out already exists in the European Union. That exception was introduced into European law via the 2019 EU Copyright Directive, which the UK was involved in negotiating, but which was never implemented in the UK because of Brexit.
The UK government’s latest proposals for a commercial TDM exception resulted in another major backlash from the copyright industries, including the music industry, which lobbied ministers and lawmakers hard, and also released the silent protest album ‘Is This What We Want?’
More than 10,000 submissions were made to a consultation on the government’s plans relating to AI and copyright, with only 3% of respondents supporting the TDM exception proposal. 95% said AI companies should secure licences before making use of copyright-protected works, with 88% calling for copyright law to be reformed to clearly set out that obligation.
Even before those stats were published last month, ministers had started to backtrack somewhat from the TDM exception proposal in the face of such a strong backlash from the copyright industries. However, as 2026 begins, officially all options - including the new exception - are still on the table, with a new government report on copyright and AI now set to be published in March.
For now the status quo arguably favours the copyright industries, given the lack of any exception AI companies can rely on. But what happens if an AI company trains its model - and therefore makes its copies - in a country where an exception is available, or in the US where they will claim fair use, and then that model powers a service available to consumers in the UK?
That question was posed in 2025 in one of the big UK-based AI legal battles, between image library Getty and Stability AI, which trained its image generating AI model in the US.
Copyright owners argue that if an AI company makes a model available in the UK, that model should be trained subject to UK copyright rules, even if it is actually trained in another country.
In its case against Stability, Getty argued that that principle already exists in UK law, citing existing rules that prohibit the import of copyright infringing materials.
However, a judge concluded that those rules did not apply to Stability’s AI model. Getty has been given the all clear to appeal that ruling - but it may be that the copyright industries need the law amended to clearly state that AI models commercialised in the UK must be trained according to UK law.
What happened in the US?
Many of the big AI companies are based in the US, which makes the American dimension of the big copyright v AI dispute particularly interesting. The big question: is AI training fair use?
The US Copyright Office published a report on that question in May 2025, concluding that AI training may be fair use in certain scenarios, but probably isn’t in others. Although it took something of a middle position, the report was seen to favour copyright owners more than AI companies. But then Donald Trump sacked Copyright Office boss Shira Perlmutter the day after the report was published.
It’s generally assumed Trump is much more sympathetic to big tech than to the copyright industries. In a speech in July, the President said, “you can’t be expected to have a successful AI programme when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for”. That approach isn’t “doable”, he said, before adding “China’s not doing it”.
However, when Trump’s team published an ‘AI Action Plan’, it didn’t include a copyright section, and his officials said that the Trump administration is happy for the US courts to decide if AI training is fair use.
A plethora of cases centring on this question are working their way through the American courts. The copyright owners that have filed the lawsuits say AI training is not fair use, and that copying their works into training datasets without a licence is therefore copyright infringement. But the AI companies argue they do in fact have a fair use defence.
We have had judgements in two cases so far, both lawsuits filed by book authors.
The AI companies involved in these cases were Anthropic and Meta, which both argued that their copying of existing books to train their respective AI models was fair use, and therefore they didn’t need licences from the authors or their publishers. And in initial judgements, the courts agreed.
However, both judgements included some elements that favour the copyright industries.
In the Meta case, the judge said that the authors had failed to present strong enough arguments to defeat the tech giant’s fair use defence. However, he then suggested different arguments the authors could have presented that might have been successful. So, basically, the judge sided with Meta, but then provided guidelines for how copyright owners can actually win cases like this one.
Even more importantly, in the Anthropic case, the judge said that AI training was fair use, but only if an AI company sourced legitimate copies for its training dataset. Anthropic had bought and scanned some books when training its Claude AI and that copying would be fair use under this judgement. But it also downloaded millions of ebooks from piracy sites. That copying is not fair use.
Because of the way damages work under US law - statutory damages for wilful infringement can reach $150,000 per work, a figure that quickly becomes astronomical when multiplied across millions of books - and because the judge granted the authors’ lawsuit class action status, meaning the litigation could represent all affected US authors, Anthropic was facing a potential ruling that would require it to pay a trillion dollars in damages. Anthropic could have appealed this judgement, but instead it quickly agreed a $1.5 billion settlement with the authors’ lawyers.
So, as 2026 begins, many legal uncertainties remain. The Copyright Office’s report - and initial rulings in court - suggest AI training can be fair use in some scenarios.
But whether an AI company can rely on fair use will likely depend on the specific circumstances of its training work, including how it sourced its training data and what it then did with that data. For any AI company busy raising billions in new investment, this is a problem. Maybe it can make use of copyright protected works for free, or maybe it faces future trillion-dollar damages.
In the midst of these uncertainties, some AI companies have decided to start doing licensing deals with copyright owners to remove the risk of future damages. And that includes in the music industry, where both Universal Music and Warner Music have settled their legal battles with Udio and agreed a licensing deal for future use of their music. Warner has agreed a similar deal with Suno.
What happens next legally speaking remains uncertain. Could Trump suddenly issue an executive order declaring AI training fair use? Will we see a flurry of rulings in court favouring the copyright owners or the tech sector? How many AI companies are willing to take the risk of possible trillion-dollar damages?
However, right now, it does feel like the ongoing legal uncertainties in the US will favour the copyright industries, at least in the short term, pressuring an increasing number of AI companies into licensing negotiations that could result in lucrative deals for copyright owners.
What about creator consent?
For the music industry, it’s obviously good news that AI companies that previously said they didn’t need licences to copy existing tracks into their training datasets are now entering into licensing deals.
However, for the wider music community, those deals - which have mainly been agreed with the majors so far - pose a number of questions and potentially spark a different dispute, this time within the industry.
How do these licensing deals work? Is there a one-off payment or ongoing payments? Are those ongoing payments annual lump-sums or a share of each AI company’s revenues?
How will money generated by these deals be split between the two copyrights in music - the recording rights controlled by record labels and artists, and the song rights controlled by music publishers and songwriters? Will it be the 80/20 split we’ve seen in streaming or the 50/50 split that is common in sync? The difference is significant: on a hypothetical £1 million of AI licensing income, an 80/20 split would deliver just £200,000 to the song rights, compared to £500,000 under a 50/50 arrangement.
How will money be allocated to individual recordings and songs in each label or publisher’s catalogue? And what royalty rate will labels and publishers apply when sharing money with artists and writers?
So far, we don’t have answers to any of these questions. The majors may argue that - with the Udio and Suno deals only announced in October and November - it’s too soon to provide much of this information. However, organisations representing artists and songwriters all over the world have demanded that the majors urgently provide clarity and transparency about their big AI deals.
When the streaming market emerged in the late 2000s, there was very little transparency about the initial digital deals. It took many years of detective work for most artists and songwriters to figure out how they were being paid when their music was streamed, and even now key information about the music industry’s ever evolving streaming deals is kept secret from the wider music community.
It has also been repeatedly argued that the major labels agreed streaming deals, and unilaterally set royalty payment policies, in a self-serving way, so they could keep most of the money generated.
Creator groups fear that the same thing will happen with the AI deals, unless the majors commit to more transparency now, or lawmakers force more transparency through changes to copyright law.
Then there is the issue of creator consent. The labels and publishers insist that AI companies must secure rightsholder consent before using existing music to train their models. But will the labels and publishers secure artist and songwriter consent before opting each creator’s music into those deals?
Currently the majors are only committing to secure creator consent in two narrow circumstances - basically the same scenarios where most AI companies agree consent is required on the output.
First, if the output is clearly an adaptation of an existing work - such as a remix or mash-up. Because under most publishing deals, songwriters have veto rights over adaptations of their work.
And second, if the output clearly imitates the voice or likeness of a specific performer. Because record deals do not normally grant labels rights in relation to an artist’s voice or likeness.
However, when it comes to the input - ie an AI company copying tracks into its training dataset - will creator consent be secured? Before the majors announced their Udio and Suno deals, Merlin, the digital licensing agency for the independent label community, and indie publisher Kobalt announced an AI licensing deal with ElevenLabs. It is thought that Kobalt and at least some Merlin labels gave their writers and artists the option to opt in or out of that deal.
But the majors have so far refused to commit to secure creator consent for basic AI training, seemingly believing that they control the copyright in the recordings and songs in their respective catalogues, and can therefore unilaterally opt all that music into their AI deals. Creator groups do not agree.
In the UK, the Council Of Music Makers says that labels and publishers should commit to secure creator consent before opting any music into any AI deals - while also ensuring that each creator and performer has full control over how their music is used, and is fairly compensated for that use.
They have also called on lawmakers to strengthen copyright law to clarify that creator consent is required for AI training, even when the creator does not own the copyright in their work.
Which means, even if we are now heading into the licensing phase of music and AI - when AI companies start to enter into licensing negotiations rather than fighting legal battles with labels and publishers - big disputes will nevertheless remain.
However, this time the dispute is within the music community, as creators and performers push for copyright law reform that clarifies not only the obligations of AI companies, but also those of their business partners within the music industry.