Universal Music is in talks with Google about licensing a new AI platform that will seemingly allow users to legitimately generate tracks that imitate the voices, lyrics or sounds of established artists.
Or at least that’s according to sources who have spoken to the Financial Times. Talks are at an early stage, the sources say, with YouTube Music boss Lyor Cohen involved in the conversations.
With generative AI technologies becoming ever more sophisticated, there has – of course – been lots of debate in the music industry in recent months about the threats and challenges posed by artificial intelligence, and especially AI models that can generate original music.
There are already a number of platforms that use AI – or at least very clever algorithms – to generate original music by pulling together segments and stems from a catalogue of existing audio. And then there are those companies developing AI models that can generate music from scratch, including Meta’s MusicGen and Google’s MusicLM.
Though the AI tools that have garnered the most headlines in the music space are those that help people to create tracks with vocals that sound like established artists. The ‘fake Drake’ track ‘Heart On My Sleeve’ was particularly newsworthy, popping up on the streaming services until Universal, as Drake’s label, demanded it be removed.
Music-making AI models are ‘trained’ by being exposed to existing music. The music industry is adamant that, whenever that is commercially released music, the AI companies must first secure licences from the relevant copyright owners.
Not all AI companies agree, which is why the music industry has been increasingly calling on lawmakers to clarify the obligations of tech firms in this domain.
That said, at the same time, the music industry is keen to stress that it also recognises the opportunities presented by generative AI. That includes making use of AI tools that assist artists and songwriters in the creative process, and also developing new business models around AI models that will result in new revenue streams for the music business.
Noting how people have been using AI tools to generate unofficial ‘deep fake’ tracks that imitate the vocals of established artists, the FT reports that the ongoing talks between Universal and Google aim “to develop a tool for fans to create these tracks legitimately and pay the owners of the copyrights for it”.
As the majors enter into more talks with AI companies – Warner has also reportedly spoken to Google about this new product – a whole bunch of additional questions are being posed within the music community.
While music-makers and music companies are generally aligned on the need for AI businesses to secure licences whenever they utilise existing music to train their technologies, the music community is likely less aligned when it comes to how the licensing of AI platforms should work, and how any monies generated should be shared.
It’s no secret that artists and songwriters have often been kept in the dark about the deals done between the music industry and the streaming services.
Often those services have been operational for years before most artists and writers figure out how the deals work and how royalties are paid. And even then, their business partners often deny them access to key information, citing those dreaded NDAs that are inserted into the contracts between the industry and the services.
The music-maker community will likely demand more clarity much earlier on this time round. Partly because the ongoing debates around the economics of streaming have created a forum via which they can make those demands. And partly because of the fears that AI poses some fundamental threats to the music-maker community.
And also partly because the training of an AI model possibly exploits rights beyond the copyrights that the record companies control.
Certainly where an artist’s voice is being overtly imitated, there’s an argument that an artist’s publicity rights – and possibly other data and privacy rights – are being exploited too.
And that would require the consent and involvement of the artist. Which is possibly why the sources who have spoken to the FT were keen to stress that, under the plans being discussed by Universal and Google, artists will be able to choose whether or not to opt in to the opportunity.
But – artists and their managers will likely want to know – how much information will be provided about how any one AI licensing deal or business model works, and how monies will be shared, in order for them to properly decide whether it’s an opportunity they want to opt into?
If it’s anything like previous digital opportunities, very little information indeed. Though, if artist opt-in is going to be required, then better information sharing may be necessary.
Then, of course, there are additional complexities in all this too, because recording rights and song rights are generally licensed separately, and the song rights are often co-owned. And as soon as an AI company copies recordings from a label onto its servers, it is also exploiting the song rights. And how exactly are those rights going to be licensed?
That said, Google – more than most of its competitors in the generative AI space – has plenty of experience navigating these licensing challenges. Which could give it the edge here, providing those in the tech sector still pushing the “we don’t actually need a licence” line do not win any legal arguments.