YouTube last month updated its policies to allow people to request the removal of videos where AI has been used to imitate their likeness or voice without permission. This change, first spotted by TechCrunch, enables users to file complaints through the platform’s privacy request process.
The updated guidelines state: “If someone has used AI to alter or create synthetic content that looks or sounds like you, you can ask for it to be removed. In order to qualify for removal, the content should depict a realistic altered or synthetic version of your likeness”.
YouTube will consider several factors when assessing removal requests. Those include whether the video’s creator has disclosed that the content is altered or synthetic; whether the person making the complaint can be uniquely identified; and whether the content is parody or satire, or has other “public interest value”.
Another key consideration will be “whether the content features a public figure or well-known individual engaging in a sensitive behaviour such as criminal activity, violence, or endorsing a product or political candidate”.
According to TechCrunch, once a removal request has been filed, the creator will be alerted and given 48 hours to remove the content before YouTube’s review process begins. If the creator doesn’t delete the video and the review concludes that the complaint is valid, YouTube will remove the video itself.
Unlike when content violates YouTube’s Community Guidelines, having a video deleted under this new system won’t automatically result in a strike against the creator’s channel. That said, YouTube may still take action against a creator who repeatedly uploads videos that are then deleted under these new privacy guidelines.
The policy update comes amidst a growing debate about AI-generated vocal clones and deepfakes, and how an individual whose voice or likeness is imitated might go about stopping the distribution of such content. That includes what legal protections that individual might rely on, and what processes user-generated content and social media platforms could and should put in place.
The music industry became particularly interested in AI-generated vocal clones last year after a number of tracks using AI to imitate the voices of famous artists went viral, in particular the Ghostwriter ‘fake Drake’ track.
The record companies put pressure on the streaming platforms to remove such tracks when made aware of them and, in November, it emerged that YouTube was putting in place a system by which record labels and music distributors could request removals of that kind.
Interestingly, YouTube has framed this issue as something that falls under its privacy policies, rather than copyright or personality rights, which have been the focus of legal debates in the music industry.
The music community is adamant that training an AI model with existing music - which is required if the model is going to imitate an artist’s vocals - must be licensed by the copyright owner, which means permission must be sought.
Personality rights, in countries where they are enshrined in law, possibly offer more protection for artists, especially if they don’t own the copyright in their music. Technically, a label could license an artist’s work to an AI company, and without the safeguards offered by personality rights the artist could be powerless to prevent AI-generated vocal clones from being created.
It could be that privacy law and data protection law also offer a useful avenue for artists seeking to stop unauthorised vocal clones. Again, there remains plenty of legal debate in this domain. In the UK, the Information Commissioner’s Office is currently in the midst of an extensive consultation on the data protection implications of generative AI.
Whatever the legalities, the pressure is only going to increase on YouTube and its competitors to deal with videos that contain unapproved vocal clones or deepfakes, with lawmakers and the wider entertainment industry taking a much greater interest.
Earlier this week, the actor Morgan Freeman joined that conversation, thanking his fans for bringing to his attention a video that had gone viral on TikTok which used AI to imitate his voice.
Freeman’s statement follows the recent dispute between Scarlett Johansson and OpenAI, after the actor complained that one of the ChatGPT voice assistants, called Sky, seemed to be based on her voice and the virtual assistant character she played in the film ‘Her’. The AI company denied that Sky had been designed to sound like Johansson.