Deadmau5 took to social media yesterday to criticise an AI-generated video that appeared online this week in which a deepfake version of the DJ/producer appeared to promote the work of another DJ.
Calling the video “scary as fuck”, Deadmau5 insisted in a subsequent statement that everyone “should be in control of their own faces and voices”, while also bigging up the NO FAKES Act currently being considered by US Congress. Deadmau5’s experience this week illustrates the need for that act.
“Woke up to some idiot DJ’s Instagram story that depicted me standing there promoting him and his music”, Deadmau5 wrote. The video posted to Instagram was “fully AI generated”, he explained, and - while the “voice wasn’t quite 100%” - it was “pretty damn convincing”.
“I’m sure this is just the beginning for talentless fucks to abuse this tech to further themselves while violating others’ rights in one of the worst ways possible”, he continued, adding that “we need to stop idiots like this” from “abusing” generative AI in this way.
The rise of generative AI has created a number of concerns for performers and creators, of course.
That includes copyright concerns: can creators control how their existing work is used to train generative AI models? And transparency concerns: how do creators know if their work has been used in AI training? And the concerns raised here by Deadmau5: how can performers stop the unauthorised use of their voice and likeness in AI-generated content?
A plethora of proposals for regulating AI are currently being considered by US Congress, including proposals to address these concerns. That includes the NO FAKES Act, which would create a new ‘digital replica right’ in the US, allowing performers - and indeed everyone - to control and, if they wish, monetise the use of their voice and likeness in the context of AI.
Deadmau5 referenced those proposals in a statement to Billboard about this week’s deepfake video. “People should be in control of their own faces and voices - it’s that simple", he said, before adding that his lawyer Dina LaPolt “has been working on the NO FAKES Act to create real protections against deepfakes that use our images and voices and fool fans into thinking it’s actually us... when it’s not”.
Originally introduced in Congress in 2024, the NO FAKES Act was reintroduced last May and continues to go through the lawmaking process in Washington. It enjoys support from groups representing both creators and rightsholders across the media and entertainment industries, including the music industry, and some in the tech sector.
However, there are also opponents of the proposals, including AI and tech companies, and campaign groups like the Electronic Frontier Foundation, which last year said the current version of the NO FAKES Act would be “a disaster for internet speech and innovation”. Those lobbying against the proposals could slow things down while they try to get the act amended or, ultimately, voted down.
Other proposals in Congress seek to address the other concerns, including around transparency. That includes the CLEAR Act, which was reintroduced in Washington by senators Adam Schiff and John Curtis just this week. Schiff previously introduced similar proposals in 2024 when he was a member of the House Of Representatives.
Under the CLEAR Act, AI companies would have to submit to the US Copyright Office “a sufficiently detailed summary of all copyrighted works used in building the training dataset” for any publicly released generative AI model.
Like the NO FAKES Act, the CLEAR Act is widely supported by the music and wider entertainment industries. Organisations supporting the proposals include the American Federation of Musicians, the Music Artist Coalition, Songwriters Of North America, the Recording Industry Association Of America, the National Music Publishers Association and collecting societies like ASCAP, BMI, GMR and SoundExchange.
However, there are critics of the CLEAR Act too. The Re:Create Coalition, which campaigns on behalf of tech companies, libraries and digital rights groups like the EFF, this week said that the CLEAR Act is “anti-competitive” and “un-American”.
Its Executive Director Brandon Butler added that the transparency proposals would create “an extraordinary bureaucratic burden that would send companies and investments fleeing overseas and crush all but the most massive American AI developers”.