German songwriter collecting society GEMA has published an AI Charter, setting out ten “ethical and legal principles” that it wants the AI sector - and law-makers regulating that sector - to adhere to. It says the charter is “intended to be thought-provoking”, while also providing a framework for “a responsible approach to generative AI that respects and protects the rights of creators”.
“Generative AI provides opportunities but also carries significant risks for the rights and livelihoods of creators”, says the society’s CEO Tobias Holzmüller. “In our understanding”, he adds, “human creativity is at the centre and the use of musical works created by people in the context of generative AI must be dealt with in a transparent manner and must attract fair pay”.
Several of the principles set out in the charter are basically demands frequently made by the music industry. AI companies must seek permission to use copyright-protected works when training generative AI models and must be fully transparent about which works have been used in the training process. And creators must fairly share in any revenues generated by AI models trained on their work.
“Copyright protects the creative human being and gives them the sole right to decide on the use of their works within the legal framework”, the charter declares. “This proven principle must also apply in the context of generative AI”.
However, there are some other interesting statements within the charter that have not always been front and centre in the music industry’s AI conversations. For example, it notes that the generative AI sector is already dominated by a small number of companies that “have the necessary computing capacities, financial means and infrastructures to establish AI technologies quickly and successfully in the market”.
This means that, even if AI companies can be forced to secure licences before using existing music, there are already “imbalances and asymmetries” in negotiating power that “disadvantages smaller players and individuals”. This, it adds, “necessitates that collective negotiations are strengthened to enable a situation where the interests of the parties involved can be represented in a concerted effort vis-à-vis the digital corporations”.
The charter also brings up moral rights, particularly in relation to creators protecting their voice and image, an issue that has generally been discussed in terms of publicity rights in the US.
“Each person must have the option to take swift and effective action against infringements of their moral rights”, it states. And in the context of AI, that “affects the right to one’s own voice, one’s own name and one’s own image”.
Voice, name and image, it adds, “are affected by phenomena such as 'deep fakes'. It affects informational self-determination in general, ie the authority to decide directly which data about oneself is made available to the public”. For these reasons, it states, moral rights must be respected.
Finally, the charter says that the providers of AI must not be allowed to pass any moral or legal responsibilities in connection with their models onto the users of those models.
“AI providers must be aware of the impact of their technology and accept responsibility for it”, it says. “AI providers therefore justifiably do not enjoy any right to a liability privilege” and “must not shift their responsibility to users”.