A stack of tech industry bosses and founders, as well as academics, researchers and lobbying organisations, have signed a letter calling for companies and teams working on AI technologies to pause the training of any AI systems more powerful than GPT-4, to allow a big discussion on where these technological developments are heading and how they might be managed.
There has, of course, been a renewed interest of late in AI technologies – and especially generative AI technologies, which automatically create content and media – partly because of the hype around specific platforms like ChatGPT and partly because these systems are now rapidly becoming more sophisticated.
Within the creative and copyright industries this has made debates over the licensing of data mining, the copyright status of AI-created works and how the law protects people’s identities – all debates which have been ongoing for years – feel a lot more pressing. In the US, the music industry has launched the Human Artistry Campaign to bring everyone together around those issues.
Though for law-makers there are even bigger concerns about the impact that ever more sophisticated AI technologies might have when it comes to things like privacy, security and fraud, not to mention politics, the economy and society at large.
The new letter, organised by a group called the Future Of Life Institute, notes existing research which demonstrates how “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and that “advanced AI could represent a profound change in the history of life on Earth”.
With that in mind, it goes on, these technological advancements “should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
The letter continues: “Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?”
“Such decisions must not be delegated to unelected tech leaders”, it argues. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects”.
With that in mind, the letter calls on all AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4”, which is the latest version of the AI that powers ChatGPT, developed by the research lab OpenAI.
“This pause should be public and verifiable, and include all key actors”, the letter says. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”.
“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt”, it adds. “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black box models with emergent capabilities”.
Signatories of the letter include Elon Musk – who co-founded OpenAI – and Apple co-founder Steve Wozniak, as well as numerous academics and researchers, and a bunch of founders and/or current CEOs of tech giants and AI companies. You can read the full letter and see the full list of signatories here.