A group of more than 1,100 tech industry insiders and artificial intelligence researchers recently signed a letter calling for a six-month moratorium on AI-related research, citing the toll the rapidly evolving technology could take on society in the near future.
Programs that achieve “human-competitive intelligence” can have unintended and unforeseeable implications for human society and “should be planned for and managed with commensurate care and resources,” the letter advises.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the group added.
Although the letter was endorsed by prominent figures like Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, one high-profile individual spoke out against taking any such precautions.
“I don’t think asking one particular group to pause solves the challenges,” said Microsoft co-founder Bill Gates. “Clearly there’s huge benefits to these things…what we need to do is identify the tricky areas.”
He also suggested that a pause in AI development would be too complicated to actually achieve, explaining: “I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop. But there are a lot of different opinions in this area.”
Gates, who has faced criticism in the past for his controversial recommendations about global problems, asserted in a blog post last month that AI could be used to address inequality around the world.
On the other hand, some experts are even more concerned about the implications of so-called “artificial general intelligence” than those who signed the letter.
Machine-learning expert Eliezer Yudkowsky, who has been studying the topic for decades, said that he did not sign the letter because he saw it as “understating the seriousness of the situation and asking for too little to solve it.”
He is calling for the current infrastructure to be dismantled at all costs, claiming that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”