Artificial intelligence expert Eliezer Yudkowsky believes the US government should go further than the immediate six-month “pause” on AI research previously suggested by several tech innovators, including Elon Musk.
In a recent Time op-ed, Yudkowsky, a decision theorist at the Machine Intelligence Research Institute who has studied AI for more than 20 years, argued that the letter signed by the Twitter CEO understates the “seriousness of the situation,” warning that AI could become smarter than, and turn on, humans.
Issued by the Future of Life Institute, the open letter is signed by more than 1,600 people, including Musk and Apple co-founder Steve Wozniak.
It calls on AI labs to pause the training of any AI system more powerful than the current GPT-4, and urges governments to institute a moratorium if a pause cannot be enacted quickly.
The letter argues that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” which Yudkowsky disputes.
“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” he wrote.
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” Yudkowsky claimed. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”
Yudkowsky fears that AI could disobey its creators and not care about human lives.
“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers — in a world of creatures that are, from its perspective, very stupid and very slow,” he wrote.
He added that six months is not enough time to devise a plan for dealing with the rapidly advancing technology.
“It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities,” he continued. “Solving safety of superhuman intelligence — not perfect safety, safety in the sense of ‘not killing literally everyone’ — could very reasonably take at least half that long.”
Yudkowsky’s proposal is international cooperation to shut down the development of powerful AI systems entirely.
He claimed doing so would be more important than “preventing a full nuclear exchange.”
“Shut it all down,” he wrote. “Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries.”
His warning comes as AI is already making it more difficult for people to decipher what’s real.
Just last week, computer-generated images of former President Donald Trump fighting off and being arrested by NYPD officers went viral as he awaited a possible indictment.
Another set of fake photos showing Pope Francis in an unusually drippy white puffer jacket also fooled the internet into thinking the religious leader had stepped up his fashion sense.