TIME TO MAKE FRIENDS WITH YOUR ROOMBA
AI Pioneer Says Humans Are Doomed If We Don’t Put the Brakes On Now
“If we go ahead on this everyone will die”
Move over, climate change, nuclear war, and social collapse. There’s a new apocalypse in town.
On March 29, 2023, Elon Musk, along with a host of scientific and business luminaries, signed an open letter demanding a six-month moratorium on training AI systems more powerful than GPT-4 so that safety protocols can be devised and implemented.
Eliezer Yudkowsky, decision theory researcher and co-founder of the Machine Intelligence Research Institute, didn’t sign, because he thinks the signatories are a bunch of wide-eyed optimists who didn’t go far enough. To hammer home his point, he’s penned a terrifying editorial in Time magazine, bluntly warning that if we don’t stop AI work completely, it will kill us. Literally, and soon.
I highly recommend reading the whole thing, but here are a few lines to give you the flavour. This guy is not screwing around.
Many researchers … expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
The likely result of humanity…