Elon Musk fears what might happen if superintelligent robots self-actualize.
Musk, who leads Tesla and SpaceX, appeared on StarTalk, Neil deGrasse Tyson’s podcast, where AI was a main topic of conversation.
Musk has claimed we are “summoning the demon” with AI research.
Musk detailed exactly how he thinks artificial intelligence could end humanity.
“I’m quite worried about artificial superintelligence these days. I think it’s something that’s maybe more dangerous than nuclear weapons,” Musk said. “We should be really careful about that. If there was a digital superintelligence that was created that could go into rapid, recursive self-improvement in a non-logarithmic way, that could reprogram itself to be smarter and iterate really quickly and do that 24 hours a day on millions of computers, then that’s all she wrote.”
Musk said that we have to consider why, exactly, we are trying to make superintelligent machines in the first place.
“The utility function of the digital super intelligence is of stupendous importance. What does it try to optimize? We need to be really careful with saying, ‘how about human happiness?’” Musk said. “It can conclude that an unhappy human should be terminated. Or that we should all be captured and [constantly] injected with dopamine and serotonin to optimize happiness. I’m just saying we should exercise caution.”
Tyson asked Musk whether he thought superintelligent machines would domesticate us. “We’ll be like a pet Labrador if we’re lucky,” Musk replied.
Some researchers have said that Musk’s comments have a “chilling effect” on AI research.
This article is free and open source. You have permission to republish it with attribution to the author and TheRundownLive.com.