A.I. Developers Say Their Product Creates a ‘Risk of Extinction’ in Self-Serving Statement

Hundreds of experts in artificial intelligence — including several executives and developers in the field — issued a brief and worrying statement via the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The Center calls out Meta by name for not signing the statement; Elon Musk also did not endorse it.

OpenAI’s Sam Altman was among the hundreds of signatories, having just feigned an absolute rejection of any regulation he and his peers had no role in writing. Perhaps that is an overly cynical take, but it is hard to read this statement with the gravity its signatories intend.

Martin Peers, the Information:

Perhaps instead of issuing a single-sentence statement meant to freak everyone out, AI scientists should use their considerable skills to figure out a solution to the problem they have wrought.

I believe the researchers, academics, and ethicists are earnest in their endorsement of this statement. I do not believe the corporate executives who claim artificial intelligence is a threat to civilization itself while rapidly deploying their latest developments in the field. Their obvious hypocrisy makes it hard to take them seriously.