Technologists and computer science experts are warning that artificial intelligence poses threats to humanity's survival on par with nuclear warfare and global pandemics, and even business leaders who are leading the charge on A.I. are cautioning about the technology's existential risks.
Sam Altman, CEO of ChatGPT creator OpenAI, is one of over 300 signatories behind a public "statement of A.I. risk" published Monday by the Center for A.I. Safety, a nonprofit research organization. The letter consists of a single sentence capturing the risks associated with A.I.:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The letter's preamble says the statement is intended to "open up discussion" on how to prepare for the technology's potentially world-ending capabilities. Other signatories include former Google engineer Geoffrey Hinton and University of Montreal computer scientist Yoshua Bengio, known as two of the "Godfathers of A.I." for their contributions to modern computer science. Both Bengio and Hinton have issued several warnings in recent weeks about the dangerous capabilities the technology is likely to develop in the near future. Hinton recently left Google so that he could discuss A.I.'s risks more openly.
It isn't the first letter to call for greater attention to the potentially disastrous outcomes of advanced A.I. research conducted without stricter government oversight. Elon Musk was one of over 1,000 technologists and experts who called for a six-month pause on advanced A.I. research in March, citing the technology's destructive potential.
And Altman warned Congress this month that regulation is already failing to keep pace as the technology develops at breakneck speed.