1,300 A.I. experts come together to downplay ‘nightmare scenario of evil robot overlords’

The development of superintelligent machines is a “journey without a return ticket”—but not a ride that will end with humans being destroyed by “evil robot overlords,” a cohort of international technologists has argued.

In an open letter published on Tuesday, more than 1,370 signatories—including business founders, CEOs and academics from institutions such as the University of Oxford—said they wanted to “counter ‘A.I. doom.’”

“A.I. is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right,” they insisted.

The people who signed the letter—most of whom are U.K.-based—argued that Britain had an opportunity to “lead the way” by setting professional and technical standards in A.I. jobs.

The technology’s development needed to be accompanied by a robust code of conduct, international collaboration and strong regulation, they noted.

A.I. Armageddon anxiety

Since OpenAI’s large language model chatbot ChatGPT took the world by storm late last year, artificial intelligence has generated countless headlines, attracted billions of dollars from investors, and divided experts on how it will change the planet.

To many business leaders and technologists—including two of the three tech pioneers known as the “godfathers of A.I.”—the technology is a potential source of humanity’s downfall.

Back in March, 1,100 prominent technologists and A.I. researchers, including Elon Musk and Apple cofounder Steve Wozniak, signed an open letter calling for a six-month pause on the development of powerful A.I. systems.

As well as raising concerns about the impact of A.I. on the workforce, the letter’s signatories pointed to the possibility of these systems already being on a path to superintelligence that could threaten human civilization.

Tesla and SpaceX CEO Musk has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.”

Musk has since launched his own A.I. firm, xAI, in what he says is a bid to “understand the universe” and prevent the extinction of mankind.

Even Sam Altman, CEO of ChatGPT creator OpenAI, has painted a bleak picture of what he thinks could happen if the technology goes wrong.

“The bad case—and I think this is important to say—is, like, lights-out for all of us,” he noted in an interview with StrictlyVC earlier this year.

No ‘nightmare scenario of evil robot overlords’

The signatories to Tuesday’s letter strongly disagree with Musk and Altman’s doomsday predictions, however.

“Earlier this year a letter, signed by Elon Musk, had called for a ‘pause’ on A.I. development, which [we] said was unrealistic and played into the hands of bad actors,” BCS—the U.K.’s Chartered Institute for IT, which wrote and circulated the letter—said in a statement on Tuesday.

The organization’s CEO, Rashik Parmar, added that the technologists and leaders who signed the letter “believe A.I. won’t grow up like The Terminator but instead [will be used] as a trusted co-pilot in learning, work, healthcare [and] entertainment.”

“One way of achieving that is for A.I. to be created and managed by licensed and ethical professionals meeting standards that are recognized across international borders. Yes, A.I. is a journey with no return ticket, but this letter shows the tech community doesn’t believe it ends with the nightmare scenario of evil robot overlords.”