Geoffrey Hinton’s artificial intelligence (A.I.) research has helped enable the rise of technologies that were once the stuff of sci-fi flicks, from facial recognition to chatbots like OpenAI’s ChatGPT and Google’s Bard. The British-Canadian computer scientist earned the title “the godfather of A.I.” by dedicating his career to the study of neural networks—complex computer models whose layered structures mimic the human brain—decades before the technology went mainstream. But last month Hinton resigned from a position he had held at Google for over a decade, telling the New York Times he made the decision so he could freely discuss “the dangers of A.I.” without considering how it might impact the company.
Since then, he has been on a Paul Revere-esque campaign to warn about the existential risk A.I. poses to humanity in a series of interviews that have even garnered the attention of the rapper Snoop Dogg, who recently referenced Hinton’s claim that A.I. is “not safe.” “Snoop gets it,” Hinton told Wired Monday.
The A.I. pioneer’s latest cautionary message? Even the threat of climate change doesn’t compare to A.I.
“I wouldn’t like to devalue climate change. I wouldn’t like to say, ‘You shouldn’t worry about climate change.’ That’s a huge risk too,” he told Reuters Friday. “But I think this might end up being more urgent.”
Hinton believes A.I. systems could eventually become more intelligent than humans and take over the planet, or that bad actors could use the technology to fuel division in society in hopes of gaining power—and that’s all before the threat of job losses. And while the solutions to climate change are quite obvious (“just stop burning carbon”), when it comes to A.I., Hinton warned that “it’s not at all clear what you should do.”
Repeated warnings
On his campaign to warn of the dangers of A.I., Hinton has compared the technology to the birth of nuclear weapons and admitted that he regrets much of his work now that he sees its destructive potential. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the New York Times in late April.
Comparing the rise of artificial intelligence to the creation of nuclear weapons may sound hyperbolic, but even Warren Buffett sees the parallels. The 92-year-old investing legend referenced a warning Albert Einstein gave after the birth of the atomic bomb at Berkshire Hathaway’s annual conference over the weekend, noting that A.I. “can change everything in the world except how men think and behave.”
And Hinton, who won the Turing Award in 2018 for his lasting contributions of major technical importance to computer science, warned earlier this month in an interview with the BBC of a “nightmare scenario” in which chatbots like ChatGPT are used to seek power. “It is hard to see how you can prevent the bad actors from using it for bad things,” he said.
In a separate interview at MIT Technology Review’s EmTech Digital conference last week, the computer scientist told the crowd: “These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people. Even if they can’t directly pull levers, they can certainly get us to pull levers.”
“I wish I had a nice simple solution for this, but I don’t,” he added. “I’m not sure there is a solution.”
But no A.I. pause?
The potential risks posed by A.I. led over 1,100 prominent figures in tech, including Tesla CEO Elon Musk and Apple cofounder Steve Wozniak, to sign an open letter calling for a six-month pause on the development of advanced A.I. systems earlier this year. But Hinton told Reuters Wednesday that a pause in A.I. development is “utterly unrealistic.”
“I’m in the camp that thinks this is an existential risk, and it’s close enough that we ought to be working very hard right now, and putting a lot of resources into figuring out what we can do about it,” he said.
In an interview with CNN last week, the computer scientist explained that if the U.S. stopped developing A.I. tech, “China wouldn’t.” And in a May 5 tweet, he clarified his position:
“There is so much possible benefit that I think we should continue to develop it but also put comparable resources into making sure it’s safe.”
To that end, President Biden and Vice President Harris met with A.I. leaders, including Alphabet CEO Sundar Pichai and OpenAI CEO Sam Altman, last week to discuss the need for safety and transparency in the field, as well as the potential for new regulations. And the European Union’s A.I. Act—which classifies A.I. systems into different risk categories, adds transparency requirements, and includes rules to prevent bias—is expected to be operational by the end of the year. After Musk’s letter, a committee of E.U. lawmakers also agreed to a new set of proposals that would force A.I. companies to disclose when they use copyrighted material to train their systems, Reuters first reported May 1.