Alphabet CEO Sundar Pichai says that AI could be ‘more profound’ than both fire and electricity—but he’s been saying the same thing for years



According to Greek mythology, Prometheus stole fire from the gods, subjecting himself to an eternity of torture just to give mankind the technology. According to Alphabet CEO Sundar Pichai, artificial intelligence will be just as important to human history. 

“I’ve always thought of A.I. as the most profound technology humanity is working on—more profound than fire or electricity or anything that we’ve done in the past,” Pichai said in an interview with CBS’s 60 Minutes aired on Sunday. 

“It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before.” 

This isn’t Pichai’s first time comparing A.I. to fire and electricity, though—in fact, he’s been saying it for five years now. He voiced the same thoughts during a Google town hall in 2018, saying that A.I. was “one of the most important things to humanity,” and adding that it’s “more profound than, I don’t know, electricity or fire.” 

At the time, Pichai went on to compare the upside and downside of A.I. with the ancient discovery.   

“Well, it kills people, too,” Pichai said about the perils of fire in 2018. “We have learned to harness fire for the benefits of humanity but we had to overcome its downsides too. So my point is, A.I. is really important, but we have to be concerned about it.”

To advance A.I., Pichai said in the 60 Minutes interview, it is important that the models aren’t developed by engineers alone. He spoke about the role other disciplines have to play in making A.I. models more robust and human-like. 

“You know, one way we think about: How do you develop AI systems that are aligned to human values—and including morality? This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on,” Pichai told CBS. He added that those are questions for society as a whole to answer, rather than for specific companies.

Regulations and Google’s A.I. race

The release of OpenAI’s ChatGPT last November kicked the race for A.I. into overdrive, and pushed tech companies like Google to amp up the pace of releasing their own products after years of development. 

The search engine giant opened up the waitlist for its A.I. chatbot Bard last month so more people could try the tool and provide feedback. Bard is still not as widely available as ChatGPT, which has over 100 million monthly active users. But the race for A.I. is existential for Google, as chatbots could make the company’s search business irrelevant.  

During Bard’s launch, Pichai noted that Google’s chatbot would make mistakes and that “things will go wrong”—and indeed they did. In a public demo, Bard made a factual mistake that wiped out $100 billion in Google’s market value. A recent study found that Bard often misinforms users when prompted in ways that bypass its guardrails. To be sure, Microsoft’s OpenAI-powered Bing chatbot and OpenAI’s ChatGPT have made their share of factual errors as well. 

In the 60 Minutes interview, the Alphabet CEO reiterated that while A.I. could revolutionize human civilization, the A.I. race will not be without its threats, calling it a “cat and mouse game.” Pichai gave the example of how Google addressed spam on Gmail by constantly refining its algorithms to better detect it. He said the same would have to be done with deep fakes created using A.I., but added that this alone may not suffice.

“Over time, there has to be regulation. You’re going to need laws against…there have to be consequences for creating deep fake videos that cause harm to society,” Pichai said. 

Other CEOs have also called for the regulation of A.I., including chip company Nvidia’s Jensen Huang and Tom Siebel of A.I. software company C3.ai. Even former Google CEO Eric Schmidt warned that the tech industry could face a “reckoning” if the right controls and regulations aren’t put in place. 

Europe has started considering measures that limit the use of A.I. in certain cases where copyrighted materials are involved. Meanwhile, in the U.S., the Biden Administration said last week that it’s seeking public comments on potential rules.
