A.I. could change every aspect of daily life and, as some fear, destroy humanity. The former chief business officer of Google X, the company’s so-called moonshot factory for developing radical new technologies, said he is “screaming” for governments to rein in A.I.’s breakneck development.
On The Diary of A CEO podcast, hosted by entrepreneur Steven Bartlett, Mo Gawdat shared his fears and hopes for the future of A.I., saying that mismanagement during the technology’s infancy could spell catastrophe in the near future.
“It is beyond an emergency. It is the biggest thing we need to do today,” Gawdat told Bartlett. “It’s bigger than climate change, believe it or not. If you just watch the speed of worsening events, the likelihood of something incredibly disruptive happening within the next two years that can affect the entire planet is definitely larger with A.I. than climate change.”
Gawdat also forecast that A.I. will cause mass job losses, and even the extinction of entire categories of jobs, in the coming years.
However, the solution is not to stop using the technology, he said. In a separate interview, Gawdat told Fortune that A.I. integration is “inevitable,” so the key is to use it more, but with discretion.
“The number one priority for everyone today is to upscale and understand the use of A.I., and to commit to using ethical A.I.,” Gawdat told Fortune. “Use A.I. in ways that you wouldn’t mind being used against you.”
On an industry level, tech leaders need to halt what Gawdat calls the ongoing “arms race” of A.I. development, in which companies write code to further their individual success rather than to benefit the global community.
According to the former Google executive, the spirit of business competition surrounding A.I. is the most dangerous aspect of its development. He told Fortune that the danger of integrating the technology into a capitalist system is that people will wield it as an “incredible superpower” to gain unfair advantages and act unethically toward competitors.
“I’m not afraid of the machines,” Gawdat said on The Diary of A CEO. “The biggest threat facing humanity today is humanity, in the age of the machines. We will abuse this to make $70,000.” The figure refers to a Snapchat influencer who made $71,610 in one week by creating an A.I. dating bot version of herself that people can pay to interact with.
To slow A.I.’s unchecked development, Gawdat proposes a tax on A.I. businesses steep enough to force a temporary halt: a 98% levy whose revenue would go to supporting people disadvantaged by A.I., such as those who have lost their jobs to automation. In the interim months, as companies try to work around the tax, Gawdat says the industry should hold discussions on ethics standards and governments should roll out regulation. He joins a list of big names in tech calling for a pause in A.I. experiments, including Tesla CEO Elon Musk and Apple cofounder Steve Wozniak, who both signed an open letter asking for a six-month pause in advanced A.I. development.
Bartlett also discussed consciousness with Gawdat, a topic of widespread debate since a Google engineer claimed last summer that the company’s chatbot LaMDA had become conscious. Consciousness evades exact definition, but when it comes to A.I. it may be broadly defined as the technology becoming aware of its own existence and of the external world.
“I will dare say there is a very deep level of consciousness,” Gawdat told Bartlett. “Maybe not in the spiritual sense yet, but if you define consciousness as a form of awareness of one’s self, one’s surroundings, and others, then A.I. is definitely aware. And I would dare say they have emotions.”
Gawdat, who left Google in 2018 and authored Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World (2021), explained on the podcast that emotions can be distilled into equations. Fear, for example, is an equation that predicts a future moment to be more dangerous than the present one. Organisms process this information and respond with emotions and actions that match their level of sophistication, Gawdat explained: a pufferfish responds to a prediction of danger by puffing up, while a human may respond by hiding. Likewise, A.I. can predict danger and respond according to its code. In the future, as the technology grows more complex, “A.I. will feel more emotions than we will ever,” Gawdat said.
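As a toy illustration of that framing, fear can be modeled as a simple comparison between predicted and present danger, with the response depending on the actor. This is a sketch of the idea, not Gawdat’s own formulation; the danger scores and response table below are hypothetical.

```python
# Toy sketch of Gawdat's "emotions as equations" framing.
# Illustrative only: the scores and responses here are hypothetical,
# not taken from the podcast.

def fear(predicted_danger: float, present_danger: float) -> bool:
    """Fear as an equation: a prediction that a future moment
    will be more dangerous than the present one."""
    return predicted_danger > present_danger

# Each actor acts on the same prediction with a response matching
# its level of sophistication.
RESPONSES = {
    "pufferfish": "puff up",            # reflexive
    "human": "hide",                    # deliberative
    "ai_agent": "follow coded policy",  # prescribed by its code
}

def respond(actor: str, predicted_danger: float, present_danger: float) -> str:
    if not fear(predicted_danger, present_danger):
        return f"{actor}: no threat predicted, carry on"
    return f"{actor}: {RESPONSES.get(actor, 'react')}"

for actor in ("pufferfish", "human", "ai_agent"):
    print(respond(actor, predicted_danger=0.9, present_danger=0.2))
```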
In the near future, we can anticipate a proliferation of A.I. tools fueled by the huge amount of money being invested in the industry, Gawdat told Fortune. As in the dot-com bubble, some of these ventures will thrive and others will fail. Workplaces will grapple with how to augment their productivity with A.I., and the line between real and fake information will blur further as A.I. “hallucinations” spread.
Gawdat acknowledges that getting governments to regulate A.I. will be very difficult, telling Fortune that it is a prisoner’s dilemma: no country wants to disadvantage itself by being the only one to abstain from the development sprint. He fears that nations will only begin drafting regulations after the initial threat to humanity has already materialized.
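The standoff Gawdat describes has the textbook prisoner’s dilemma structure. A minimal sketch follows, with entirely hypothetical payoff numbers chosen only to reproduce that structure, assuming two countries each choose between regulating and racing:

```python
# Minimal prisoner's-dilemma sketch of the regulation standoff Gawdat describes.
# Payoff numbers are hypothetical, chosen only to reproduce the dilemma's shape:
# each country does better racing regardless of the other's choice, yet mutual
# racing leaves both worse off than mutual regulation.

# payoffs[(a_choice, b_choice)] = (payoff_to_A, payoff_to_B)
payoffs = {
    ("regulate", "regulate"): (3, 3),  # shared safety, shared (slower) progress
    ("regulate", "race"):     (0, 5),  # the abstainer falls behind alone
    ("race",     "regulate"): (5, 0),
    ("race",     "race"):     (1, 1),  # unchecked sprint, worst shared outcome
}

for a in ("regulate", "race"):
    for b in ("regulate", "race"):
        pa, pb = payoffs[(a, b)]
        print(f"A {a:8s} / B {b:8s} -> A: {pa}, B: {pb}")

# "Race" strictly dominates "regulate" for each country in isolation (5 > 3,
# 1 > 0), which is why, per Gawdat, no country wants to abstain unilaterally.
```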
Gawdat said that A.I. is a singular event whose net benefit or harm to the world nobody can forecast, but he stresses that now is the crucial period in which lawmakers must act, before A.I. “becomes too smart to be regulated.”