Microsoft cofounder Bill Gates says he’s “scared” about artificial intelligence falling into the wrong hands, but unlike some fellow experts who have called for a pause on advanced A.I. development, he argues the technology may already be on a runaway train.
The latest advancements in A.I. are revolutionary, Gates said in an interview with ABC published Monday, but the technology comes with many uncertainties, and U.S. regulators are failing to keep pace. With research into human-level artificial intelligence advancing fast, over 1,000 technologists and computer scientists, including Twitter and Tesla CEO Elon Musk, signed an open letter in March calling for a six-month pause on advanced A.I. development until “robust A.I. governance systems” are in place.
But for Gates, A.I. isn’t the type of technology you can just hit the pause button on.
“If you just pause the good guys and you don’t pause everyone else, you’re probably hurting yourself,” he told ABC, adding that it is critical for the “good guys” to develop more powerful A.I. systems.
The definition of who the good guys might be is subjective, but the race for dominant A.I. is magnifying existing rivalries on the corporate and geopolitical scale. OpenAI’s successful launch of ChatGPT late last year sparked an arms race between Microsoft and Google to corner the generative A.I. market, but the clash has spilled over internationally as well, as the U.S. and Chinese governments are now battling for A.I. supremacy.
ChatGPT, for instance, is blocked in China. Meanwhile, Baidu, China’s most popular search engine, is developing its own generative A.I. application, but it failed to impress investors when it launched in March. China has already raced ahead of the U.S. on creating A.I. regulations, opening the door to more innovation down the line. But some Chinese firms developing A.I., such as tech giant Alibaba, have been hampered by censorship laws that create a more limited pool of data they can use to train their systems.
International A.I. rivalry is not limited to gaining a business advantage. The technology also promises to enhance military capabilities and war strategy.
China is investing at least $1.6 billion annually in A.I.-related military systems, according to a 2021 report by Georgetown University. Russia, meanwhile, has been squarely in the A.I. race since at least 2017, when President Vladimir Putin declared the technology to be “the future, not only for Russia, but for all humankind.” Cyberattacks have been a hallmark of Russia’s military strategy in Ukraine, incursions that could be enhanced by more powerful A.I., while Ukrainian cities have been terrorized by semi-autonomous kamikaze drones since the invasion began last year.
In its 2024 budget proposal from March, the U.S. Department of Defense asked lawmakers to sign off on $1.8 billion in new A.I. investments.
“We’re all scared that a bad guy could grab it. Let’s say the bad guys get ahead of the good guys, then something like cyber attacks could be driven by an A.I.,” Gates said.
The competitive nature of A.I. development means that a moratorium on new research is unlikely to succeed, he argued.
“You definitely want the good guys to have strong A.I.,” he continued. When asked if he could guarantee that the “good guys” would be the ones to develop more powerful forms of A.I., Gates replied: “If you stop the good guys you can guarantee it won’t happen.”
Other experts have made similar arguments that A.I. is unlikely to be reined in because of the number of interested parties. In a recent interview with the New York Times, Geoffrey Hinton, a former Google engineer often referred to as the “Godfather of A.I.,” compared the need for global rules governing A.I. to the international treaties that have restrained nuclear weapons proliferation for decades. But the key difference with A.I., he continued, is that it is much easier for rivals to develop behind closed doors than weapons of mass destruction.
In a CNN interview last week, Hinton added that he would not support a moratorium on A.I. research because it would be difficult to impose internationally, allowing rivals to race ahead.
“I don’t think we should stop the progress. I didn’t sign the petition saying we should stop working on A.I., because if people in America stop, people in China wouldn’t. It’s very hard to verify whether people are doing it,” he said.