Regulators who want to get a grip on an emerging generation of artificially intelligent killing machines may not have much time left to do so, governments were warned on Monday.
As autonomous weapons systems rapidly proliferate, including across battlefields in Ukraine and Gaza, algorithms and unmanned aerial vehicles are already helping military planners decide whether to hit targets. Soon, that decision could be outsourced entirely to the machines.
“This is the Oppenheimer Moment of our generation,” said Austrian Foreign Minister Alexander Schallenberg, referencing J. Robert Oppenheimer, who helped invent the atomic bomb in 1945 before going on to advocate for controls over the spread of nuclear arms.
Civilian, military and technology officials from more than 100 countries convened Monday in Vienna to discuss how their governments can control the merger of AI with military technologies, two sectors that have recently animated investors and helped push stock valuations to historic highs.
Spreading global conflict combined with financial incentives for companies to promote AI adds to the challenge of controlling killer robots, according to Jaan Tallinn, an early investor in Alphabet Inc.’s AI platform DeepMind Technologies.
“Silicon Valley’s incentives might not be aligned with the rest of humanity,” Tallinn said.
Governments around the world have taken steps to collaborate with companies integrating AI tools into defense. The Pentagon is pouring millions of dollars into AI startups. The European Union last week paid Thales SA to create an imagery database to help evaluate battlefield targets.
Tel Aviv-based +972 Magazine reported this month that Israel was using an artificial intelligence program called “Lavender” to identify assassination targets. After the story, which Israel has disputed, United Nations Secretary-General António Guterres said he was “deeply troubled” by reports of AI use in the Gaza military campaign and that no part of life-and-death decisions should be delegated to the cold calculations of algorithms.
“The future of slaughter bots is here,” said Anthony Aguirre, a physicist who predicted the trajectory the technology would take in a short 2017 film seen by more than 1.6 million viewers. “We need an arms-control treaty negotiated by the United Nations General Assembly.”
But advocates of diplomatic solutions are likely to be frustrated, at least in the short term, according to Alexander Kmentt, Austria’s top disarmament official and the architect of this week’s conference.
“A classical approach to arms control doesn’t work because we’re not talking about a single weapons system but a combination of dual-use technologies,” Kmentt said in an interview.
Rather than striking a new “magnum opus” treaty, Kmentt suggested, countries may be forced to muddle through with the legal tools already at their disposal. Enforcing export controls and humanitarian laws could help keep the spread of AI weapons systems in check, he said.
In the longer run, once the technology becomes accessible to non-state actors and potentially to terrorists, countries will be forced to write new rules, predicted Arnoldo André Tinoco, Costa Rica’s foreign minister.
“The easy availability of autonomous weapons removes limitations that ensured only a few could enter the arms race,” he said. “Now students with a 3-D printer and basic programming knowledge can make drones with the capacity to cause widespread casualties. Autonomous weapons systems have forever changed the concept of international stability.”