China is not generally the country that Western democracies look to when crafting new laws. But starting this week, it will serve as a case study on how to address a problem that's perplexing legislators across the globe: people appearing to say or do things they never did, thanks to deepfakes made with artificial-intelligence tools.
On Tuesday, new Chinese regulations came into effect that prohibit the use of deepfakes deemed harmful to national security or the economy. The rules also state that deepfakes must be prominently labeled as synthetically generated or edited if they might be misconstrued as real.
China, of course, has different priorities than democracies around the world. Its media outlets often serve as mouthpieces for the ruling party, which directly controls the country's major media groups. Reporters Without Borders calls China the “world’s largest prison for journalists” and ranks it near the bottom of its World Press Freedom Index.
Still, other nations can learn from China’s attempt to regulate deepfakes, argues Graham Webster, who tracks the nation’s digital-policy developments while running the DigiChina Project at Stanford University.
“Although China’s political system and its government’s goals differ significantly from many other countries, I argue the world can learn from China’s early and proactive attempt to navigate challenges faced around the world,” he tweeted this week.
Speaking to the Wall Street Journal, he described the new rules as “one of the world’s first large-scale efforts to try to address one of the biggest challenges confronting society.”
Observers outside China, he noted, will get to watch how such rules play out in the real world and how businesses are affected.
There’s little doubt other governments will soon have to address deepfakes, which are rapidly growing more sophisticated in both video and audio form. Wedbush tech analyst Dan Ives described an “A.I. arms race that is taking place globally” in a Wednesday note to clients, pointing to Microsoft’s investment in ChatGPT maker OpenAI.
Elections might be swayed by candidates appearing to say things they never did. Digital simulations of celebrities, including Tom Cruise, have appeared in ads without their permission. In wartime, national leaders might appear to tell citizens to surrender, as happened to Ukrainian President Volodymyr Zelensky in the early days of Russia’s invasion.
More recently, Microsoft showed off a text-to-speech A.I. model, called VALL-E, that needs only a three-second sample of someone’s voice to simulate it, as Ars Technica reported Monday. It can then create audio of that person seeming to say anything, complete with timbre, emotional tone, and even room acoustics.
Microsoft researchers did not provide the code for others to experiment with. VALL-E, they noted on an example website, “may carry potential risks in misuse…such as spoofing voice identification or impersonating a specific speaker.”
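Because the code is unreleased, outsiders can only guess at the interface. As a rough illustration of what this kind of zero-shot voice cloning involves, here is a minimal Python sketch; the module, model class, method names, and checkpoint are all invented placeholders, and only the three-second-prompt workflow comes from Microsoft’s description.

```python
# Hypothetical sketch of a VALL-E-style zero-shot voice-cloning call.
# Everything here except the workflow (a ~3-second prompt in, arbitrary
# speech out) is invented: "hypothetical_tts", ZeroShotTTS, and the
# checkpoint name are placeholders, not a real API.

import soundfile as sf  # real library for reading/writing audio files

from hypothetical_tts import ZeroShotTTS  # stand-in for the unreleased model

model = ZeroShotTTS.from_pretrained("vall-e-base")  # placeholder checkpoint

# A ~3-second enrollment clip is all the model needs to capture the
# speaker's timbre, emotional tone, and even the room acoustics.
prompt_audio, sample_rate = sf.read("speaker_sample_3s.wav")

# Synthesize arbitrary text in the enrolled voice.
audio = model.synthesize(
    text="Any sentence the speaker never actually said.",
    speaker_prompt=prompt_audio,
    prompt_sample_rate=sample_rate,
)

sf.write("cloned_output.wav", audio, sample_rate)
```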
They also suggested some guidelines, writing: “If the model is generalized to unseen speakers in the real world, it should include a protocol to ensure that the speaker approves the use of their voice and a synthesized speech detection model.”
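The researchers describe that protocol only in words. One way to read it, expressed as a short Python sketch in which the consent registry, detector, and model objects are all hypothetical stand-ins, is as two gates around the synthesis call:

```python
# Illustrative sketch of the safeguards the researchers describe: confirm
# the speaker approved use of their voice, then confirm the output is
# flagged by a synthesized-speech detection model. The registry, detector,
# and model objects are hypothetical, not a published API.

def safe_synthesize(model, text, speaker_prompt, speaker_id,
                    consent_registry, detector):
    # Gate 1: the speaker must have approved the use of their voice.
    if not consent_registry.has_approved(speaker_id):
        raise PermissionError(f"No consent on record for speaker {speaker_id}")

    audio = model.synthesize(text=text, speaker_prompt=speaker_prompt)

    # Gate 2: a detection model should recognize the output as
    # machine-generated, so downstream systems can screen for it.
    if not detector.is_synthetic(audio):
        raise RuntimeError("Output evades the synthetic-speech detector; "
                           "refusing to release it")

    return audio
```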