China announced new regulations for generative A.I.—the technology that powers OpenAI’s ChatGPT and Google’s Bard chatbots—on Thursday. The rules will govern every publicly available chatbot and will be overseen by the Cyberspace Administration of China (CAC), the country’s top internet regulator. Exempt from the regulations are generative A.I. research and technologies developed for use in other countries.
Major Chinese tech companies, such as Alibaba and Baidu, among others, have not yet released their generative A.I. tools for public use. Experts believe they were waiting for the government to release its final regulations before doing so. (Although Thursday’s policies are titled “Interim Measures,” leaving open the possibility of future changes.) Chinese versions of generative A.I. chatbots and image generators are still either in development or being trialed by B2B customers, CNN reports. Alibaba, for example, released a text-to-image generator called Tongyi Wanxiang last week that’s still only available for beta testing by corporate clients. And Baidu, China’s search engine giant, released its Ernie chatbot in March to only about 650 enterprise cloud customers.
Developers will also need to register their algorithms with the Chinese government and undergo a “security assessment” if their services are deemed to have “social mobilization ability” capable of influencing public opinion—a policy that appears, at least initially, to be in keeping with China’s existing censorship of online conversations.
The new law features an overarching requirement to “adhere to core socialist values.” That same section of the regulations goes on to outline a litany of illegal uses of generative A.I.: some meant to protect citizens, such as bans on promoting terrorism and disseminating “obscene pornography,” and others meant to entrench government control over the nascent technology, such as prohibitions on using generative A.I. to “subvert the state power,” “damage the image of the country,” or “undermine national unity.”
Domestic national security concerns related to A.I. have been echoed at the highest levels of the Chinese government. At a meeting in May, Chinese President Xi Jinping called for a “new pattern of development with a new security architecture” to address the “complicated and challenging circumstances” A.I. posed to national security, PBS reported.
Thursday’s rules were drafted by the CAC and approved by seven other agencies, including the Ministry of Education, the Ministry of Public Security, and the State General Administration of Radio and Television, according to the CAC’s website. The involvement of such a broad array of state agencies lends some credence to the notion that the government hopes A.I. will be used by virtually every industry in the country, something outlined in the new policy as well. The new regulations come amid a brewing A.I. arms race between China and the U.S. Last December, Chinese officials identified A.I. development as an economic priority for 2023 at the government’s annual Central Economic Work Conference, Fortune’s Nicholas Gordon reported.
China’s regulations offer a guide for A.I. rules elsewhere
Thursday’s regulations are an updated version of preliminary guidelines published in April, which tech companies deemed too restrictive. They now offer a blueprint to the U.S. and other countries on how to contend with some of the hot-button issues surrounding generative A.I., including possible copyright infringement and data protection.
They include some of the world’s first explicit requirements that generative A.I. companies respect intellectual property rights. The topic was recently brought to the fore in the U.S. when comedian Sarah Silverman sued OpenAI and Meta for using her copyright-protected work to train their machine learning models.
The CAC’s new policy also outlines certain privacy rights for individual users. Generative A.I. platforms in China will be responsible for protecting personal information should users disclose it while using the services. And if companies plan to collect or store any otherwise protected information, they’ll have to offer terms of service that “clarify the rights” users have when using the platform. Terms of service are widely used with tech applications ranging from social media to app stores, but aren’t yet mandated by law for generative A.I. platforms in the U.S., according to a May congressional report. Additionally, all existing Chinese privacy protection laws will also apply to A.I., according to the CAC’s regulations. These provisions could be especially illustrative for the U.S., which currently lacks a comprehensive data protection law.
The recently released measures also offer clues about China’s global ambitions for A.I., and specifically the policies that will eventually govern its use around the world. Developers and suppliers, like chipmakers, were “encouraged” to participate in “the formulation of international rules related to generative artificial intelligence,” according to the new laws.
The idea of a Chinese desire for comprehensive regulations has been batted around in the past, most recently by Tesla CEO Elon Musk. On Wednesday, he predicted that China would be open to a “cooperative international framework for A.I. regulation,” something he says he discussed with officials during his recent visit to China.