China this week put into force its newest regulations on artificial intelligence-generated content, a watered-down version of stricter draft rules designed to keep the country in the AI race while maintaining firm censorship of online content.
Rapid advancements in generative AI have stoked global alarm over the technology’s potential for disinformation and misuse, with deepfake videos showing people mouthing things they never said.
Chinese companies have rushed to develop artificial intelligence services that can mimic human speech since the release of San Francisco-based OpenAI’s ChatGPT, which is banned in the country.
Experts say the 24 new rules appear to be diluted from strict draft regulations published earlier this year as Beijing seeks to encourage homegrown entrants to the US-dominated industry.
Here’s what you need to know about Beijing’s regulations, which target services for the general public:
AI ethics
Generative AI must “adhere to the core values of socialism” and refrain from threatening national security and promoting terrorism, violence, or “ethnic hatred”, according to the guidelines.
Service providers must label AI-generated content as such, and take measures to prevent gender, age, and racial discrimination when designing algorithms.
Their software should not create content that contains “false and harmful information”.
AI programmes must be trained on legally obtained data sources that do not infringe on others’ intellectual property rights, and individuals must give consent before their personal information can be used in AI training.
Safety measures
Companies designing publicly available generative AI software must “take effective measures to prevent underage users from excessive reliance on or addiction to generative AI services”, according to the rules published in July by Beijing’s cyberspace watchdog.
They must also establish mechanisms for the public to report inappropriate content, and promptly delete any illegal content.
Service providers must conduct security assessments and submit filings on their algorithms to the authorities if their software is judged to have an impact on “public opinion”, the rules say — a step back from a stipulation in earlier draft rules that required security assessments for all public-facing programmes.
Enforcement
The rules are technically “provisional measures” subject to the conditions of pre-existing Chinese laws.
They are the latest in a series of regulations targeting various aspects of AI technology, including rules on deep synthesis technology that came into effect earlier this year.
“From the outset and somewhat differently from the EU, China has taken a more vertical or narrow approach to creating relevant legislation, focusing more on specific issues,” partners at international law firm Taylor Wessing said.
While an earlier draft of the rules suggested a fine of up to 100,000 yuan ($13,824) for violations, the latest version says anyone breaking the rules will receive a warning or face suspension, with more severe punishment applying only if they are found to be in breach of existing laws.
“Chinese legislation falls between the EU and the United States, with the EU taking the most stringent approach and the United States adopting the most lenient one,” Angela Zhang, associate professor of law at Hong Kong University, told AFP.
Supporting innovation
Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Centre, noted that while the rules remain partly aimed at maintaining censors’ strict control over online content, several restrictions on generative AI that appeared in an earlier draft have been softened.
“Many of the strictest controls now yield significantly to another factor: promoting development and innovation in the AI industry,” Daum wrote on his China Law Translate blog.
The scope of the rules has been dramatically narrowed to apply only to generative AI programmes available to the public, excluding research and development uses.
“The shift might be viewed as indicating that Beijing subscribes to the idea of an AI race in which it must remain competitive,” Daum said.