The Cyberspace Administration of China (CAC) has launched a three-month special campaign to crack down on AI technology abuses, aiming to regulate AI services and applications, promote the healthy and orderly development of the industry, and protect citizens’ legal rights.
The special action will target the platform sources of AI technology, addressing not only the applications and products accessible to users but also emphasizing platform responsibility. It aims to tackle issues such as false information, pornography, superstition, fraud, and other abuses of AI-generated content, according to the CAC announcement on Wednesday.
In the first phase, the CAC will strengthen the governance of AI technology at its source, clean up and rectify illegal AI applications, tighten the management of AI synthesis technology and content labeling, and encourage websites and platforms to improve their detection and verification capabilities, it said.
In this phase, the CAC will address six major issues. The first involves AI abuses that violate others’ rights. This covers illegal AI products that use generative AI technology to provide content services without completing the required filing or registration process for large models, as well as functions that violate laws or ethics, such as “one-click nudity” tools or the cloning and editing of others’ biometric features, such as voices and faces, without consent, in violation of their privacy.
The second issue involves teaching or selling illegal AI products, including tutorials on creating fake videos or voice swaps with illegal AI tools, the sale of illegal items such as “voice synthesizers” and “face-swapping tools,” and the promotion or marketing of these products.
The third issue deals with the management of AI training data, including using data that infringes on others’ intellectual property or privacy, or data that is false, invalid, or collected from illegal sources. The lack of a mechanism for managing training data or failing to regularly inspect and clean up illegal data is also a concern.
The fourth issue focuses on platform responsibility, particularly weak security management measures on platforms, failure to implement content labeling requirements, and insufficient safeguards in key sectors such as medical, financial, and educational services, which require specialized safety measures to prevent problems like “AI prescriptions,” “investment manipulation,” and “AI hallucinations.”
In the second phase, the CAC will target prominent issues such as using AI technology to spread rumors, false information, pornography, and lowbrow content, impersonating others, and engaging in online “troll” activities. This phase will focus on clearing related illegal and harmful information and dealing with violations by accounts, MCN organizations, and platform sites.
The second phase will address seven key issues:
- The use of AI to create and spread rumors, including the fabrication of information related to current political events, public policies, social issues, international relations, and emergency situations, as well as malicious interpretations of major policies.
- The use of AI to create and spread false information, such as combining unrelated images and videos to create misleading content, including exaggerated and pseudoscientific claims in fields like finance, education, justice, and healthcare, or using AI for fortune-telling or divination.
- The use of AI to create and spread pornographic and vulgar content, including generating inappropriate or suggestive images and videos, or producing content with graphic violence or grotesque features.
- The use of AI to impersonate others, such as creating fake personas of experts, celebrities, and historical figures to deceive users and make a profit.
- The use of AI for online “troll” activities, such as creating fake accounts and generating low-quality content to gain traffic and manipulate online discussions.
- Violations by AI products and services, including counterfeit AI websites or applications, and those offering unethical features such as fake-content generation or adult or otherwise inappropriate chat services.
- The violation of minors’ rights, such as AI applications that induce minors to become addicted or contain harmful content affecting their mental or physical health.
An official from the CAC emphasized the importance of this special action for preventing the abuse of AI technology and protecting the legal rights of internet users, saying that it’s crucial for local internet authorities to fully recognize the significance of this initiative and to guide platform sites in adhering to the requirements of this action.
This includes improving content review mechanisms, enhancing technological detection capabilities, and ensuring compliance with regulations.
The government will also promote policies related to AI and raise public awareness of AI literacy to guide the correct understanding and application of AI technologies, the official said.