‘Warning alarm’ when talking to AI for more than 2 hours in China

China has prepared a draft AI service regulation that would require providers of interactive artificial intelligence (AI) services, such as ChatGPT and DeepSeek, to warn users through pop-up notifications and prompt them to log out when symptoms of addiction are detected.

According to China Central Television (CCTV) on the 28th, the Cyberspace Administration of China announced the previous day that it had released a "draft for comment on the management of AI-based anthropomorphic dialogue services" and would collect public opinions until the 25th of next month.

The regulation applies to companies that use AI technology to imitate human thinking and provide services in China that interact with users through text, images, voice, or video. The draft focuses on preventing users' dependence on and addiction to AI. AI service providers must clearly indicate to users that they are "talking to AI," not a person. Service companies must also notify users, through pop-up windows or other means, to stop using the service if they have used it for more than two hours consecutively. For underage users, parents must be given an "addiction warning alarm," and functions such as time limits must be provided.

The draft also states that services must use datasets that conform to core socialist values and embody excellent traditional Chinese culture. Accordingly, the creation and distribution of content intended to threaten national security or disrupt economic and social order are prohibited. Obscene, gambling-related, violent, or crime-inciting content is likewise banned. The Chinese authorities are expected to issue official laws and regulations after the comment period ends.

JENNIFER KIM

US ASIA JOURNAL
