Beijing lays out censorship rules for AI-generated content

China’s tech giants have released ChatGPT clones, prompting Beijing to introduce draft rules governing the research, development, and use of generative AI chatbots. However, the rules leave open the question of whether these technologies can actually abide by socialist values.

Yesterday, the Cyberspace Administration of China released a draft of the Administrative Measures for Generative Artificial Intelligence Services, outlining twenty-one points. In particular, such services must adhere to the core values of socialism. They must not produce content that could undermine state power, subvert the socialist system, or promote the splitting of the country. China also won’t allow content that may disrupt economic and social order, including violent, obscene, or pornographic material and false information.

The document strictly bans any promotion of terrorism, extremism, ethnic animosity, and discrimination. Developers of AI tools are responsible for preventing racial, ethnic, religious, national, regional, gender-based, and age-based discrimination through careful selection of training data, algorithm design, and other measures.

Users must sign up under their real names. Moreover, those who provide AI services, or help others to do so, will be treated as the producers of the generated content. As such, the producer bears responsibility under applicable privacy regulations, including those governing personal information.

China encourages AI development despite technological challenges

The CAC stated that the state encourages the development and use of AI algorithms and frameworks.

The Beijing Municipal Bureau of Economy and Information Technology appears to back this stance: in February, it announced an initiative to support the development of AI models and open-source frameworks.

Realizing Beijing’s aspirations will prove difficult, since the technology in question is still in its infancy and prone to errors. At the same time, the rules may limit access to the very training data that could prevent such mistakes from occurring in the first place.

Last month, Baidu unveiled its ERNIE model, which drew controversy over its poor handling of simple requests and its noticeable censorship of politically sensitive material.