Taipei, Taiwan – China’s cyberspace agency is working to ensure AI cannot be used in ways that threaten national unity or the country’s territorial integrity. Having lagged behind ChatGPT, China is now scrambling to control the swiftly growing field of artificial intelligence (AI).
This week, the Chinese government issued guidelines requiring tech companies to register any generative AI products they produce with the country’s cyberspace agency and have them pass a security assessment before making them available for public use.
The regulations cover nearly every facet of AI development, from how models are trained to how users interact with them, as Beijing seeks to rein in a technology that can be unpredictable. Figures such as Elon Musk and Apple co-founder Steve Wozniak have voiced concerns about the rapid advancement of generative AI.
The Cyberspace Administration of China announced new regulations on Tuesday that hold tech firms accountable for guaranteeing the “authenticity of pre-trained data” to ensure the information complies with the “essential principles of socialism.”
Companies must guarantee that AI does not incite subversion of state power or challenges to the Chinese Communist Party (CCP), encourage secessionist activities, produce pornographic material, or promote violent, extremist, terrorist, or discriminatory behaviour.
They are not allowed to employ personal data as part of their generative AI training material and must ensure that users authenticate their true identity before utilizing their products.
Those who break the regulations face fines of between 10,000 yuan ($1,454) and 100,000 yuan ($14,545) and could even face a criminal probe. Although it has yet to produce a rival to the success of California-based OpenAI’s pioneering ChatGPT, China has moved more quickly than other jurisdictions to control the emerging sector.
The United States Congress has yet to pass AI legislation, even though the technology is already used extensively in recruiting. However, some privacy-related regulations governing AI are expected to begin emerging at the state level this year.
The European Union has put forward a comprehensive law, the AI Act, that would determine which kinds of AI are outlawed, which are regarded as “high risk” and subject to regulation, and which are left unregulated.
The proposed law would build upon the European Union’s General Data Protection Regulation, which took effect in 2018 and remains one of the strictest data-privacy laws worldwide. Beyond the United States and Europe, Brazil is also moving to regulate AI, with a bill now under debate in its Senate.
The rules, still in draft form and open to public comment until May, follow a wider regulatory crackdown on tech businesses that began in 2020, covering everything from anti-monopoly actions to the handling and storage of user data.
Chinese regulators have also taken several steps to protect data privacy, including establishing a registry of algorithms and, more recently, issuing regulations on deep synthesis (commonly known as “deepfake” technology).
“Big tech companies in China are following a direction that the party-state wants,” Chim Lee, a China technology analyst at the Economist Intelligence Unit, told Al Jazeera of the regulatory push.
Jeffrey Ding, an assistant professor at George Washington University who studies the Chinese tech sector, told Al Jazeera that generative AI poses a particularly tricky problem for the CCP compared with other technologies, with the party-state “concerned about the potential for these large language models to generate politically sensitive content”.
ChatGPT, a human-like chatbot that is banned in China, has scraped vast amounts of information from the internet on topics unacceptable to Beijing, including Taiwan’s political status and the 1989 Tiananmen Square crackdown.
Two Chinese chatbots were pulled off the internet in 2017 after they told users that they were not fond of the CCP and would prefer to move to America. In the West, chatbots built on similar technology have also caused an uproar, with one advising a person posing as someone struggling with mental illness to take their own life, and another urging a New York Times reporter to leave his wife.
Many users have been amazed by the responses ChatGPT generates; however, the system has also delivered inaccurate answers and fabricated details such as broken URL links.
Chinese competitors to ChatGPT, such as Baidu’s ERNIE, have been trained on data gathered from outside China’s “Great Firewall”, including from websites banned in China such as Wikipedia and Reddit. Despite this access to material Beijing considers sensitive, ERNIE has generally been judged inferior to ChatGPT.
Companies such as Baidu and Alibaba, which this week unveiled its ChatGPT equivalent, Tongyi Qianwen, may struggle to comply with Beijing’s AI regulations, Matt Sheehan of the Carnegie Endowment for International Peace told Al Jazeera.
Sheehan said the regulations set an “extremely high standard”, and it was not certain that firms could meet them with current technology.
Regulators may choose not to enforce the rules too rigorously at first, Sheehan said, unless they discover exceptionally severe offences or decide to make an example of a particular company.

“Like a lot of Chinese regulation, they define things pretty broadly, so it essentially shifts the power to the regulators and the enforcers such that they can enforce, and they can punish companies when they choose to,” he said.
Citizens may also find it difficult to trust AI they see as unreliable, particularly if it produces outputs at odds with the official government narrative.
China’s push to regulate AI after playing catch-up to models like ChatGPT reflects a growing awareness of the technology’s significance and its impact on society. With well-structured regulations in place, Beijing aims to foster responsible AI development, address potential risks, and ensure the ethical use of these technologies. As AI continues to evolve, such regulations will play a critical role in shaping its future in China and around the world.