China has announced that it will require a security review of all generative Artificial Intelligence (AI) services before they are allowed to operate in the country, casting uncertainty over the ChatGPT-like bots recently unveiled by some of China’s largest tech companies, such as Baidu Inc. Under the new regulation, any AI service must pass a screening process before it can be used, which could affect how these corporations deploy their products in the country. The regulation also signals growing concern about the potential dangers of AI technology, particularly regarding privacy and data protection.
The Cyberspace Administration of China has released draft guidelines, open to public feedback, which state that service providers must guarantee the accuracy of their content, uphold intellectual property rights, and not permit anything that could be considered discriminatory or a threat to security.
The country’s internet regulator also requires operators to clearly label any AI-generated content.
China’s requirement of security reviews for services like ChatGPT is a significant step toward the responsible and secure use of AI technologies. It underscores the importance of ethical considerations and risk mitigation in developing and deploying AI systems. As AI advances and plays a larger role in society, regulatory measures such as these reviews will be crucial in ensuring these technologies are used responsibly for the benefit of all stakeholders.