China Payment Association Warns Of ChatGPT Risks: What You Need To Know

On Monday, the Payment & Clearing Association of China issued a warning against using ChatGPT, developed by Microsoft-backed OpenAI, and comparable artificial intelligence tools, citing potential risks such as cross-border data breaches.

The association, which is supervised by the People’s Bank of China, said that personnel in the payment sector must follow laws and regulations when using tools such as ChatGPT and should not upload confidential data related to finance or national security.

OpenAI does not allow users in China to access its AI-driven chatbot, but demand for it is skyrocketing. Consequently, many Chinese companies are striving to develop their own versions and to incorporate the technology into their products.

Residents in China are unable to create OpenAI accounts, but some use virtual private networks and foreign phone numbers to get around these restrictions and access the chatbot.

Italy launched a probe into suspected violations of privacy regulations, leading to a temporary suspension of ChatGPT there, and other European nations have been exploring more stringent measures.

The surge of enthusiasm in China surrounding the chatbot has helped lift tech, media, and telecom stocks, though analysts warn it could be an unsustainable bubble.

Also on Monday, the Chinese state media outlet Economic Daily published an opinion piece urging regulators to step up oversight and crack down on speculation in the sector.

Chinese stocks in the computer, media, and communications equipment sectors fell between 3.4 percent and 5.6 percent on Monday.

As AI technology becomes increasingly prevalent in our daily lives, it is crucial to consider the potential risks and take steps to mitigate them. While AI has the potential to bring many benefits, including improved efficiency and convenience, there are also risks associated with its use, such as bias and security vulnerabilities.

Developers need to prioritize transparency and accountability when building and deploying AI products. Users, for their part, should be aware of the potential risks and exercise caution when using AI-powered products. By working together to address these concerns, we can ensure that AI technology continues to benefit society while minimizing its potential negative impacts.

Source: business.inquirer.net
