The EU is moving to tightly regulate AI technologies such as ChatGPT, which has recently drawn widespread attention in the news. Its goal is to create strict rules that cover the breadth and reach of these technologies.
Two years ago, the European Commission proposed the first legislative framework for regulating AI and submitted it to the member states and the European Parliament – a proposal that did not include specific rules for ChatGPT or similar AI systems.
If passed into law, the proposal would impose limitations and transparency requirements on the use of artificial intelligence (AI) systems. Systems such as ChatGPT would have to comply with these restrictions before they could be deployed.
This risk-based approach is expected to be applied uniformly across all EU member states, providing a consistent framework for the new AI rules.
Under the Commission’s proposal, AI systems fall into four categories: those that pose an “unacceptable risk,” those that present a “high risk,” those with a “limited risk,” and finally, those with a “minimal risk.”
AI systems that threaten people’s lives, livelihoods, and rights fall into the unacceptable-risk group and are banned outright. This prohibition covers AI systems and applications that override an individual’s free will, manipulate human behavior, or assign social scores to people.
The high-risk category covers AI used in critical infrastructure, education, healthcare (including robot-assisted surgery), CV screening for recruitment, and credit scoring. It also covers evidence evaluation in immigration, asylum, and border management; verification of travel documents; biometric identification systems; and judicial or democratic processes.
High-risk AI systems must satisfy strict requirements before they can be placed on the market. These include being free of discrimination, keeping results accessible for human oversight, and producing traceable, observable output.
Law enforcement may employ biometric recognition technology in public spaces to combat terrorism and serious crime. However, such AI systems may be deployed only with approval from legal authorities and remain restricted to narrowly defined uses.
Systems in the limited-risk group must also meet certain transparency requirements to be compliant, chiefly making clear to users how the system works and that they are dealing with AI.
Chatbots fall into this limited-risk group under the proposal. The transparency obligation ensures that users understand they are conversing with a machine rather than a human being, reducing the risk associated with the interaction.
Minimal-risk applications, such as AI-supported video games or spam filters, are left largely untouched by the regulation. These systems pose little or no danger to the rights or security of individuals.
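To make the tiered structure concrete, here is a minimal illustrative sketch – not part of the proposal itself, with tier names and obligations paraphrased from the descriptions above – of how a compliance team might model the four risk categories:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the Commission's proposal."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict pre-market requirements
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)

# Hypothetical mapping of tiers to the obligations sketched above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "non-discriminatory behavior",
        "results accessible for human oversight",
        "traceable, observable output",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],  # no specific obligations
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations a system in the given tier must meet."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.LIMITED))
```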
The penalties for violating the proposed regulations are substantial: fines of up to €30 million or 6% of total worldwide annual turnover, whichever is higher.
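As a rough illustration of how the maximum fine scales – a sketch assuming the “whichever is higher” rule from the proposal, with a hypothetical turnover figure:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Maximum fine under the proposal: the higher of EUR 30 million
    or 6% of total worldwide annual turnover."""
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
# 6% of turnover (EUR 120 million) exceeds the EUR 30 million floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # -> 120,000,000
```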
The proposed Artificial Intelligence law is now before the European Parliament (EP) and the member states for approval. Once adopted, it will become effective and binding.
The regulations are not without controversy and may pose challenges for businesses and governments, but they represent a necessary response to the growing use of AI in our lives. It remains to be seen how effective these regulations will be in practice, yet they are a good starting point for ensuring that AI is developed and used in a way that benefits everyone.
As the EU continues to refine its approach to AI regulation, it will be important to balance innovation and ethics to ensure that we can fully harness this transformative technology’s potential while protecting the values that define our societies.