Regulating Artificial Intelligence: What You Need to Know

We are witnessing an artificial intelligence revolution of unprecedented speed and complexity. Countries worldwide are racing to develop and deploy AI for commercial and military use. With this rapid growth of AI applications comes a need for rules to guide AI development, so that we can ensure its responsible use without sacrificing technological potential or innovation.

The upcoming Regulation of Artificial Intelligence (RAI) will be a significant step toward a future where intelligent machines make decisions that impact our lives.

The New York Times of May 12, 1997, featured a headline that proclaimed, “Swift and Slashing, Computer Topples Kasparov.”

For those unfamiliar with it, the article reported on one of the most famous chess matches in history: Deep Blue, an IBM supercomputer, defeated world champion Garry Kasparov in a six-game match.

For many, this contest between man and machine was much more than a mere competition: it revealed a shrinking gap between artificial and human intelligence, and its impact was profound.

The release of ChatGPT by OpenAI will be remembered as another remarkable moment in the relationship between machines and humans. This time it is not a game but the power of language, and its many uses, that is being explored.

Garry Kasparov’s reflections on his defeat to Deep Blue, offered ten years later in an interview with the CBC, are fitting for this occasion.

Kasparov said:

“I always say, machines won’t make us obsolete. Our complacency might.”

ChatGPT has reminded us of the power of artificial intelligence to reshape our lives in areas such as education, medicine, law, and commerce. Even though it does not appear that ChatGPT will replace us completely, its effects on these aspects of human life could be profound.

We must take Kasparov’s words to heart and not become complacent. Our elected representatives, above all, need to be in charge of how artificial intelligence develops – not the other way around.

A Regulatory Conundrum

The House of Commons is considering Bill C-27, the Digital Charter Implementation Act. It includes the Artificial Intelligence and Data Act (AIDA), which would become Canada’s first AI legislation, placing guardrails on AI uses and imposing penalties of up to $25 million for non-compliance.

This is clearly a positive move, yet putting AIDA or similar plans into action is likely to run into several difficulties.

First, technology advances rapidly while the legislative process is slow and deliberate. It takes a great deal of time for laws concerning artificial intelligence to be approved by the House and Senate, and in the meantime it is hard to predict what AI will be capable of.

Second, exponentially growing risks have proven extremely difficult to manage in the past. Consider how quickly the COVID-19 pandemic spread, greatly burdening hospitals and other vital services.

AI’s rate of adoption is likely to accelerate as the technology advances. ChatGPT gained over a million users within a week of its launch, and OpenAI has already revealed plans for a more powerful version of the software.

AIDA primarily targets clear misuses of AI, such as data security violations or financial fraud. More worrying are the ambiguous areas. In education, for example, some have argued that AI will make homework obsolete. This raises the question: will future students end up smarter or dumber?

AI is revolutionizing the world as we know it, and more advances are coming. While this can be frightening for some, it is important to remember that people are working hard to ensure AI is ethically sound and properly regulated. As society moves forward with AI, things will only become more interesting.


