The Impact Of Artificial Intelligence On The Great Ethics Cage Fight Of 2023

It didn’t take long for people I know to start fiercely debating AI ethics, or for those debates to degenerate into personal attacks. That may be an exaggeration, but not by much; dinner conversations and private chats have been filled with harsh words, snide remarks, and broken friendships.

All because of a piece of statistical magic almost no one had ever heard of before November 30th, 2022, called GPT (Generative Pre-trained Transformer).

For those who haven’t followed closely: ChatGPT was initially built on GPT-3.5, a language-only AI model. A few months later it was upgraded to GPT-4, which is significantly more powerful because it can process both language and images. GPT-5, expected later this year, is likely to be more potent still.

Tens of billions of dollars have been invested by tech giants such as Google, Meta, Adobe, IBM, Nvidia, and Bloomberg, along with a swarm of startups, all scrambling to compete with ChatGPT.

AI burst into public consciousness faster than even its biggest boosters anticipated. Almost immediately, people began calling for a pause to consider what it could all mean before going any further.

Jabir ibn Hayyan, an alchemist writing around 800 AD, is credited with conceiving of takwin – an Arabic alchemical doctrine that sought to create life, even human life, in the laboratory. This marks the beginning of a long history of ethical anxiety about artificial beings.

Around 1580 AD, Rabbi Judah Loew ben Bezalel of Prague is said to have created the golem, a clay figure brought to life. Neither he nor Jabir succeeded, of course, but one can assume that the conversations around their efforts were intense.

Popular culture is full of works exploring artificial intelligence, beginning with Jonathan Swift’s Gulliver’s Travels, Mary Shelley’s Frankenstein, and Neal Stephenson’s Snow Crash, and continuing through movies like Blade Runner and The Matrix.

Isaac Asimov, in collaboration with his editor John W Campbell, was the first to codify laws of robot ethics, in the 1942 short story “Runaround”, later collected in I, Robot. Though presented as fiction, it is widely regarded as the first serious attempt to tackle this complex issue. The Three Laws read:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

This was a fine first step in the ethical discussion, but a good lawyer could drive a truck through it. What constitutes “injury”, and what does “come to harm” include – defamation, insults, neglect?

And what would happen if the robot were handed the trolley problem and forced to choose between two harmful actions? This classic thought experiment in ethics imagines a bystander who can save five people from being hit by a runaway trolley by diverting it onto another track, where it will kill one person.
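To make the deadlock concrete, here is a minimal sketch in Python. It is purely illustrative: the action names, casualty counts, and predicate are assumptions for this example, not anything from Asimov. It encodes the First Law as a hard constraint and shows that the trolley problem leaves a robot with no permissible move, since both acting and refraining harm a human.

    # Illustrative sketch: Asimov's First Law as a hard constraint,
    # applied to the trolley problem. Names and numbers are hypothetical.
    CASUALTIES = {
        "divert": 1,      # pulling the lever kills one person
        "do_nothing": 5,  # inaction allows five people to die
    }

    def violates_first_law(action: str) -> bool:
        """A robot may not injure a human or, through inaction,
        allow a human to come to harm."""
        return CASUALTIES[action] > 0

    permissible = [a for a in CASUALTIES if not violates_first_law(a)]
    print(permissible)  # [] -- neither option is allowed under the First Law

And under the Second Law the robot must also obey a human order to pull the lever, which the First Law forbids; the rules simply do not compose into a decision.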

Latest Updates On The Heated Debate Around Ethics And Morality

Post-Asimov, as AI research began to bear fruit in the 21st century, ethical debates sprang up among specialists, but they stayed largely out of public view. As real-world applications of artificial intelligence drew closer, the ethical concerns drew more attention with them; still, no one was certain when AI would become prominent in our day-to-day lives.

The Asilomar Conference on Beneficial AI, held in California in 2017, was the first conference devoted solely to AI ethics. It brought together philosophers, sociologists, and technologists, who discussed and drew up 23 ethical principles for the field. The principles are serious in tone and noble in intent. One of them reads:

AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Well, yes. Easy to say. Kumbaya, right?

More recently, the European Commission and the UK have published pre-legislative frameworks laying out principles of their own. The UK’s proposal, for example, includes the following:

AI systems must comply with existing UK laws, such as the Equality Act 2010 and the UK GDPR, and must not discriminate unfairly against individuals or create unfair commercial outcomes.

Do you see the problem here? Within a year or two, or four, it is likely that someone like Putin, Xi Jinping, or a religious extremist could use AI to shut down air-traffic control at a major airport one week, design a bioweapon the next, and open the floodgates of the Hoover Dam the week after that.

There is no chance that the leader of a country like Russia or China, one with no qualms about abducting tens of thousands of children, deporting them to another nation, and oppressing entire ethnicities, will accept the ethical standards espoused by the Western world. Absolutely none.

Two news stories have stirred up a great deal of attention online over the past couple of weeks. The first was an open letter from the Future of Life Institute, released on March 22nd, which has garnered signatures from more than 1,300 notable people in various fields, among them Yuval Noah Harari, Steve Wozniak, Elon Musk, Tristan Harris, and Lawrence Krauss.

The letter is short and to the point: AI systems more powerful than GPT-4 could have unpredictable consequences, so it urges all AI labs to pause development beyond GPT-4 for at least six months.

The letter bounced around for a few days while people wrung their hands, presumably worrying about whether terrorists and Putin and Xi were laughing at us and downloading Alpaca – a large language model that Stanford researchers built for about $600 and that performs comparably to ChatGPT.

A few days later, Eliezer Yudkowsky, a prominent figure in artificial general intelligence and machine-alignment research, weighed in. His advice? Steel yourself.

Eliezer Yudkowsky says:

“…The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance’, but as in ‘that is the obvious thing that would happen’.”

Some wise people became alarmed; others declared there was no cause for fear. Some scoffed at the whole idea; others traded insults. Now the cage is set for the fight. Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg.

The “great ethics cage fight” of 2023 underscores the critical need for ethical considerations to guide the development and integration of AI into our society. It’s imperative for all stakeholders to actively participate in this discourse and work towards the responsible and ethical use of AI. By prioritizing ethics in the advancement of AI, we can ensure that this transformative technology benefits humanity as a whole.

Source: Daily Maverick
