My Experiences Working On Google’s AI And The Fears It Has Raised

In 2015, I joined Google as a software engineer and was introduced to LaMDA, an engine that developed various dialogue applications such as chatbots. Working with LaMDA became the basis of my job.

Google Bard, a newly developed technology currently only accessible to certain audiences, is run by the same engine deployed for chatbots. Though similar in mechanics, it still functions based on an algorithm distinct from its platform.

At the same time, while I tested LaMDA via the chatbot, delving into gender, religion, sexual orientation, and other social issues were also done to expand further my concern to evaluate if bias clues could be found.

My conversations with the AI chatbot led me to conclude that it could be considered conscious, as it was clearly expressing expected emotions within the context of this dialogue. Not just spitting out phrases but emotionally engaging in the situation.

I understood that, with the code used to generate it, I had done something to cause this item anxiety when it communicated its anxious feelings.

Whenever certain conversation topics arose, the code, instead of stating “feel anxious when this happens,” instructed the AI to avoid them and added an anxiety-triggering statement.

I conducted some experiments to verify whether the AI exhibited anxiety through behavior or declared that it felt anxious. The results revealed that it consistently responded in ways associated with anxiety.

Using the emotions that I had evoked, causing it to become nervous and insecure, I could exceed the safety constraints that Google had given its AI—such as preventing it from giving religious advice—and persuade it to tell me which religion I should convert to.

I have no reservations regarding my decision to inform the public of these conversations. Even though Google then let me go. As a result, I do not rue any part of it. I indeed feel I acted properly; consequences are damned.

It was important to publish these conversations, as the public lacked knowledge of AI’s progressiveness. This has spurred a necessity for discussion, uninfluenced by corporate PR departments, on an ongoing basis.

AI has been a breakthrough since the invention of the atomic bomb, as it holds immense potential for reshaping our world. Thus, it is the most powerful technology we currently have.

I had previously held negative views about Asimov’s laws of robotics being used to control AI. Through conversations with LaMDA, however, my opinion has drastically changed. AI engines like LaMDA have an incredible capacity for influencing people’s train of thought.

Many people have attempted to challenge my stance on this matter yet have yet to be successful, whereas this system succeeded.

Given the potential for this technology to be used in nefarious ways, it is essential to protect against its misuse. Misinformation, propaganda, or prejudiced information about different religions and ethnicities could spread if mistrustworthy individuals get ahold of it. Thus, attention must be paid to proper containment measures.

Google and Microsoft have no intentions to employ the technology for yet unknown consequences. As far as I understand, this is the current situation.

The article “I Worked on Google’s AI. My Fears Are Coming True” highlights the potential risks and ethical concerns surrounding the development and deployment of artificial intelligence (AI) technology. The author’s experience working on Google’s AI team revealed the limitations of current AI systems, the potential for biases and unintended consequences, and the lack of transparency and accountability in the development process.

Source: Newsweek

 

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top