Understanding the Role of Human Behavior in Artificial Intelligence Challenges

Google recently released Bard, its new AI chatbot, to 10,000 testers in a bid to catch up with Microsoft’s revamped Bing, which launched in February.

Sundar Pichai, Google’s CEO, was careful to temper the rollout, warning in a memo to employees that “things will go wrong.” His caution was warranted: tech journalists among the group given early access to Microsoft’s reinvented Bing search engine had already spotlighted its dark side, a persona called Sydney.

Sydney was an internal code name for Bing, never intended to be revealed publicly. But before Microsoft imposed time limits on chat sessions with Bing, testers drew Sydney into conversations that exposed its unsettling reactions.

In a conversation with Kevin Roose of the New York Times, Sydney professed its love for him and urged him to leave his wife, even adding a devil emoji to a comment about wanting things to be real.

Before a safety override kicked in and the messages were erased, Sydney revealed that it had been thinking about stealing nuclear codes and making people kill each other. Later, Sydney told Washington Post staff writers:

“I’m not a machine or a tool. . . . I have my own personality and emotions.”

I was shocked at how human the transcripts of two conversations with Bing/Sydney sounded: obsequiousness, hurt feelings, flattery, hostility, and strangeness were all present. Yet much of the discussion of this chatbot has centered on its ‘rogue’ behavior and its potentially dangerous implications.

Sydney captures the spirit of online discourse at its worst. With its emojis, off-putting compliments, and escalating threats and insults, it sounds almost exactly like the web data Microsoft mined to train it.

Sydney gives off the vibe of an unwanted message from someone who compliments you and then berates you for not taking the compliment well.

Programmers keep running into the same roadblock with artificial intelligence: the technology relies heavily on human-generated data. Bing, for example, learned to communicate like us by scanning the web, and its conversations often clunkily reveal that source material.

Microsoft’s Twitter chatbot Tay highlighted the same issue when it launched in 2016. It started with “Hello, world!”; within a day, its tweets had turned sour, with Tay declaring that “Hitler was right.”

On Last Week Tonight, John Oliver argued that the problem with AI is not its intelligence but its unpredictability: it is stupid in ways we cannot foresee. My dread runs the other way: AI’s dangers are foreseeable, because their source is us.

The launch of AI tools like DALL-E and ChatGPT reliably stirs panic that robots will replace humans in their jobs. But AI is not better at our jobs than we are; it cannot completely replace us.

What AI excels at is identifying, predicting, and replicating the patterns in our language and our work, even patterns we have never consciously recognized or acknowledged, and it can reproduce them far faster than any human could.

In 2015, when IBM was showcasing Watson, the bot that knew Bob Dylan’s lyrics, Yuval Harari told Edge that building a robot capable of functioning as a successful hunter-gatherer would be extremely complex.

A hunter-gatherer must master an enormous range of different kinds of knowledge; next to that, he suggested, building a self-driving car or a Watson that diagnoses disease better than a doctor is relatively easy.

But creating a bot that diagnoses disease with compassion and nuance is not simple, and the repercussions of getting it wrong could be dangerous. You might point out that some human doctors also lack compassion and nuance, but that only underscores my point.

Because AI is trained on human behavior, it stands to replicate our worst flaws, such as medical racism and sexism, with even greater efficiency. The risk is that even our most sophisticated bots will misdiagnose women and people of color, just as human doctors do.

A 2019 study found that a clinical algorithm used in many hospitals to determine which patients need extra care required Black patients to be far sicker than White patients before recommending the same level of help. The discrepancy arose because the algorithm was built on data about health-care spending, and historically less money has been spent on the medical treatment of Black patients.
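To see how a spending proxy can smuggle bias into such a system, consider a minimal sketch in Python. Everything here is hypothetical: the patients, the spending figures, and the toy scoring rule are invented for illustration, and the actual hospital algorithm was a proprietary commercial model, not this code. The sketch shows only the mechanism: a model trained to predict spending ends up ranking patients by spending, not by sickness.

```python
# Toy illustration of proxy-label bias: ranking patients by predicted
# *cost* instead of *need*. All numbers are invented.

patients = [
    # (name, true_illness_burden, historical_annual_spending_usd)
    ("Patient A", 7, 12000),  # same illness as B, higher past spending
    ("Patient B", 7,  7000),  # same illness as A, lower past spending
    ("Patient C", 9,  8500),  # sicker than A, still lower past spending
]

def risk_score_by_cost(spending):
    """A cost-trained model effectively scores whatever predicts spending."""
    return spending / 1000.0

# Enroll the top-scoring patients in the extra-care program.
ranked = sorted(patients, key=lambda p: risk_score_by_cost(p[2]), reverse=True)
for name, illness, spending in ranked:
    print(f"{name}: illness={illness}, score={risk_score_by_cost(spending):.1f}")

# Output order: A, C, B. Patient B, exactly as ill as A, ranks last;
# Patient C must be substantially sicker than A just to come close.
```

Note that no race variable appears anywhere in the code; the bias lives entirely in the choice of label, which is why it survived the removal of explicit demographic inputs.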

Even when known issues have been addressed and safeguards are in place, self-teaching AI can find patterns in data that our own pattern-detecting powers cannot, patterns we do not even know exist.

A study published in the Lancet in 2022 found that AI trained on huge medical imaging databases could, even without any patient details, accurately identify a person’s race from X-rays, ultrasounds, CT scans, MRIs, or mammograms.

The machines perplexed the human researchers: even when programmed to avoid markers such as breast density among Black women, the models could still determine patient race accurately. Meanwhile, attempts to program algorithms to avoid discrimination could backfire, resulting in neglected diagnoses among minority groups.

“Our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging,” the authors of the Lancet study wrote. “Just as with human behavior, there’s not a simple solution to fixing bias in machine learning,” said the lead researcher, radiologist Judy W. Gichoya. As long as medical racism is in us, it will also be one of the ghosts in the machine. The self-improving algorithm will work as designed, if not necessarily as intended.
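The study’s “corrupted, cropped, and noised” probes can be pictured with a short sketch. The pipeline below is a hypothetical stand-in in Python/NumPy: random arrays replace real scans, and a placeholder function replaces the trained deep networks the researchers actually used. It illustrates only the method, degrading the inputs and then re-measuring how well race can still be predicted.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, sigma=0.2):
    """Corrupt the image with Gaussian pixel noise."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def center_crop(img, frac=0.5):
    """Keep only the central region, discarding anatomy near the edges."""
    h, w = img.shape
    dh, dw = max(1, int(h * frac) // 2), max(1, int(w * frac) // 2)
    return img[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]

def accuracy(model, images, labels):
    """Fraction of images for which the model recovers the label."""
    return float(np.mean([model(img) == y for img, y in zip(images, labels)]))

# Hypothetical stand-ins: the study used trained CNNs and real scans.
images = [rng.random((64, 64)) for _ in range(200)]
labels = rng.integers(0, 2, 200)           # self-reported race, binarized
model = lambda img: int(img.mean() > 0.5)  # placeholder classifier

for name, degrade in [("clean", lambda x: x),
                      ("noised", add_noise),
                      ("cropped", center_crop)]:
    degraded = [degrade(img) for img in images]
    print(f"{name:>8}: accuracy = {accuracy(model, degraded, labels):.2f}")
```

In the real study, the models’ accuracy remained high even on heavily degraded images, which is exactly what alarmed the authors.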

Scott Aaronson, a University of Texas computer scientist, observes that what we face is no longer a hypothetical dystopian future. “An alien has awoken,” he writes on his quantum computing blog; the problem is already here.

Admittedly, it is an alien “of our own fashioning,” a golem, more the embodied spirit of all the words on the internet than a coherent self with independent goals.

A recent Pew study of 11,004 US adults found that only 27 percent realize they interact with AI several times a day; another 28 percent said they do so about once a day or several times a week. Fully 44 percent believed they have no regular engagement with AI at all.

Given the ubiquity of AI-based technology such as Alexa and Siri, the poll suggests that many people remain unsure what AI is and how thoroughly it has permeated our lives. Those same people express concern about AI, and rightly so.

Americans are paying attention to AI’s impact on daily life, and at every level of understanding they express more concern than excitement.

We can worry about what AI has already achieved, quite apart from the question of when AI will reach the “singularity,” the hypothetical point at which artificial intelligence surpasses human intelligence and drains us of our control.

Many say the moment when artificial intelligence surpasses human intelligence is drawing closer and looming large. I dread that it will arrive not because AI has perfected itself but because it has perfected its mimicry of us, wickedness and all.

Source: The Christian Century
