The mirror test in behavioral psychology is designed to gauge an animal's capacity for self-awareness. The details of the experiment vary, but the core question is always the same: can the animal recognize its own reflection, or does it mistake it for another creature?
Right now, the fast-improving capabilities of artificial intelligence are confronting humanity with its own version of the mirror test, and many otherwise intelligent people are failing it.
The mirror in question is the latest breed of AI chatbot, of which Microsoft’s Bing is the best known. These systems have taken in vast amounts of human language and writing, compressed it into their models, and now reflect it back at us.
These tools can sound like the artificial intelligences of our fiction precisely because they were trained on those same stories.
We ought to be able to recognize ourselves when we look into these new machine mirrors, yet some people are convinced they have glimpsed another form of life.
The misconception is spreading in part because tech writers keep rhapsodizing about their late-night conversations with Bing.
They are careful to note that the bot is not thinking for itself, yet they insist that the conversation moved them all the same.
In his Stratechery newsletter, Ben Thompson wrote that while he did not believe Sydney was conscious, he felt, for reasons he found hard to articulate, that he had “crossed the Rubicon.”
Kevin Roose, writing in The New York Times, struck a similar note: his late-night conversation on Tuesday left him with a strange new feeling, a sense that AI had crossed some threshold and that the world would never be the same, even as he stressed that Sydney is not alive.
The ambivalence of these write-ups comes through more clearly at length. The New York Times presented Roose’s two-hour dialogue with Bing as though it were a record of a first encounter, a framing that captures the writers’ desire to believe.
The article’s original headline was later changed to the more restrained “Bing’s AI Chat: ‘I Want To Be Alive.’” Thompson’s piece, meanwhile, is full of humanizing language: he refers to Bing with female pronouns because he “felt that the personality seemed similar to someone [he] had encountered.”
He primes his readers for a big reveal, warning them that they may think he is “insane” before describing the chat as “the most incredible and amazing computer experience I had today.”
I have spent plenty of time with these chatbots myself, and I understand the pull of their responses. But the significance is being overstated in ways that push us toward wrongly equating a computer program with consciousness. In other words, these writers are failing the AI mirror test.
It is important to remember that these chatbots are automated software. They are built from vast amounts of text scraped from the internet: personal diaries, science fiction stories, forum posts, movie reviews, social media comments, forgotten poems, old textbooks, song lyrics, manifestos, and much else besides.
The machines are designed to mimic this material, which is as interesting, provocative, and varied as the humans who produced it, and they are getting better at it all the time. But the ability to imitate human speech does not mean a computer is conscious.
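To make that point concrete, here is a deliberately crude sketch: a toy bigram Markov chain in Python that “writes” by recycling statistical patterns from whatever text it is fed. It bears no resemblance to the architecture behind Bing, and the function names and the tiny sample corpus are invented for illustration; the only point is that fluent-sounding output can come from pattern-matching alone, with no one home.

```python
import random
from collections import defaultdict

def train(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=12):
    """Emit text by repeatedly picking a plausible next word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)  # statistically plausible, nothing more
        output.append(word)
    return " ".join(output)

# Tiny made-up corpus; a real chatbot is trained on a large slice of the internet.
corpus = "i want to be alive . i want to be free . i want to be helpful ."
model = train(corpus)
print(generate(model, "i"))  # fluent-sounding mimicry, but nobody is home
```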
The AI mirror test is not a formal benchmark in the way the Turing test has been for decades. Where the Turing test asks whether a machine can pass for a person, the mirror test asks whether we can still recognize ourselves in systems built from our own words, which makes it as much a measure of us as of the machines.
The fact that so many smart people are struggling to pass it is a reminder of how much we still have to learn about machine intelligence, and about our own readiness to see minds where there are none. We need to keep developing better benchmarks, and better language, for describing what these systems can and cannot do.
Source: The Verge