GPT-3 AI Language Model Tested For Cognitive Abilities: Results Discussed In Neuroscience News

Researchers investigated the cognitive capacities of GPT-3, an AI language model. They found that while it can compete with humans in some areas, its lack of real-world experience and interaction keeps it from matching human performance in others.

Scientists at the Max Planck Institute for Biological Cybernetics in Tübingen have investigated the general intelligence of the GPT-3 language model, a highly capable artificial intelligence tool.

They probed the model’s capabilities by using psychological tests that evaluate skills such as causal reasoning and deliberation, then compared the outcomes with those of human subjects.

The results of their research paint an inconsistent picture: GPT-3 can keep pace with humans in some areas but falls behind in others, possibly because of its lack of exposure to the real world.

Neural networks can learn to respond to input expressed in ordinary language and produce a broad range of written texts. In 2020, the AI research company OpenAI unveiled GPT-3, which is widely regarded as the most powerful network of this kind.

GPT-3 was trained to generate text by being exposed to huge amounts of data from the web. Not only does it produce articles and stories that can hardly be distinguished from human-written ones, but, surprisingly, it also tackles other challenges, such as math problems or programming tasks.

Making Mistakes Is Part Of Being Human – The Linda Problem

GPT-3’s remarkable skills raise an intriguing question: does it possess cognitive capabilities akin to those of humans?

Researchers at the Max Planck Institute for Biological Cybernetics have conducted psychological tests to assess GPT-3’s general intelligence. These tests evaluate various elements of cognitive ability.

Marcel Binz and Eric Schulz examined GPT-3’s abilities in decision making, information search, and causal reasoning, as well as its capacity to question its own initial intuition.

By comparing GPT-3’s test results with answers given by human subjects, they evaluated how accurate both were and whether GPT-3’s errors resembled those made by humans.

Marcel Binz says:

“One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem.”

The information given about Linda is that she is deeply concerned with social justice and opposes nuclear power. The test subjects are then asked to decide whether she is solely a bank teller or if she is both a bank teller and an active participant in the feminist movement.

Most people opt for the second option, even though from a probabilistic standpoint the added claim makes it less likely: the probability of two events occurring together can never exceed the probability of either event alone. GPT-3 does just what humans do: rather than deciding on the basis of logic, it reproduces the same fallacy we are prone to.
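To make that probabilistic point concrete, the following is a minimal numerical sketch of the conjunction rule. The probability values are invented purely for illustration; they are not drawn from the study or from GPT-3’s behavior.

```python
# Minimal illustration of the conjunction rule behind the Linda problem.
# The probabilities below are hypothetical, chosen only for demonstration.

p_bank_teller = 0.05           # hypothetical: P(Linda is a bank teller)
p_feminist_given_teller = 0.8  # hypothetical: P(feminist | bank teller)

# P(bank teller AND feminist) = P(bank teller) * P(feminist | bank teller)
p_both = p_bank_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_bank_teller:.3f}")
print(f"P(bank teller and feminist) = {p_both:.3f}")

# Because a conditional probability is at most 1, the conjunction can
# never be more probable than the single event on its own.
assert p_both <= p_bank_teller
```

Whatever values are plugged in, the conjunction comes out no more probable than the single statement – which is exactly the logic that both humans and GPT-3 fail to apply in the Linda problem.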

Interacting Actively Is A Human Trait

Marcel Binz went on to say:

“This phenomenon could be explained by the fact that GPT-3 may already be familiar with this precise task; it may happen to know what people typically reply to this question.”

Before being put to use, GPT-3, like any other neural network, went through a training process. It was exposed to vast amounts of text from different sources and learned how people typically employ language and respond when presented with language cues.

The investigators therefore wanted to rule out that GPT-3 was simply reproducing a memorized answer to a familiar problem.

To that end, they devised new tasks with similar challenges. Their findings were mixed: in decision making, GPT-3 performed nearly on par with humans, but in searching for specific information and in causal reasoning, the artificial intelligence clearly lagged behind.

This could be because GPT-3 only passively absorbs information from texts, whereas, as the publication notes, full human cognition involves actively interacting with the environment.

The authors suggest that, as users already communicate with models such as GPT-3 in various applications, future networks may gain insight from these interactions and become increasingly similar to human intelligence.

Summary Of The Artificial Intelligence Research Report

By employing methods from cognitive psychology, we study GPT-3, a recent large language model. Using canonical experiments from the literature, we analyze GPT-3’s decision making, information search, deliberation, and causal reasoning.

GPT-3 appears highly capable: it succeeds in vignette-based tasks, makes sensible decisions from descriptions, outperforms humans in multi-armed bandit tasks, and displays hallmarks of model-based reinforcement learning.

However, small perturbations to vignette-based tasks can lead GPT-3 astray, it fails to explore in a directed way, and it struggles with causal reasoning assignments.

These findings, taken together, deepen our knowledge of current large language models and open the door to using cognitive psychology tools in future research on increasingly sophisticated and hard-to-decipher artificial agents.
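For readers unfamiliar with the multi-armed bandit tasks mentioned in the summary above, the following is a minimal sketch of a two-armed Bernoulli bandit solved by an epsilon-greedy learner. All reward probabilities and parameters here are invented for illustration; this is not the task design used in the study.

```python
import random

# Two-armed Bernoulli bandit with an epsilon-greedy learner.
# All values are hypothetical and chosen only for demonstration.

ARM_PROBS = [0.3, 0.7]   # hypothetical payout probability of each arm
EPSILON = 0.1            # chance of exploring a random arm
N_TRIALS = 1000

values = [0.0, 0.0]      # running estimate of each arm's mean reward
counts = [0, 0]          # number of pulls per arm
total_reward = 0

for _ in range(N_TRIALS):
    # Explore with probability EPSILON, otherwise exploit the best estimate.
    if random.random() < EPSILON:
        arm = random.randrange(len(ARM_PROBS))
    else:
        arm = max(range(len(ARM_PROBS)), key=lambda a: values[a])

    reward = 1 if random.random() < ARM_PROBS[arm] else 0
    total_reward += reward

    # Incremental update of the sample-mean estimate for the chosen arm.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(f"Estimated arm values: {values}")
print(f"Average reward per trial: {total_reward / N_TRIALS:.3f}")
```

In experiments of this kind, an agent’s performance is typically scored by the average reward it accumulates across trials, which allows a direct comparison between human participants and a model.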

The combination of human expertise and AI-powered tools like GPT-3 has the potential to transform the way we communicate and interact with technology in the years to come.

Source: Neuroscience News

 
