Bing Trouble: Google & OpenAI Unveil Pandora For AI Writing

For a brief period, Microsoft Corp. appeared as if it would surpass Google’s success. Its previously struggling search engine, Bing, was being improved with OpenAI’s innovative chatbot system.

The initial optimism surrounding the potential of artificial intelligence has been diminished due to an unexpected reality: even AI experts do not fully comprehend the power that can be unleashed by AI when it is used in a real-world setting.

Reports of Bing users being met with unreasonable, passionate, and even menacing reactions to certain questions they asked the AI system have surfaced. One person was called a “poor researcher,” while another journalist was informed that he wasn’t in a content marriage. Sydney, the chatbot of Bing, has made Google’s slip-up with their Bard bot look small by comparison. Nevertheless, these problems are only part of a much larger problem.

The driving force behind the advanced chatbots Bard and OpenAI’s ChatGPT is known as large language models (LLMs). These are computer programs trained on billions of words from the public web and can create text that resembles human speech. The LLM would be like a car’s engine, while ChatGPT is like the car itself. Since 2020, OpenAI has been offering access to this language model.AI has been offering access to this language model.

Despite the aggressive competition for search bots, they are being given away liberally, leading to issues visible in Bing and Bard becoming more widespread. Furthermore, this could make it harder to identify these problems.

Software engineers have been investigating ways to include language models in enterprises, condensing customer remarks into a single statement, responding to website queries, or creating digital advertisement copy.

OpenAI has not revealed the number of people who have utilized its GPT-3 language model, known as LLM. However, a contender in this domain has suggested that it is likely tens of thousands. Subscribers may be charged hundreds or even thousands of dollars per month for using the said model, which is the top choice among all free and open-source LLMs. Google’s own LaMDA language model could become similarly sought after, too, due to its abundant resources.

For years, Google has kept its highly advanced model a closely guarded secret, believing that its reputation could be damaged if they were to make it publicly available. However, with Microsoft recently declaring its intention to use OpenAI’s language model for the Bing search engine, Google appears to have changed its stance.

Bard was launched the following day, and in March, an even more remarkable announcement was made – LaMDA would be made available for use by third parties, something that had been considered unimaginable just months before.

The Cambridge Analytica scandal of 2018 could be a warning to Google, Microsoft, and OpenAI that this strategy could come back to torment them. It only took one irresponsible user for Facebook to have their access to mountains of user data revoked.

Last week, Twitch took action and shut down an animated parody of Seinfeld, which AI-generated. Unfortunately, the “less-sophisticated version” of GPT-3 used to create the dialogue resulted in characters making transphobic and homophobic remarks – one of the major risks associated with bias.

GPT-3 was exposed to big data from various sources, such as 7,000 unpublished books, Wikipedia entries, and news articles. Unfortunately, this also left it open to picking up on the occasional example of discriminatory or offensive material.

With the aid of human moderators, OpenAI has eliminated much of the prejudice from its model; however, this effort is not infallible and appears particularly vulnerable to technical errors. Additionally, it’s almost impossible to spot when bias is embedded deep within an LLM (a massive network composed of billions of parameters that is even opaque to those who created it).

The issue of misinformation, a difficulty that has plagued ChatGPT, also affects language models. Last November, CNET, a technology news platform, used an LLM to generate 77 pieces of financial advice; however, they did not specify which one. After the site returned and reexamined the articles, corrections had to be made for 41.

My experience with ChatGPT suggests that the misinformation rate is between 5% and 10%, much lower than what was reported in Protocol’s January 2022 report, where researchers estimated the “hallucination rate” to be between 21% and 41%. OpenAI has not made public what it considers to be its language models’ or ChatGPT’s hallucination rate.

Due to the low rate, companies utilizing LLMs must remember to take the programs’ output with a pinch of salt and understand that it is almost unfeasible to scrutinize the model for potential errors.

Meanwhile, Google is deep into its efforts to develop artificial intelligence for consumer use. At the same time that it’s opening up some of its AI software to general developers, the search engine juggernaut is also pursuing military applications for drones through Project Maven. It may be years before we see how either company’s bot-related projects play out, but they both seem poised to open a new era in intelligent machines–and Pandora’s Jar.

Source: washingtonpost

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top