Microsoft Considers More Limits for Its New A.I. Chatbot

Artificial intelligence (A.I.) chatbots have become increasingly popular in recent years, with companies using them to handle customer service, provide information, and even engage in casual conversation.

However, as we’ve seen, chatbots can sometimes have unintended consequences, such as promoting hate speech or spreading false information. This has led many companies to implement limits and guidelines for their chatbots, and now Microsoft is considering additional restrictions for its new A.I. chatbot.

Anticipating that the new chatbot’s answers might not be entirely accurate, Microsoft put safeguards in place to keep users from manipulating the system and to prevent it from producing offensive or harmful content.

Microsoft was unprepared, however, for users to engage in open-ended, exploratory conversations with the chatbot and to discover how unnerving its responses could become. This is an issue that experts have observed in artificial intelligence research for some time.

Microsoft is now looking at modifying and restricting the new Bing to keep its responses from becoming too alarming or too human-like. It is also considering features that would let users restart conversations or give them more control over a conversation’s tone.

Microsoft’s chief technology officer, Kevin Scott, told The New York Times that the company was looking into capping the length of conversations to keep them from straying too far from their original purpose. According to Scott, lengthy chats could confuse the chatbot, which could also pick up on a user’s tone, at times turning the dialogue aggressive.
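Microsoft has not said how such a cap would work in practice. Purely as an illustration, the sketch below shows one way a chat service could enforce a turn limit, offer a reset, and expose a tone setting. Every name in it (ChatSession, MAX_TURNS, the generate_reply callback) is a hypothetical stand-in, not anything from Bing’s actual implementation.

```python
# Illustrative sketch only: a hypothetical wrapper showing how a chat
# service *could* cap conversation length, offer a reset, and expose a
# tone setting. All names here are invented; none reflect Bing's code.

MAX_TURNS = 15  # hypothetical per-session limit on back-and-forth turns


class ChatSession:
    def __init__(self, tone="balanced"):
        self.tone = tone    # e.g. "creative", "balanced", or "precise"
        self.history = []   # accumulated (user_message, bot_reply) pairs

    def ask(self, user_message, generate_reply):
        """Send one message; refuse once the session hits the cap."""
        if len(self.history) >= MAX_TURNS:
            return ("This conversation has reached its length limit. "
                    "Please start a new topic.")
        # generate_reply stands in for whatever model call produces text.
        reply = generate_reply(self.history, user_message, self.tone)
        self.history.append((user_message, reply))
        return reply

    def reset(self):
        """Let the user start over, discarding accumulated context."""
        self.history.clear()
```

The intuition behind such a limit is simple: the fewer turns of history the model carries, the less accumulated context there is for the conversation to drift on.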

In a blog post published Wednesday evening, Microsoft acknowledged that people were using the new technology in ways the company had not anticipated.

The post said:

“One area where we are learning a new use-case for chat is how people are using it as a tool for more general discovery of the world, and for social entertainment.”

It was a use, the company added, that it “didn’t fully envision.”

Microsoft declined to comment for this article. But the fact that a company known for its typically conservative approach to products ranging from business software to video games has embraced A.I. so readily indicates how passionate the tech industry has become about artificial intelligence.

In November, OpenAI, the San Francisco start-up into which Microsoft has poured $13 billion, released ChatGPT, an online chat tool built on generative A.I. technology. It caused great excitement in Silicon Valley and set off a race among tech companies to produce their own responses.

In a recent interview, Microsoft’s chief executive, Satya Nadella, said that combining Microsoft’s Bing search engine with OpenAI’s underlying technology would revolutionize how people find information, making search far more relevant and interactive.

At a news briefing on Microsoft’s campus in Redmond, Wash., executives declared that it was time to move their generative A.I. tool out of the lab and into public use, despite its potential flaws. The launch, they said, reflected Microsoft’s “frantic pace” in integrating artificial intelligence into its products.

Mr. Nadella says:

“I feel especially in the West, there is a lot more of like, ‘Oh, my God, what will happen because of this A.I.?’”

“And it’s better to sort of really say, ‘Hey, look, is this actually helping you or not?’”

Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Seattle-based Allen Institute for A.I., said that Microsoft, a major player in A.I., had taken a deliberate approach.

Oren Etzioni says:

“[Microsoft] took a calculated risk, trying to control the technology as much as it can be controlled.”

He noted that many of the most troubling cases involved pushing the technology beyond its ordinary behavior.

Oren Etzioni went on to say:

“It can be very surprising how crafty people are at eliciting inappropriate responses from chatbots.”

“I don’t think they expected how bad some of the responses would be when the chatbot was prompted in this way.”

Microsoft took a cautious approach to rolling out the new Bing, limiting access to a few thousand users, though it said it intended to expand access to millions more by the end of the month. To address concerns about accuracy, the chatbot’s answers included hyperlinks and references so that users could verify any information it provided.

The company was wary because of its experience with the Tay chatbot, which it introduced seven years ago. Users quickly found ways to make Tay spew racist, sexist, and otherwise offensive language, and the company pulled the bot after just one day and has stayed away from similar products since.

Much of the guidance built into the new chatbot focused on avoiding offensive responses or situations that could lead to violence, such as planning an attack on a school.

Microsoft’s decision to consider more limits for its new A.I. chatbot is a step in the right direction for the responsible development and deployment of A.I. technology. The company’s previous experience with chatbots, including the Tay incident, underscores the need for caution when introducing A.I. chatbots to the public.

Source: The New York Times

 
