Google has been exploring the integration of its cutting-edge Language Model for Dialogue Applications (LaMDA) into its popular Assistant since 2020, according to recent reports. LaMDA represents a significant step forward in conversational artificial intelligence, enabling more natural and engaging conversations between humans and AI systems.
With its announcement of LaMDA in May 2021, Google signaled plans to bring more natural and interactive conversational features to its popular products, including Assistant, Search, and Workspace.
A new report released today sheds light on why Google, unlike OpenAI and Microsoft, has held off on launching LaMDA and incorporating it into its products.
Google’s experience with chatbots goes back to its 2020 reveal of Meena, LaMDA’s predecessor. Today, the Wall Street Journal dug into the company’s history with these bots to provide more detail.
The Google Brain team introduced Meena as a conversational neural model trained end-to-end. According to its research paper, the chatbot can discuss virtually any topic, drawing on the context of the conversation to produce sensible replies.
The team wanted to release the tool in a limited fashion, much as OpenAI had done with GPT-2. However, Google leadership blocked the move, saying it would conflict with the company’s AI principles around safety and fairness.
A Google spokesman says:
“The chatbot had been through many reviews and barred from wider releases for various reasons over the years.”
Meena continued to evolve internally, becoming LaMDA in 2020 with more data and processing power. The team then sought to integrate it into Google Assistant to build on that progress.
From the report:
“The team overseeing Assistant began conducting experiments using LaMDA to answer user questions, said people familiar with the efforts. However, Google executives stopped short of making the chatbot available as a public demo.”
Today’s report also discussed the need to balance the direct answers this technology can provide against its reliance on source material from across the web.
To keep website owners happy, Google executives have suggested that generative AI in search results should include source links, according to a person familiar with the company’s internal discussions.
Google recently unveiled Bard, its conversational AI for answering questions directly in Search, and announced plans to bring large language models (LLMs) to Gmail and Google Docs. The company remains rather quiet about its plans for Google Assistant, though.
Aitrends Take
People have high expectations for the accuracy of Search results. Expectations for Google Assistant’s voice-first interactions may be lower, which makes the Assistant a more forgiving place to integrate LaMDA smoothly.
Google could take advantage of these short-format interactions to deliver an immediate answer, in contrast to Bard’s tendency toward a preamble, while gaining real-world usage and gathering valuable training data.
There are also concerns about the technology’s ethical and privacy implications. As with any new technology, it must be developed and used responsibly and transparently.
Integrating LaMDA into Google Assistant could transform how we interact with AI systems. Still, realizing its benefits will require careful consideration, responsible development, and mitigation of the potential risks.
Source: 9to5google