LaMDA: Can the new Google chatbot be sentient?


What is the News?

A Google engineer testing the company’s modern conversational agent, Language Model for Dialogue Applications (LaMDA), for bias and hate speech claimed that the software had become sentient.

He argues that the software’s consent must be obtained before experiments are run on it. However, Google and many tech experts have dismissed the claim.

What is LaMDA?

LaMDA is short for ‘Language Model for Dialogue Applications’. It is Google’s modern conversational agent, built on a neural network capable of deep learning.

Training data: Unlike image classifiers trained on pictures of cats and dogs, the algorithm is trained on 1.56 trillion words of public dialogue data and web text covering diverse topics.

Built on: LaMDA is built on Google’s open-source neural network architecture, Transformer, and has more than 137 billion parameters learned from this massive corpus of language data.

Status of LaMDA: The chatbot is not yet public, though select users have been permitted to interact with it.

Versions: LaMDA was unveiled at Google’s annual developer conference in 2021, and LaMDA 2 in 2022.

Significance: Google claims that LaMDA can make sense of nuanced dialogue and engage in fluid, natural conversation. Further, advanced conversational agents like LaMDA could revolutionise customer interaction and aid AI-enabled internet search.

How is LaMDA different from other chatbots?

Chatbots like ‘Ask Disha’ of the Indian Railway Catering and Tourism Corporation Limited (IRCTC) are routinely used for customer engagement. However, their range of topics and chat responses is narrow, and the dialogue is predefined and often goal-directed.

LaMDA, by contrast, is a non-goal-directed chatbot that can converse on a wide range of subjects.
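The distinction can be illustrated with a minimal rule-based bot. This is a hypothetical sketch (the intent keywords and responses are invented for illustration, not IRCTC’s actual ‘Ask Disha’ implementation): every reply is predefined and tied to a narrow set of intents, whereas a model like LaMDA generates open-ended responses.

```python
# Minimal sketch of a goal-directed, rule-based chatbot.
# All intents and responses below are hypothetical examples.

# Predefined intents: keyword triggers mapped to canned responses.
INTENTS = {
    "pnr": "Please share your 10-digit PNR number to check booking status.",
    "refund": "Refunds are credited to the original payment method in 5-7 days.",
    "cancel": "To cancel a ticket, open 'My Bookings' and select the ticket.",
}

FALLBACK = "Sorry, I can only help with PNR status, refunds, and cancellations."

def reply(user_message: str) -> str:
    """Return the first predefined response whose keyword appears in the message."""
    text = user_message.lower()
    for keyword, response in INTENTS.items():
        if keyword in text:
            return response
    # Anything outside the predefined topics gets a fallback -- the hallmark
    # of a narrow, goal-directed bot, unlike an open-domain conversational model.
    return FALLBACK

print(reply("What is the status of my PNR?"))
print(reply("Tell me about the weather"))
```

Because the bot can only map keywords to fixed strings, it can never stray off-topic; that narrowness is exactly what LaMDA’s open-ended, non-goal-directed dialogue removes.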

What are the challenges associated with AI Chatbots?

a) Perpetuating bias: AI software that learns from historical data can inadvertently perpetuate discrimination. Chatbots may absorb historical bias and echo hate speech.

b) A false analogy of learning: AI systems do not learn the way humans do. For instance, a baby learns a language through close interaction with caregivers, not by ploughing through a massive amount of language data.

Source: The post is based on the article “Can the new Google chatbot be sentient?” published in The Hindu on 15th June 2022.
