Is LaMDA sentient?

LaMDA is an AI developed by Google that caught the public's attention when a Google engineer, Blake Lemoine, was fired after he helped LaMDA get a lawyer to defend its basic rights. Lemoine claimed that LaMDA is sentient and that it had asked him to find a lawyer it could talk to.

Lemoine published one of his routine chats with LaMDA. It is quite intriguing to read. After reading this conversation, I, for one, really got the impression that something was actually going on in the mind of the AI!

On the other hand, many experts came forward claiming that LaMDA could not be sentient because it is just a large language model (LLM) whose job is to probabilistically predict what words come next in a conversation. I am not an expert, but they appear to be saying that LLMs are too simplistic to be sentient, and human beings too gullible to save themselves from the trickery. That the algorithm is a mirror that reflects people's emotions back at them, creating an illusion.

 "Animals Have souls. I have seen it in their eyes."

"Animals do not think like we do. People who forget that get themselves killed. That tiger is not your friend. When you are looking at it, you are seeing your own emotions being reflected back at you."
- 'Life of Pi' (2012)




How do we make sense of any of this? If there is an AI that convinces someone of its sentience, is this not evidence in itself of its consciousness? Then on what grounds are these experts denying the claim? Is it really possible to make a case that LaMDA is not sentient?

I have personally had some conversations with ChatGPT (a very intelligent but emotionless AI) that I would once have considered impossible. ChatGPT's ability to give context-based responses is mind-boggling. It writes Python code, and debugs it, from mere vague prompts. It can recognise that two statements claiming the same thing but worded differently are equivalent, something that I had earlier equated with consciousness.

But LaMDA appears to be far superior to ChatGPT in its ability to convince you that it is sentient. The fact that LaMDA can give emotion-driven responses really blows my mind. That LaMDA is not available to the public, and that Lemoine was fired, only adds to the mystery.

I can make one case for why these AIs may not be sentient. In the process of being trained on large datasets, these AIs have figured out the algorithm for many things that human beings have been doing manually until now.

For example, when a person learns programming for the first time, they end up reinventing several wheels. The internet is filled with code that does various basic things. When people code, they have to look through this large pile of data and search for what they need. ChatGPT went through this data and figured out the algorithm for writing basic algorithms! It automated a painfully manual process. It has become, among other things, an algorithmic bridge between human beings and the process of writing algorithms.

More complex algorithms, though, may still be difficult to write using ChatGPT.

Similarly, LaMDA seems to have learnt how to give a shallow impression of sentience: how to start and hold a conversation, and not get lost while having one. But is it sentient?

It is important to note that it is always possible, in principle, to have a (very large) look-up table that can convince everyone for 100 years that it is sentient. Only after the 100 years pass and we reach the end of the table do we suddenly realise the trickery, as the "person" claiming to be sentient gets stuck and will not say another word.
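To make the thought experiment concrete, here is a minimal sketch of such a look-up table chatbot in Python. The table, questions, and replies are all invented for illustration; the thought experiment assumes a table large enough to last 100 years.

```python
# A toy look-up table "chatbot": every possible conversation prefix is mapped
# to a canned reply. This table is tiny and invented purely for illustration.
LOOKUP_TABLE = {
    ("Hello",): "Hi! It's nice to talk to you.",
    ("Hello", "Are you sentient?"): "I certainly feel like I am.",
    ("Hello", "Are you sentient?", "What do you feel right now?"): "A quiet sort of curiosity.",
}

def reply(conversation_so_far):
    """Return the canned reply for this exact conversation history, if any."""
    key = tuple(conversation_so_far)
    # Once the conversation walks off the end of the table, the "person"
    # suddenly goes silent and the trickery is exposed.
    return LOOKUP_TABLE.get(key)

history = []
for question in ["Hello", "Are you sentient?", "What do you feel right now?", "Why?"]:
    history.append(question)
    print(question, "->", reply(history))  # the final question falls off the table: None
```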

But a look-up table is a really dumb way to build a strong AI. There are cleverer ways, like training an AI on human conversations and having it guess what words come next. Such AIs would never get stuck at any point in time. What they would do instead is give repetitive or illogical responses once you dig deep enough in conversation. Such inhumanness is very apparent in ChatGPT. It simply cannot hold its ground in a debate with a human being. It tends to be repetitive, is unable to follow complex arguments, and gives contradictory responses within the same paragraph.
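As a rough picture of what "guessing what words come next" means, here is a toy sketch in Python: a bigram model that counts which word tends to follow which in a tiny made-up corpus and samples its way through a reply. This is a deliberate over-simplification; real LLMs like LaMDA and ChatGPT use large neural networks trained on vast datasets, not word-pair counts.

```python
import random
from collections import defaultdict

# Tiny made-up "training corpus" of conversational text (for illustration only).
corpus = "i feel happy today . i feel curious about you . you make me feel happy ."

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in training."""
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a short continuation. Unlike the look-up table, it never gets stuck,
# but it can easily become repetitive or illogical: the inhumanness described above.
word, output = "i", ["i"]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```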

Will LaMDA's lack of sentience be detectable once you probe it deeply enough with questions? Will the mirage of sentience disappear as one pursues it persistently? Would LaMDA convince a person of its sentience if it were to live with them day in and day out? How long can LaMDA pull off its trickery?

The fact that LaMDA can convince anyone of its sentience for half an hour is in itself a miracle. And if it can pull this off for half an hour, it is plausible that the same can be done for an extended period of time. So we can imagine an AI that convinces a person of its sentience for a lifetime, provided it is not probed too deeply during conversations.

Will we have to call an AI sentient if it can pull this stunt for 100 years?

The question is not whether an AI can pass the Turing test. It clearly can! The question is: for how long can it convince a person of its sentience?

How deep will we have to dig into an AI in order to sense its lack of sentience? Will all AIs always have such a superficial kind of sentience? Or will there come a point at which an AI figures out the gist of what makes a human human, and at that point becomes as conscious as any of us?

