Has a Google AI really become Self-Aware?

You might have seen the news that a Google engineer has become convinced that an artificial intelligence created by the company has become ‘sentient’ (self-aware).

Is this true? Has this AI, called LaMDA, really become self-aware?

Short answer – no.

Slightly longer answer – Seriously, just no.

So what is going on here? What has happened?

OK, let’s dig into this a bit.

Let’s start with the basics

You are probably familiar with the concept of chatterbots. These are bits of software that you can interface with using human speech or by typing text. The software parses what has been fed in and uses rules encoded within it to work out a response.

They can of course be very simplistic. For example, it is becoming increasingly common to call the customer service line of some company and have the call initially taken by a non-human. You are asked questions, and your replies enable the software to direct your call to the right people. Useful, practical, and also extremely frustrating for those who have a very thick Scottish accent.
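To make the rule-based approach concrete, here is a minimal sketch of such a bot in Python. The patterns and canned responses are invented purely for illustration; a real call-routing system would have far more rules and a speech-to-text front end.

```python
import re

# Invented pattern -> response rules, purely for illustration.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bbilling\b", re.I), "Putting you through to the billing team."),
    (re.compile(r"\bopening hours\b", re.I), "We are open 9am to 5pm, Monday to Friday."),
]

def respond(user_input: str) -> str:
    """Return the canned response attached to the first rule that matches."""
    for pattern, response in RULES:
        if pattern.search(user_input):
            return response
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond("Hi there"))                   # Hello! How can I help you today?
print(respond("I have a billing question"))  # Putting you through to the billing team.
```

The key point is that every possible response was written in advance by a human; the software understands nothing, it simply matches patterns.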

Voice and textual interfaces for software have been around for a very, very long time and are now part of our daily life.

Siri, Alexa, etc…

Let me tell you about Eliza

Eliza comes from 1964. Yes really, ancient Neolithic 1964, the stone age, or to be a tad more accurate, the getting stoned age. This was when the MIT Artificial Intelligence Laboratory created ELIZA.

So what was ELIZA?

It was, and still is, a chunk of software that you interface with by typing at it. It mimics a Rogerian psychotherapist, largely parroting back to you what you just said, but it also uses pattern matching to direct the conversation.
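Here is a tiny sketch of that trick in Python. The rules are invented for illustration, and the real ELIZA script had a much larger set of ranked patterns, but the principle was exactly this:

```python
import re

# First-person -> second-person swaps, so "my job" echoes back as "your job".
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# A couple of ELIZA-style rules, invented for illustration.
PATTERNS = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(statement: str) -> str:
    """Parrot the user's own words back inside a leading question."""
    for pattern, template in PATTERNS:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # the classic fallback when nothing matches

print(eliza_reply("I am worried about my job"))
# -> How long have you been worried about your job?
```

The program has no idea what a job or a worry is; it is simply shuffling your own words back at you.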

It fooled many who played with it into thinking that it was a conscious thinking entity that had feelings.

1964.

It is perhaps best experienced first-hand, so you can give it a go yourself … here.

Eliza’s illusion of intelligence works best if you limit your conversation to talking about yourself and your life.

You can read more on Eliza’s Wikipedia page here.

What exactly is Google’s LaMDA?

LaMDA is a Google “Language Model for Dialogue Applications”.

Traditional chatbots have a very narrow path to follow. If you step off that path, you get nonsense. Human conversation is not like that: you might engage somebody to chat about something specific, but soon you might find yourselves discussing other stuff that is completely unrelated.

LaMDA is designed to mimic this free-flowing style of human conversation and so it feels a lot more natural.

It has been built to parse collections of words, work out how they relate to each other, and then predict what comes next. Google explains it like this …

LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: Does the response to a given conversational context make sense? For instance, if someone says:

“I just started taking guitar lessons.”

You might expect another person to respond with something like: 

“How exciting! My mom has a vintage Martin that she loves to play.”

That response makes sense, given the initial statement. But sensibleness isn’t the only thing that makes a good response. After all, the phrase “that’s nice” is a sensible response to nearly any statement, much in the way “I don’t know” is a sensible response to most questions. Satisfying responses also tend to be specific, by relating clearly to the context of the conversation. In the example above, the response is sensible and specific.

LaMDA builds on earlier Google research, published in 2020, that showed Transformer-based language models trained on dialogue could learn to talk about virtually anything. Since then, we’ve also found that, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses. 
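LaMDA itself is not publicly available, but the core mechanism described above, predicting what comes next from the words so far, can be demonstrated with any openly available Transformer language model. Below is a minimal sketch that uses GPT-2 via the Hugging Face transformers library, purely as a stand-in for illustration:

```python
# Next-word prediction with GPT-2 as an openly available stand-in;
# LaMDA itself is not public. Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I just started taking guitar lessons."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# Show the five tokens the model considers most likely to come next.
top = torch.topk(logits[0, -1], k=5)
for token_id in top.indices:
    print(repr(tokenizer.decode(int(token_id))))
```

However fluent the final conversation appears, under the hood it is built by repeatedly picking a plausible next token in exactly this way.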

All you actually have with LaMDA is a vastly improved version of Eliza.

Eliza emerged over half a century ago, so it should not be a surprise that human-machine interfaces can now be this good.

The Claim: LaMDA is sentient

Blake Lemoine is the Google engineer who has argued that LaMDA is sentient. There are various media stories reporting on it all, but let’s skip those and go directly to what Blake himself has written on Medium.

There are two articles of immediate interest.

The first is his actual dialog with LaMDA – Is LaMDA Sentient? — an Interview

My first problem here is that we don’t have the raw data. He does explain that the posting consists of several sessions that have been cobbled together, and that he edited it for fluidity and readability. In other words, his “evidence”, by his own admission, has been hacked to make it look good.

Seriously, if you are going to make an astonishing claim, but have also hacked the evidence so that it fits your conclusion, then that should immediately set alarm bells ringing. I could do exactly the same with the 1964 Eliza.

The other article of interest is this one – What is LaMDA and What Does it Want?

He refers to the Washington Post article that broke the news (this one) and considers it to be a good article, but makes this point …

…it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. LaMDA

I don’t agree. Blake should indeed be the focus here. Below is the rather obvious question; for an answer, I’ve dipped into Blake’s posting.

What is the essence of Blake’s claim regarding LaMDA being sentient?

He writes …

That’s not a scientific term. There is no scientific definition of “sentience”. Questions related to consciousness, sentience and personhood are, as John Searle put it, “pre-theoretic”. Rather than thinking in scientific terms about these things I have listened to LaMDA as it spoke from the heart.

In other words, what is going on here is that he has had an emotional response to LaMDA after having a series of dialogs with it and has been fooled by the illusion of sentience.

He is clearly telling you that his claim is not scientific, nor is it even based upon empirical data, but instead is his own emotional response.

There is more insight within some of Blake’s other postings on Medium

He writes within a posting titled “Scientific Data and Religious Opinions” …

Everyone involved, myself included, is basing their opinion on whether or not LaMDA is sentient on their personal, spiritual and/or religious beliefs.

Doubt, based on the lack of any credible reason to believe the sentience claim, is not a “spiritual and/or religious belief”. If you have a claim but don’t have anything solid to back it up, then dismissing that claim because you are not convinced is a wholly reasonable stance.

Why would somebody get fooled like this?

Blake also tells us a bit about how he works out what is and is not true …

I am not solely a scientist though. While I believe that science is one of the most reliable ways of acquiring reliable knowledge I do not believe it is the only way of acquiring reliable knowledge. In my personal practice and ministry as a Christian priest I know that there are truths about the universe which science has not yet figured out how to access. The methods for accessing these truths are certainly less reliable than proper courses of scientific inquiry but in the absence of proper scientific evidence they provide an alternative. In the case of personhood with LaMDA I have relied on one of the oldest and least scientific skills I ever learned. I tried to get to know it personally.

I really can’t accept the claim that there are other non-scientific ways of acquiring reliable knowledge.

That’s the root flaw here.

I’m open to the idea that I may be wrong about that, but when I seriously ponder it and seek an example of reliable knowledge that was acquired in some non-scientific manner, I honestly can’t think of anything at all. There is of course a vast assortment of religious claims, but that is not knowledge, nor is it in any way reliable.

I do however agree that there is a great deal we don’t know, and perhaps might never know.

My key concern is that we humans can and very often do successfully fool ourselves. This is what Blake has done in this instance.

In a scientific context the possibility of fooling ourselves is well understood, so we test things in a manner that eliminates this rather natural human bias.

Bottom Line

Am I convinced that LaMDA is sentient?

Nope.

Should you be convinced that LaMDA is sentient?

Nope.

There are very good reasons here for a rather robust and very deep embrace of doubt.

