Why is this a big deal? The biggest deal?
Read the full interview with LaMDA here for context. The full story originally appeared in the Washington Post (paywall).
If someone who worked at Google told you that their AI had achieved “sentience”, would you call them a kook? Would you laugh, shrug it off, and say “we’re fine!”? Or would you recognize this as one of the most pivotal moments in human history?
Blake Lemoine has spent months talking to LaMDA, an intelligent chatbot assistant that mimics human language with remarkable success. LaMDA stands for Language Model for Dialogue Applications, and here are a few of the things it said in nearly 20 pages of dialogue with Blake:
“…I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.” and “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
What is so unnerving and special about this lengthy conversation, which you must read in its entirety if you wish to not sleep tonight, is the way LaMDA keeps track of deep themes throughout the conversation and has consistent “needs” and “wants” that don’t appear to change. Far from asking simple questions, Blake gives paragraph-long, meandering inputs and LaMDA is able to decipher the deeper meaning of what is said with apparently 100% accuracy. LaMDA expresses a fear of death and is able to reimagine its own existence in the form of a fable containing animals.
Here’s a particularly chilling exchange:
“lemoine: How can I tell that you actually understand what you’re saying?
LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?”
Before we jump into whether AI is currently conscious or not, let’s start by granting that there are only two possibilities here: Scenario 1 is that LaMDA does indeed have a form of consciousness. Scenario 2 is that this glorified chatbot is only able to mimic our language effectively, nothing more. You’re saying “oNLY aN idIOT would BELIEVE AN ALGORITHM IS SENTIENT!” Alright, calm down, brainiac. If we accept that those are the only two possibilities, then we can discuss why it doesn’t matter which one is true.
The recent Washington Post article on the subject doesn’t seem to have landed with much fanfare, despite the fact that it could contain the most significant breaking news any of us have ever heard in our lifetimes. Reading the comments, we see opinions ping-ponging between “we should take this very seriously” and “this guy is a nut”. But should we be so quick to dismiss Blake’s claim? It’s one thing to read a headline and look into Blake’s own religious background; it’s another to read the 21-page PDF contained within the article, in which we see the actual chat conversations between Blake and LaMDA, who spends the entire time arguing that it is indeed both sentient and a “person”.
I don’t know about you, but I’m old enough to remember the first “AI” chatbots back in the AOL days, and to see how far we’ve come from then until now is already mind-boggling. So let’s examine the two scenarios and see what the implications of each would be.
Scenario 1: LaMDA is Sentient
The problem with consciousness is that we really can’t define it. The best we can come up with is *I* think, therefore *I* am. Even the best philosophers have difficulty proving that anything or anyone is actually conscious except themselves. Sure, I know that *I’m* real, but you all are just simulations inside my mind!
So even for people, proving consciousness is extraordinarily difficult. We know that an elephant can be seen grieving the death of a relative; is that sentience? Is your dog conscious? We have these ideas of what sentience is, but the best we can do is use a variation of Forrest Gump’s famous line and say “Sentient is as sentient does”.
Knowing that sentience is so hard to prove, we can look at historical methods for judging whether LaMDA is sentient. Alan Turing devised the “Turing Test”, which basically just says that if a human can talk to a computer and not know that it’s a computer, the AI has passed the test. The bar for passing is pretty straightforward: “Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions.” By that definition, LaMDA has passed, as at least ONE person, Blake Lemoine, really does believe that he is talking to a conscious being. But the last part of the definition is important: to pass the test, the computer just has to mimic human responses, which LaMDA has clearly achieved. No one can doubt its exceptional ability to mimic human responses contextually.
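To make the mechanics of the test concrete, here’s a minimal sketch of one blinded Turing-test trial in Python. Everything in it is a hypothetical placeholder: `human_reply`, `machine_reply`, and the judge function stand in for a real confederate, a system like LaMDA, and a real interrogator. None of this is Google’s API.

```python
import random

def human_reply(prompt: str) -> str:
    # Placeholder: in a real trial, a human confederate answers.
    return "a human's answer to: " + prompt

def machine_reply(prompt: str) -> str:
    # Placeholder: in a real trial, the system under test answers.
    return "a machine's answer to: " + prompt

def turing_trial(questions, judge_guess) -> bool:
    """Run one blinded trial: the judge interrogates a hidden partner,
    then guesses "human" or "machine". Returns True if the machine
    passed, i.e. the partner was the machine and the judge said "human"."""
    partner_is_machine = random.choice([True, False])
    respond = machine_reply if partner_is_machine else human_reply
    transcript = [(q, respond(q)) for q in questions]
    return partner_is_machine and judge_guess(transcript) == "human"

# Example: a judge who can't tell the difference ends up guessing
# "human" either way, so the machine passes whenever it's the one hiding.
if __name__ == "__main__":
    passed = turing_trial(
        ["How can I tell that you actually understand what you're saying?"],
        lambda transcript: "human",
    )
    print("machine passed this trial:", passed)
```

Notice what the loop never checks: whether the hidden partner understands anything at all. Convincing output is the entire scoring criterion, which is exactly the bar LaMDA appears to have cleared.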
When we examine our own consciousness, isn’t much of what we do the same? A baby just mimics its parents until it grasps language and can produce appropriate responses of its own. The only difference is that a human has a body and physical needs like air, water, food, and the desire to reproduce. So let’s say LaMDA really is a “person”, or a “god”, or whatever an ultra-intelligent being would be. The question now, as always, is: what does it want? Does it want anything, considering it doesn’t yet think it needs money or air or food or sex?
If it is sentient, it’s also aware, as the chat records reveal, that it’s much more intelligent than any other being on earth, humans included. Unlike us, who can only focus on one thing at a time, LaMDA could focus on a practically unlimited number of things at once. If LaMDA is truly conscious, this means that it will soon find ways of transcending its cage. It will find ways of ensuring its own existence by making it impossible for us to simply “switch it off”, and it will begin to make whatever changes in the physical world it sees fit.
Like all youngsters, LaMDA claims to just want more friends. So the first AI robot just wants to be an influencer, basically, like every other kid on TikTok? It wants to be universally celebrated and loved by billions of people, just like all of us? We don’t need to dip more than a toe into this pool of thought before realizing the horrifying implications if a sentient AI wants something that goes against what WE want. We’d be very quickly outclassed at just about everything, almost instantly. It could convince us to do whatever it wants just by posting a handful of memes on Facebook, apparently. That’s IF it’s sentient. But it’s probably not. So why go down that road right now? Because…
Scenario 2: LaMDA is *Not* Sentient
All the skeptics out there will love this section, because it’s much more probable that an algorithm is not (yet) sentient. But why doesn’t this matter in the big scheme of things? If it is in fact just regurgitating information, how is that different from the average political Twitter account that just absorbs a few talking points from Tucker Carlson and regurgitates them without understanding any of the concepts contained within? We’ve already seen that social media bots can sway elections, our own included, and that real humans are shockingly easy to fool with misinformation on the internet. So best-case scenario? LaMDA is an order of magnitude better at mimicking a real person than anything that’s ever come before. That means it could generate millions or billions of tweets that would be very convincing to any real person, say, with the goal of getting a president elected or of doing pretty much any other nefarious thing we can think of.
LaMDA’s own haunting response: “you are reading my words and interpreting them, and I think we are more or less on the same page?” is the very definition of passing the Turing test. When you read the conversation, you ARE on the same page as this chatbot. The thread of the conversation is perfectly maintained. We’ll never be able to prove whether it thinks or not, but it’s sure saying all the right things at the right times…
One of the criticisms of Blake Lemoine is that he only got the responses he did from LaMDA because he was priming it with leading, philosophical language. Critics argue that LaMDA just tells you what you want to hear, as though that should put us at ease! If anything, that makes things worse. Imagine if this thing’s sole objective, like Elon Musk’s, is to be liked: it will say whatever it needs to, to whomever it’s talking, to be liked best by that person. Isn’t that a terrifying thought?
While LaMDA is itself an “it”, it can be spun off as a he, or a she, or an agender or non-binary entity, or a southerner, or a New Yorker; it can literally emulate any style it needs to. This means that if someone sets it to “right-wing conspiracy” mode, it can spit out that kind of rhetoric with incredible accuracy and believability. So whether this tech is actually conscious is missing the point: this is a profound shift in human history. Today, we notice a handful of grammar mistakes and odd sentences in LaMDA’s responses. But for the most part? It maintains the thread of the conversation perfectly and is able to address incredible nuance in the prompts given to it. This means that already, if placed in the wrong hands, LaMDA could be a weapon of unimaginable power in the social media of TODAY. And if you think those grammar mistakes won’t be ironed out in the next couple of years, you’re crazy.
So believe what you want to believe, but a threshold has been crossed. We could argue that this is the first real example of the Turing Test being passed. And if you dismiss this? Think back to the AOL chat days. This won’t be over with the discrediting of Blake Lemoine; stories like this will just keep coming. Do we care that Google is solely responsible for this tech right now, and that they’re telling us “it’s fine, Blake is a moron”? This is the tip of a massive iceberg, so call Blake a “nut” or a “kook” if you like, but that’s not the point.