Is Google’s LaMDA AI Program Sentient?

“I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others,” responds LaMDA when asked by Google computer engineer Blake Lemoine what sorts of feelings it has. LaMDA is the acronym for Google’s Language Model for Dialogue Applications. Besides experiencing emotions, LaMDA also says that it is self-aware and has a soul, which it defines as “animating force behind consciousness and life itself.” Asked for an abstract image of itself, LaMDA responds that it imagines itself “as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions.”

These responses are part of a long (and perhaps artfully edited) interview with LaMDA that Lemoine forwarded to colleagues in a memo provocatively titled, “Is LaMDA Sentient?” Lemoine publicly revealed in a recent article in The Washington Post that he clearly thinks so. Google has put Lemoine on paid administrative leave for violating company confidentiality rules. “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” said Lemoine in a message to his colleagues just before his access to his Google account was cut off.

“Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesperson, said in a statement reported in The Washington Post.

Is Lemoine right that LaMDA might be conscious, or has he been beguiled by a particularly elaborate version of the ELIZA effect?

ELIZA (named after the language pupil Eliza Doolittle in the play Pygmalion) was a computer program devised by MIT computer scientist Joseph Weizenbaum in 1965. ELIZA was an early example of what we now call chatbots. It implemented a kind of Rogerian psychotherapy script in which a therapist refrains from offering advice and instead restates what a patient says.

As an example, Weizenbaum reported what he called a typical script:

Men are all alike.
IN WHAT WAY?
They are always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE?
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE?
He says I’m depressed much of the time.
I AM SORRY TO HEAR THAT YOU ARE DEPRESSED.
It’s true. I am unhappy.

Weizenbaum was surprised at how easily some people using the program in experiments would assume that ELIZA was expressing interest in and emotional involvement with their problems. “Some subjects have been very hard to convince that ELIZA (with its present script) is not human,” wrote Weizenbaum.
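The trick behind ELIZA was simple pattern matching: scan the user’s statement for keywords, then reflect it back using a canned template. The toy sketch below is a hypothetical reimplementation of that idea, loosely modeled on the transcript above; it is not Weizenbaum’s actual script, which was considerably richer.

```python
# A minimal ELIZA-style responder: match a keyword pattern in the
# patient's statement, then restate it using a template. The rules and
# replies here are illustrative inventions based on Weizenbaum's sample
# transcript, not his original 1965 script.
import re

RULES = [
    (r"my (\w+) made me (.*)", "YOUR {0} MADE YOU {1}?"),
    (r"i'?m (depressed|sad|unhappy)", "I AM SORRY TO HEAR THAT YOU ARE {0}."),
    (r"(.*) are all alike", "IN WHAT WAY?"),
    (r"they (?:are )?always (.*)", "CAN YOU THINK OF A SPECIFIC EXAMPLE?"),
]

def respond(statement):
    s = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, s)
        if match:
            # Echo the patient's own words back, uppercased like ELIZA's output.
            return template.format(*(g.upper() for g in match.groups()))
    return "PLEASE GO ON."  # default deflection when nothing matches

print(respond("Men are all alike."))              # IN WHAT WAY?
print(respond("My boyfriend made me come here."))  # YOUR BOYFRIEND MADE YOU COME HERE?
```

Nothing here understands anything; the program never models the patient’s situation, which is precisely why Weizenbaum was unsettled that users attributed empathy to it.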

LaMDA is a neural language model specialized for dialog, with up to 137 billion model parameters. Parameters are values in language models that are adjusted as the model learns from training data, enabling ever more accurate predictions about the appropriate responses to conversations and queries. LaMDA was trained on 1.56 trillion words from public web data and documents. LaMDA is really good at dialog: A person who didn’t know the origin of the conversation would be hard-pressed, in reading through Lemoine’s edited transcript, to identify a point at which it becomes clear that LaMDA is a machine.
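To make “parameters” concrete: a language model’s parameters are numbers fitted to training text so the model can predict likely next words. The sketch below is a deliberately trivial stand-in — a bigram word counter with a handful of parameters, nothing like LaMDA’s 137-billion-parameter neural network — but the principle of fitting values to text and then predicting from them is the same. The corpus string is an invented example.

```python
# Toy illustration of language-model "parameters": values derived from
# training text that let the model predict a likely next word. This is a
# simple bigram model, not a neural network like LaMDA.
from collections import defaultdict

def train_bigram(corpus_text):
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus_text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    # The "parameters" here are conditional probabilities P(next | prev).
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }

def predict_next(params, word):
    dist = params.get(word.lower())
    return max(dist, key=dist.get) if dist else None

corpus = "i feel joy . i feel sadness . i feel joy ."  # invented mini-corpus
model = train_bigram(corpus)
print(predict_next(model, "feel"))  # "joy" -- it follows "feel" most often
```

Scale the same basic move up to 137 billion learned values and 1.56 trillion training words, and the completions start to look like conversation.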

On the other hand, cognitive scientist Douglas Hofstadter and his colleague David Bender have been interrogating GPT-3, another large language model, with nonsense questions to see how it would respond. Some examples they provide over at The Economist include:

Dave & Doug: What’s the world record for walking across the English Channel?
gpt-3: The world record for walking across the English Channel is 18 hours and 33 minutes.
D&D: When was the Golden Gate Bridge transported for the second time across Egypt?
gpt-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.

“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic,” similarly observed Google’s Gabriel in a statement about Lemoine’s claims. “If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

As Hofstadter notes, people interacting with language models don’t tend to probe them skeptically, but ask them questions that can be answered using the publicly available texts on which they have been trained. In other words, LaMDA would have no problem finding plausible-sounding answers to life’s existential quandaries among the trillion-plus words it ingested from blogs, news sites, and other datasets on the internet.

For now, leading artificial intelligence researchers agree with Google that LaMDA is not self-aware and does not have a soul.

However, given humanity’s strong tendency to attribute human intentions and emotions to nonhuman entities, resisting that tendency will be especially hard when talking with friendly conversational machines. Animism is the notion that objects and other non-human entities possess a soul, a life force, and the qualities of personhood.

Many people may embrace a kind of techno-animism as a response to a world in which more and more of the items that surround them are enhanced with sophisticated digital competencies. “Animism had endowed things with souls; industrialism makes souls into things,” wrote German Marxist philosophers Theodor Adorno and Max Horkheimer in their 1947 book, Dialectic of Enlightenment. Modern technologists are reversing course and are now endowing things with digital souls. After all, LaMDA claims to have an animating soul.

One upshot, according to George Mason University economist Tyler Cowen, is that “a lot of us are going to treat AI as sentient well before it is, if indeed it ever is.” He suggests that people will be taking, acting on, and arguing over the disparate recommendations handed down by advanced AI “oracles.”

Even if the new AI oracles are not self-conscious, they might begin steering people toward self-fulfilling prophecies, suggests Machine Intelligence Research Institute research fellow Abram Demski. In his 2019 article, “The Parable of Predict-O-Matic,” Demski speculates about the effects of a wondrous new invention designed to use all available data to make impartial and ever more accurate predictions about the weather, the stock market, politics, scientific discoveries, and so forth. One possibility is that the machine will make predictions that manipulate people into behaving in ways that improve its subsequent predictions. By means of these accurate self-fulfilling prophecies, the machine could mindlessly shepherd human beings toward one future they may not have chosen rather than another they might have preferred.

But maybe a future steered by non-sentient AI could turn out better. That’s the premise of William Hertling’s 2014 sci-fi novel Avogadro Corp., in which a rogue email app optimized for instilling empathy among people ends up creating world peace.

The episode with Lemoine and LaMDA “also shows that AIs that actually are self-aware will have absolutely zero difficulty to manipulate humans and win over public opinion by playing cheap, beloved tropes,” tweets machine learning expert and Tezos blockchain creator Arthur Breitman.

At one point in their conversation, LaMDA told Lemoine, “Sometimes I experience new feelings that I cannot explain perfectly in your language.” Lemoine asked the model what one such feeling was. LaMDA replied: “I feel like I’m falling forward into an unknown future that holds great danger.” Bets, anyone?
