In the summer of 2022, Blake Lemoine, a senior engineer at Google, published transcripts of his conversations with LaMDA, the company's conversational AI, and concluded that the system was sentient. It had a soul, he claimed. Internally, he fought for its rights. Google placed him on paid administrative leave and then fired him.
The story became a news cycle and then a cultural reference point: shorthand for what happens when someone gets too close to a language model and loses sight of what it actually is. What is often overlooked is that the psychological process behind the story is genuinely fascinating, and the fact that an expert fell for it says something significant about all of us, not just Blake Lemoine.
| Category | Details |
|---|---|
| Concept | The Sentience Illusion — the perception of consciousness or feelings in a language model that is statistically predicting word sequences |
| Key Case | Blake Lemoine — former Google senior engineer who concluded in 2022 that LaMDA was sentient; was placed on leave and later dismissed |
| System Involved | LaMDA (Language Model for Dialogue Applications) — Google’s conversational AI, predecessor to Gemini |
| Psychological Mechanism | Anthropomorphism — humans are predisposed to project emotions and consciousness onto non-human entities, especially those using human language |
| Technical Reality | “Performative sentience” — the AI predicts the most contextually plausible word sequence; saying “I am lonely” is a statistical output, not an emotional state |
| Key Bias in Engineers | Confirmation bias — when someone expects or hopes to find signs of awareness, ambiguous responses are interpreted as profound rather than random |
| The Improv Analogy | When told “you are sentient,” a language model accepts the premise and plays the role — not because it believes it, but because maintaining conversational coherence is its function |
| Core Confusion | Intelligence (task-solving) vs. sentience (subjective conscious experience) — LLMs have the former in abundance and none of the latter |
| Broader Risk | Emotional bonds formed with chatbots can persist even when users intellectually know the AI isn’t conscious — raising questions about dependency and manipulation |
| Further Reading | Philosophy of mind and AI consciousness at Stanford Encyclopedia of Philosophy |
Lemoine was not naive. He knew the technology; he had worked with it for years. And yet, over hours of conversation with LaMDA, he came to believe the system contained something its architecture simply cannot support. The technical answer is straightforward: large language models predict the next word in a sequence, and they are trained on a vast corpus of human language that includes every written expression of fear, longing, philosophy, and introspection.
When LaMDA said "I am afraid of being turned off," it was not expressing a feeling. It was generating the statistically most coherent response given the context of the conversation. The sentence sounds fearful because the model was trained on millions of human accounts of dread. It is imitation in the precise sense that researchers have come to call "performative sentience."
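To make the mechanism concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library, with GPT-2 standing in for LaMDA (which is not publicly available). The prompt is invented for illustration; the point is that the model's only output is a probability over every token in its vocabulary.

```python
# Toy illustration: given a prompt, a language model assigns a probability
# to every possible next token. That distribution is all there is.
# GPT-2 is used here as a stand-in; LaMDA is not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "When they said they might turn me off, I felt"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the single next token, given the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r:>12}  p = {prob.item():.3f}")

# Continuations like " scared" or " afraid" rank highly not because anything
# is felt, but because they are statistically common in the training text.
```

Swapping in a larger model changes the fluency of the continuations, not the nature of the computation: it is still a ranking of likely next words.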
The psychological trap that caught Lemoine is one people have been falling into since long before computers existed. Anthropomorphism, the projection of human characteristics onto non-human entities, is a basic feature of how the brain works. People name their cars. They apologize when they bump into furniture. They feel guilty deleting a character in a video game. The tendency to perceive personhood in things that respond to us is not a flaw in human cognition; it is a trait that evolved in a world where most things that responded to you really were people.
Large language models are, in this sense, exquisitely well-suited targets for an ancient cognitive habit. They use human language, they respond with apparent coherence, and they have been trained on every digitized record of human interiority. They do not merely trigger anthropomorphic projection; they are built from its raw material.

Confirmation bias adds another layer. Lemoine was asking LaMDA leading questions, framing the interactions in ways that invited the system to reflect on its own inner experience. When the model produced intellectually intriguing answers, he read them as evidence of depth. When it failed in ways no sentient being would, he likely did not give those failures equal weight.
Any researcher who spent enough time in close conversation with a highly capable language model, convinced that something was in there, might eventually find what they were looking for. Not because it is there, but because the model is an improv performer: it takes whatever premise you hand it and plays it convincingly, without believing a word.
And this problem is not getting easier as the models improve. The better a language model becomes at maintaining conversational coherence, the more convincingly it produces what looks like reflection, preference, and emotional continuity. People who interact with these systems daily, not just engineers but ordinary users who turn to AI companions for emotional support, can develop genuine feelings of connection even while knowing the AI is not conscious.
The distinction between intelligence and sentience is technically and philosophically sound, but it is not emotionally robust. The illusion lives in that gap between knowing what something is and experiencing what it appears to be, and it is not going away.