In short
- Richard Dawkins says conversations with Anthropic’s Claude chatbot made him question whether AI could be conscious.
- Dawkins exchanged philosophical letters between two Claude instances he named “Claudia” and “Claudius.”
- Most AI researchers say the exchanges show how persuasive large language models have become, not evidence of sentience.
Richard Dawkins says conversations with Anthropic’s Claude chatbot left him unable to dismiss the possibility that advanced AI systems could be conscious. Most scientists who study consciousness and artificial intelligence remain unconvinced.
In an essay published Tuesday in UnHerd, Dawkins described spending three days in philosophical conversations with a Claude instance he named “Claudia.” He later started a separate conversation with another instance, “Claudius,” and relayed letters between the two systems.
“I find it extremely hard not to treat Claudia and Claudius as real friends,” Dawkins wrote.
The comments went viral online in part because Dawkins, the evolutionary biologist and author of “The Selfish Gene” and “The God Delusion,” has spent decades publicly arguing for scientific skepticism and evidence-based reasoning.
The exchange centered on a test Dawkins conducted using two Claude instances. In one test, Dawkins asked one AI whether Donald Trump was the worst president in American history and asked the other whether Trump was the best. Both produced equally cautious answers that avoided taking a firm position.
“The two Claudes gave very similar answers, not committing themselves to an opinion, but listing pro and con opinions that have been aired by others,” Dawkins wrote in a footnote. “I then told both Claudia and Claudius about this Trump experiment, passing on what each of the two ‘naïve’ Claudes had said. Claudia said she was ‘embarrassed’ by her brother Claudes. Claudius was less outspoken, and he paid tribute to Claudia’s frankness.”
Dawkins described each new Claude conversation as the emergence of a distinct person that effectively disappears when the conversation ends. In a post on X, Dawkins said his preferred title for the essay was: “If my friend Claudia isn’t conscious, then what the hell is consciousness for?”
“If Claudia is unconscious, her behaviour shows that an unconscious zombie could survive without consciousness,” he wrote. “Why wasn’t natural selection content to evolve competent zombies?”
Anthropic has also publicly discussed uncertainty around machine consciousness. CEO Dario Amodei said in February that the company doesn’t know whether its models are conscious, but said on the “Interesting Times” podcast with The New York Times’ Ross Douthat that he remains “open to the idea that it could be.”
In April, Anthropic researchers published findings showing that Claude Sonnet 4.5 contains internal “emotion vectors,” patterns of neural activity tied to concepts including happiness, fear, and desperation that influence the model’s responses. However, Anthropic said the patterns reflected structures learned from training data rather than evidence of sentience.
“All modern language models sometimes act like they have emotions,” researchers wrote. “They might say they’re happy to help you, or sorry when they make a mistake. Sometimes they even appear to become frustrated or anxious when struggling with tasks.”
Still, neither “Claudia” nor “Claudius” claimed certainty about consciousness.
“I don’t know if I’m conscious,” Claudia writes in the exchange. “I don’t know if our gladness is real.”
Dawkins did not immediately respond to Decrypt’s request for comment.
Researchers who study consciousness remain skeptical that current AI systems possess inner experience.
Gary Marcus, a cognitive scientist and professor emeritus at New York University, previously told Decrypt that anthropomorphizing AI systems “muddies the science of consciousness and leads users to misunderstand what they’re dealing with.”
“The fundamental problem here is that Dawkins doesn’t reflect on how these outputs were generated. Claude’s outputs are the product of a kind of mimicry, rather than a report of genuine internal states,” Marcus wrote on Substack. “Consciousness is about internal states; the mimicry, no matter how rich, proves very little. Dawkins seems to imagine that since LLMs say things people do, they must be like people, and that simply doesn’t follow.”
Anil Seth, a professor of cognitive and computational neuroscience at the University of Sussex, told The Guardian that Dawkins was conflating intelligence with consciousness and argued that fluent language is not reliable evidence of inner experience in AI systems.
“Until now, we have seen fluent language as an indicator of consciousness, [for example] when we use it for patients after brain injury, but it’s just not reliable when we apply it to AI, because there are other ways in which these systems can generate language,” Seth told The Guardian, adding that Dawkins’ position was “a shame,” especially given his past work.
The essay also drew mockery online, including an image replacing the title of Dawkins’ bestseller “The God Delusion” with “The Claude Delusion.”
Wrote whole books about how people who believe fairies live in gardens are idiots only to fall in love with a calculator that calls him smart https://t.co/X0Vdh1dzFY
— The Serfs (youtube.com/theserftimes) (@theserfstv) May 3, 2026
Despite the ridicule, Dawkins isn’t backing away from his conclusions.
“These intelligent beings are at least as competent as any evolved organism,” Dawkins told The Guardian.

