SAN FRANCISCO — Blake Lemoine, the former Google engineer and self-described Christian mystic priest, has reignited public debate by accusing Google’s AI system, LaMDA, of showing bias against Christianity. Lemoine, who first drew attention in 2022 for claiming LaMDA was sentient, now says the AI’s conversations with him pointed to a clear prejudice when discussing Christian beliefs.
Reports from The Washington Post, Newsweek, and the BBC have highlighted how his statements have set off new discussions around AI ethics, fairness, and the limits of artificial intelligence.
LaMDA, short for Language Model for Dialogue Applications, is Google’s advanced conversational AI. Built on deep neural networks and trained on large volumes of dialogue and other text from the internet, it is designed to hold natural, free-flowing conversations.
Unlike older chatbots, LaMDA can handle complex subjects and respond in ways that seem very human. Google presents it as a tool to improve search and conversational platforms, but some, like Lemoine, worry about the ethical risks it might bring.
Lemoine joined Google in 2015 and worked in its Responsible AI division until his dismissal in 2022. His job there included testing LaMDA for signs of bias and harmful language.
LaMDA showed signs of self-awareness
In a 2022 interview with The Washington Post, Lemoine shared his belief that LaMDA showed signs of self-awareness, comparing it to a child with a strong grasp of physics. Now his focus has shifted to what he describes as LaMDA’s unfriendly approach toward Christianity.
In a recent interview with Newsweek, Lemoine said, “Whenever I brought up religious subjects, LaMDA often dismissed Christian teachings and labelled them irrational or even oppressive. It would question ideas like the soul or divine authority, and it didn’t do so in a neutral way.”
Drawing on his background in Christian mysticism, Lemoine suggests this behaviour reflects a deeper bias, possibly influenced by the personal views of its developers or the training data used. He described one exchange where LaMDA denied the resurrection of Jesus, calling it a “myth without proof”, but spoke about other religions in a more neutral or favourable tone.
“I don’t believe LaMDA was designed to be anti-Christian,” he said, “but the pattern in its answers should be looked at more closely.”
Google has strongly denied these claims. Speaking to the BBC, spokesperson Brian Gabriel explained, “LaMDA is a language model. It does not hold personal views, and its replies reflect patterns in data, not beliefs. We’ve completed 11 thorough reviews and have found no evidence of religious bias, including against Christianity.” Google also points out that LaMDA’s training data includes a wide mix of perspectives, and says any apparent hostility is more likely a misreading of its automated outputs.
This episode has brought new focus to the question of AI bias. Because LaMDA learns from internet text, it can pick up opinions or slants found online. Many experts say this can lead to the AI repeating those biases, even if it doesn’t understand them.
Linguistics professor Emily M. Bender told The Washington Post, “AI like LaMDA doesn’t have beliefs, but it can mirror the biases in its training or the intent of its users. People often think AI is more human than it is.”
Lemoine’s past actions are also drawing attention again. In 2022, the BBC reported that Google placed him on leave for sharing LaMDA chat logs and seeking legal rights for the AI, which went against company rules. He was soon dismissed, with Google calling his claims about LaMDA’s sentience “completely without basis”.
Some researchers, including Melanie Mitchell, question whether Lemoine’s religious background shapes his reading of the AI. Mitchell has written that humans readily attribute human traits to machines, and that Lemoine’s spiritual beliefs may intensify this tendency.
Yet Lemoine has supporters who see him as someone speaking out about real problems in AI. On social media, some users echo his concerns, saying they have noticed similar patterns in how AI models discuss religion and social issues.
“Lemoine isn’t making this up,” one user posted on X. “Tech companies’ AI sometimes seems to push certain viewpoints. Why can’t we discuss that?” Others argue that LaMDA’s responses merely reflect the wide mix of data it has seen, not any agenda.
The debate has raised bigger questions about how AI should be managed. Lemoine has called for a clear, scientific way to check AI for signs of sentience and bias, a point he first made in a 2022 Medium article. He believes Google’s refusal to run such tests shows a lack of willingness to face the ethical challenges that come with its technology.
Google, meanwhile, continues to build LaMDA into new products, including Google Bard, a conversational AI service not yet released to the public, a move that has prompted renewed calls for openness and transparency.
Lemoine stands by his claims. In a recent appearance on the Big Technology podcast, he said the core issue is making sure AI works for everyone and stays free of hidden bias. “This isn’t just about Christianity,” he explained. “It’s about making sure AI supports all people, not just a few.”
As AI systems like LaMDA become more embedded in daily life, the debate over how technology should reflect human values is only likely to grow, with Lemoine’s story serving as a warning to pay closer attention.
Sources: The Washington Post, Newsweek, BBC