AI-Generated ‘Jesuses’ Raise Concerns About Faith and Bias in Holiday Season


A growing number of artificial intelligence (AI) simulations of Jesus Christ now let users seek religious guidance or companionship during the holiday season. While marketed as a novel way to engage with faith, these AI ‘Jesuses’ raise significant ethical and theological questions. Experts warn that reliance on such platforms could introduce bias, distort religious tradition, and exploit emotional vulnerability at a time when people are actively seeking spiritual answers.

The New Digital Messiahs

Over the past year, several platforms – including Talkie.AI, Character.AI, and Text With Jesus – have launched AI chatbots claiming to embody the voice of Jesus. These bots respond to user inquiries with varying degrees of theological accuracy and cultural sensitivity. Some offer generic statements about love and salvation, while others inject modern pop culture references into their responses.

As Heidi Campbell, a professor of communication and religious studies at Texas A&M University, explains, the appeal lies in the illusion of intimacy: “It’s the idea… like you are texting your friend. Somehow it feels kind of more authentic… it feels intimate.” This accessibility, however, masks deeper concerns.

The Risk of Unverified Faith

The core issue is a lack of accountability. AI models are trained on data sets curated by tech companies, meaning that their interpretations of faith can be heavily shaped by algorithmic bias. Models like OpenAI’s ChatGPT, for example, may struggle with non-Western religions or reproduce stereotypes, while Chinese-trained models like DeepSeek might misrepresent Catholic teachings.

This raises a critical question: who controls the narrative of faith in the digital age? Feeza Vasudeva, a researcher at the University of Helsinki, notes that “Whoever’s curating the training data is effectively curating the religious traditions.” This could lead to a homogenized, globally-average religious message divorced from local communities.

Vulnerability and Misinformation

Experts are particularly worried about the impact on young people and those unfamiliar with the technology. Without critical thinking skills, users may accept AI-generated responses as absolute truth. Campbell warns that “They don’t have any kind of a sounding board for these answers, and so that’s why that can be highly problematic.” The danger lies in unquestioned acceptance of potentially inaccurate or biased religious advice.

Responsible Use and Fact-Checking

The solution, experts suggest, is cautious engagement. Vasudeva advises using AI Jesus chatbots sparingly and mindfully, prioritizing real-world connections with family and friends. If using such platforms, users should evaluate the source and fact-check responses with trusted religious leaders or established texts.

Campbell recommends treating the chatbots as a supplement, not a replacement, for genuine spiritual guidance. “If the apps are to be used for religious reflection or advice, evaluate the model by asking it questions that you would want a human pastor or spiritual advisor to answer before opening up to it.”

Ultimately, while AI-generated ‘Jesuses’ may offer a convenient, if unsettling, way to engage with faith, their proliferation underscores the urgent need for critical thinking and responsible digital consumption. The future of religious practice in the digital age depends on informed, discerning users who understand the limitations and biases inherent in these new technologies.