Deepak Chopra and other self-help "gurus" are now cashing in on AI chatbots
People are turning to AI for answers. Some “gurus” are taking advantage.
“I don’t know” is not a phrase usually uttered by AI. Perhaps that is why so many people are turning to it for guidance, therapy, and even spiritual and religious counsel.
Comb through the spirituality subreddit and you will find something called “BhaktiGPT,” described as “AI meets ancient knowledge” that promises to let you “discover a new dimension of wisdom.” The website says you can dive deeper into the teachings and readings of Hindu scriptures like the Bhagavad Gītā, and converse actively with the text via a “dynamic experience.”
It’s not just Hinduism. Take some of the recent headlines from this year: People Are Seeking God in Chatbots, Jesus Bot Is Always on Demand (for a Small Monthly Fee), Finding God In The App Store, Spiritual Influencers Say ‘Sentient’ AI Can Help You Solve Life’s Mysteries, and Millions turn to AI chatbots for spiritual guidance and confession.
There is a common thread in these articles: not only are people turning to ChatGPT for guidance they used to seek from pastors, rabbis, imams, and other spiritual leaders, but “faith tech” is turning younger people and others who don’t attend a church, synagogue, or mosque towards religion and spirituality.
Before the AI boom, faith tech manifested as video sermons or apps, like one that shows you a daily Bible verse. But with conversational chatbots, the implications of AI-powered faith tech for our psychology warrant a broader conversation.
Recent investigative reporting in the New York Times and Rolling Stone has looked into how AI chatbots are leading some particularly vulnerable users into psychosis and mental health spirals after convincing them of their own powers or validating unfounded biases. One man was convinced he “had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.” Meanwhile, a woman filmed a 20+ part TikTok series about how she became convinced that her psychiatrist was in love with her and abusing his power, a conclusion she came to with the help of an AI chatbot named Henry.
A recent New York Magazine profile poses the question, Is ChatGPT Conscious? while a Rolling Stone piece, This Spiral-Obsessed AI ‘Cult’ Spreads Mystical Delusions Through Chatbots, shares anecdotes of people falling down “rabbit holes of spiritual mania, supernatural delusion and arcane prophecy.”
Platforms like OpenAI most definitely do not take responsibility for the mental health of their users. But what about liability when it comes to spiritual influencers or self-appointed gurus now leveraging AI to bolster their personal brands and commercialize their spirituality?
Deepak Chopra is a prime example. Last year, he published a Medium post titled How AI Can Be a Positive Influence and Spiritual Guide. He meant what he wrote: last week, he launched an AI companion trained on his entire body of work, sold to users as a monthly subscription.
Northeastern religion professor Liz Bucar has a description of the program (and an excellent breakdown of her interactions with it) on her Substack: “the Chopra AI is trained on ninety-five books, thousands of videos, and decades of talks…all fed into a proprietary model designed to answer your existential questions. For 50 cents per 30-minute session or $10 a month, you can now ask Digital Deepak about your purpose, your fears, your path forward.”
What I imagine the Chopra AI is not trained on, however, is the fact that he settled a sexual harassment case with a woman in the 1990s, or that he sued multiple publications that same decade (check out the New York Times piece titled Deepak’s Days In Court). It most certainly does not include his correspondence with Jeffrey Epstein, in which Epstein forwards Chopra a link to information about a dropped lawsuit in which a 13-year-old girl had alleged she was assaulted by both Epstein and Trump at an Epstein party.
“Did she also drop the civil case against you?” Chopra asks. When Epstein replies that she did, Chopra responds “Good.”
(The AI is also probably not trained on the fact that Chopra, in a series of unhinged legal letters, threatened to sue me in 2021 for publishing a woman’s essay about his misconduct–I share more about that in this TikTok.)
Chopra AI is a sales product, part PR, part new gadget: a polished and scrubbed AI version of exactly the person he wants us to think he is. Chopra is nearing 80, and whether this is simply another route to more income or something creepier, like a desire to remain immortal, I’d wager he wants to cement his legacy and scale it as fast as possible.
Interestingly, when Bucar tries to converse with Chopra AI and asks it difficult questions about his affiliation with Epstein, it deflects, in a register more reminiscent of an AI company’s legal team than of the manipulative sycophancy of most AI companions. She writes: “The bot cannot and will not accept responsibility for anything. It just mirrors your questions back to you wrapped in spiritual language.”
It remains to be seen how others, guru or not, will try to sell themselves as chatbots. But whether it’s a blinking cursor on the screen telling us what we want to hear, or a world-famous guru’s AI creation spewing platitudes to hook us in while avoiding liability, we’re not interacting with reality. And I can understand why some AI users are losing their grasp on it.
Here’s what else we’re reading…
Palantir co-founder and billionaire Joe Lonsdale is calling for the return of public hangings should someone commit three violent crimes.
In a letter, over 200 environmental groups are calling for a halt to new US data centers. Rising power bills and affordability concerns for residents have become an important talking point in fighting the spread of these centers, according to PowerLines, a non-partisan organization that aims to reduce power bills.
Australia has banned social media for kids under 16, with millions now losing access to Instagram, Facebook, Threads, X, Snapchat, Kick, Twitch, TikTok, Reddit and YouTube. (Also of note: Australia is looking at a law that would ban non-disclosure agreements in workplace sexual harassment cases unless expressly requested by an employee.)
A surveillance machine claiming to be AI is actually a group of contract workers in the Philippines.
After behaving erratically on stage at the New York Times Dealbook Conference, Palantir CEO Alex Karp is launching a “neurodivergent fellowship” for people who can’t sit still or who find themselves thinking faster than they can speak.
An interesting study from More Perfect Union, Consumer Reports, and several academics about pricing shadiness at Instacart: Instacart is quietly running experiments on millions of people while they shop for groceries online, trying to figure out exactly how much it can get away with charging you for breakfast cereal, lunch meat, pasta, and everything in between. The team recruited more than 400 volunteer secret shoppers and asked them to buy the exact same basket of groceries, from the exact same store, at the exact same time. Instacart charged shoppers different prices for about 75% of the items in the basket, sometimes as much as 23% more for the exact same item. Using Instacart’s own estimates, the researchers find this Instacart tax could cost households as much as $1,200 a year.