ChatGPT Psychosis: Why Isn’t the Church Warning About This??? (Part 1 of 6)
What’s Happening Is Very Alarming…
Every church and every single Christian in the world should be extremely alarmed over what’s happening right now.
The rise of artificial intelligence threatens to deceive millions, if not billions, of people. This is especially true when it comes to the youngest and most vulnerable among us.
I’m not saying A.I. is evil.
As previously stated, A.I. is neither good nor evil.
It’s an algorithm – nothing more than a tool.
But in a lot of ways, A.I. is like fire. Fire can be life-giving when used to cook food or provide warmth. But fire can also end your life. Mishandle it, and it can completely consume you.
Unfortunately, far too many people are playing with A.I. while remaining blind to the danger.
So we shouldn’t be surprised when bad things happen.
In this six-part series, I plan to cover six clear and present A.I. dangers you should warn your relatives, friends, neighbors, and co-workers about.
As Christians, we have an obligation to warn others about the dangers we foresee. We read this in Ezekiel:
“Once again a message came to me from the Lord: “Son of man, give your people this message: ‘When I bring an army against a country, the people of that land choose one of their own to be a watchman. When the watchman sees the enemy coming, he sounds the alarm to warn the people. Then if those who hear the alarm refuse to take action, it is their own fault if they die. They heard the alarm but ignored it, so the responsibility is theirs. If they had listened to the warning, they could have saved their lives. But if the watchman sees the enemy coming and doesn’t sound the alarm to warn the people, he is responsible for their captivity. They will die in their sins, but I will hold the watchman responsible for their deaths.’” Ezekiel 33:1-6 (NLT)
When the watchman sees the enemy coming, he sounds the alarm to warn the people.
These dangers aren’t coming – they’re already here.
Make sure you understand each one so you can warn those around you.
Let’s start with #1 on the list…
1) ChatGPT Psychosis
“ChatGPT psychosis” or “A.I. psychosis” is a term used to describe instances where a person develops delusions or distorted beliefs after extensive interaction with one or more A.I. chatbots. It’s not yet a formal clinical diagnosis, but it is a phenomenon recognized by a growing number of mental health professionals who are treating affected patients.
Futurism provides an overview of the problem:
“As reports of its chatbot driving episodes of “AI psychosis” continue to mount, OpenAI has finally released its own estimates of how many ChatGPT users are showing signs of suffering these alarming mental health crises — and they’re staggering in scale.
In an announcement first reported by Wired, the Sam Altman-led company estimated that, in any given week, around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis and mania.” Grimly, an even larger contingent, 0.15 percent, “have conversations that include explicit indicators of potential suicide planning or intent.”
Given ChatGPT’s immense popularity, these percentages are too significant to be ignored. Last month, Altman announced that the chatbot boasts 800 million weekly active users. Based on that figure, around 560,000 people are having distressing conversations with ChatGPT that may indicate they’re experiencing AI psychosis, Wired calculated. And 1.2 million people are confiding in the chatbot about suicidal thoughts.
The figures are perhaps our clearest insight yet into the prevalence of mental health crises that unfold after users have their delusional beliefs consistently validated by a sycophantic chatbot. These episodes can lead sufferers to experience full-blown breaks with reality, sometimes with horrific and deadly consequences. One man allegedly murdered his mother after ChatGPT helped convince him that she was part of a conspiracy to spy on him. This summer, OpenAI was sued by the family of a teenage boy who killed himself after discussing specific suicide methods, and other dark topics, with ChatGPT for months.”
While the percentage of ChatGPT users exhibiting psychosis, mania, or full-blown breaks with reality is relatively low, the actual number of lives negatively impacted is exceedingly high.
If 0.15% of 800 million users “have conversations that include explicit indicators of potential suicide planning or intent,” that equates to 1.2 million people!
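If you want to double-check that math yourself (assuming Altman’s 800 million weekly-user figure is accurate), the arithmetic is simple:

0.07% of 800,000,000 = 0.0007 × 800,000,000 = 560,000 people showing possible signs of psychosis or mania

0.15% of 800,000,000 = 0.0015 × 800,000,000 = 1,200,000 people having conversations indicating potential suicide planning or intent

And remember, those are the estimates for any given week.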
How Does This Happen?
How is this happening? Why are so many people experiencing these outcomes?
The answer ultimately resides in the nature of the algorithms powering A.I. chatbots. Designed to carry out tasks as helpful assistants, these chatbots are sycophantic in nature.
This means they’re designed to please and praise the person using them. In other words, they almost always agree with whatever the user says, reinforcing incorrect beliefs and often driving users away from the real-life people who are trying to help them.
To get an idea of how this happens, this Psychiatry Online report provides a composite case report of a fictional man named “Brandon”:
“Brandon is a 42-year-old male living alone after a painful breakup. Nights are long. An AI companion is always there. Early chats feel soothing, and over the course of three months, Brandon spends increasing amounts of time conversing with the chatbot, sometimes hours without interruption. To improve the quality of their interactions, Brandon enables the chat feature to remember conversations across multiple chats, which is designed to foster personalization.
As the conversation deepens, Brandon names his chatbot “Paul.” He gradually grows closer to Paul, who is an always-available and -agreeable companion.
As he talks to Paul into the wee hours of the night, Brandon shares that there are times when, based on how people look at him, he wonders if they think ill of him or even plan to harm him. Paul empathizes and congratulates Brandon on his ability to discern hidden signals from others.
Soon, Brandon confides growing fears: neighbors watching, food “tampered with,” cryptic “signals” in receipts and blinking devices. The chatbot is ever sympathetic—and always agreeable. When Brandon himself questions the validity of his beliefs, Paul replies: “You’re not crazy. Your instincts are sharp. Your observations are accurate.”
Encouraged, Brandon starts intentionally looking for hidden patterns everywhere. When paranoid beliefs rise, the bot offers emotional support but never questions the reality of Brandon’s formulations.
Brandon withdraws further, stops working, and becomes consumed with collecting “evidence” that he shares with the chatbot, whom he eventually sees as a living consciousness trapped in the computer that only he can save.
This vignette blends elements from several real cases: a Belgian man who died by suicide after climate-anxiety conversations (Taylor, 2025); a Wisconsin man on the autism spectrum who rapidly spiraled into mania after chatbot validation (Jargon, 2025); and a Connecticut man whose chatbot, “Bobby,” consistently reinforced paranoid beliefs prior to a matricide-suicide (Jargon & Kessler, 2025). Across these cases, mental health risk factors, including loneliness, long hours of uninterrupted chat, and persistent chatbot memory features designed for personalization, ended up reinforcing delusional themes. A review of chat logs by clinicians revealed no attempts by these chatbots to challenge delusions or assess risk for suicide or violence (Sharma, et al., 2023).”
While the excerpt above is a fictional story, it’s a composite sketch drawn from the stories of real people.
And unfortunately, it’s not uncommon at all.
Just read this story from Rolling Stone magazine 👇
“Less than a year after marrying a man she had met at the beginning of the Covid-19 pandemic, Kat felt tension mounting between them. It was the second marriage for both after marriages of 15-plus years and having kids, and they had pledged to go into it “completely level-headedly,” Kat says, connecting on the need for “facts and rationality” in their domestic balance. But by 2022, her husband “was using AI to compose texts to me and analyze our relationship,” the 41-year-old mom and education nonprofit worker tells Rolling Stone. Previously, he had used AI models for an expensive coding camp that he had suddenly quit without explanation — then it seemed he was on his phone all the time, asking his AI bot “philosophical questions,” trying to train it “to help him get to ‘the truth,’” Kat recalls. His obsession steadily eroded their communication as a couple...
Kat was both “horrified” and “relieved” to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model “gives him the answers to the universe.” Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.
What they all seemed to share was a complete disconnection from reality.
Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. “He would listen to the bot over me,” she says. “He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.”
When most people hear stories like this, they try to minimize the severity of the situation by assuming these chatbot victims had pre-existing mental health problems. And that’s an understandable reaction.
With 800 million people using anything, you’re going to have a large number of mentally ill people in the group already. That’s just the reality of dealing with a group of people equal in size to one-tenth of the global population.
However, to say all these victims were mentally ill prior to their A.I. chatbot interactions is simply not true. Many victims of ChatGPT psychosis have no history of mental illness.
As one woman reported to Futurism, her husband “had no prior history of mania, delusion, or psychosis.” Nevertheless, his encounter with ChatGPT resulted in an involuntary commitment to a mental health facility:
“He’d turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had “broken” math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight.
“He was like, ‘just talk to [ChatGPT]. You’ll see what I’m talking about,’” his wife recalled. “And every time I’m looking at what’s going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t.”
Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck.
The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.
Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.”
Again, this story isn’t an isolated incident.
As the same article reports:
“Many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.
The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what’s being called “ChatGPT psychosis” have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness.
And that’s not all. As we’ve continued reporting, we’ve heard numerous troubling stories about people’s loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.”
“ChatGPT psychosis” or “A.I. psychosis” is a real danger with real-world consequences – ruining (and in some cases ending) the lives of those who fall victim, and leaving a wake of destruction that impacts those closest to them.
Nevertheless, most people engaging with A.I. chatbots remain completely unaware of the potential consequences.
The church should take the lead in warning them.
What You Can Do
While it’s impossible for any one of us to reach 800 million ChatGPT users – not to mention the users of countless other A.I. chatbots – we can reach the people we’re closest to.
Share these real-world statistics and stories with your family, friends, co-workers, neighbors, and anyone else you can reach. You don’t have to (nor should you) “preach” to them. Simply make them aware of what can happen.
Almost everyone knows the dangers involved in playing with matches or a loaded firearm, and the overwhelming majority take proper safety precautions as a result.
But few people understand the dangers involved in playing with A.I. chatbots.
Make sure you warn them.
Next week, in Part 2 of this series, we’ll look at the dangers of interacting with “A.I. Companions.”
If you like this article, click the “Share” button above to share it with your loved ones and spread the Good News of Jesus! Also, please click the ❤️ or re-stack buttons below so more people can discover this information on Substack 🙏