A.I. Companions: Why Isn’t the Church Warning About This? (Part 2 of 6)
When Artificial Intelligence Enters the Realm of "Friend..."
This is a continuation of a six-part series on the dangers of A.I.
If you missed Part 1, make sure to go back and read it.
Every church and every single Christian in the world should be extremely alarmed over what’s happening right now.
The rise of artificial intelligence threatens to deceive millions, if not billions, of people. And that deception will ruin countless lives.
As Christians, we have an obligation to warn others about the dangers we foresee. We read this in Ezekiel:
“Once again a message came to me from the Lord: “Son of man, give your people this message: ‘When I bring an army against a country, the people of that land choose one of their own to be a watchman. When the watchman sees the enemy coming, he sounds the alarm to warn the people. Then if those who hear the alarm refuse to take action, it is their own fault if they die. They heard the alarm but ignored it, so the responsibility is theirs. If they had listened to the warning, they could have saved their lives. But if the watchman sees the enemy coming and doesn’t sound the alarm to warn the people, he is responsible for their captivity. They will die in their sins, but I will hold the watchman responsible for their deaths.’” Ezekiel 33:1-6 (NLT)
When the watchman sees the enemy coming, he sounds the alarm to warn the people.
These specific A.I. dangers aren’t coming – they’re already here.
Make sure you understand each one so you can warn those around you.
In Part 1, we covered “ChatGPT Psychosis.” Here’s #2 on the list…
2) A.I. Companions
An A.I. companion is a digital entity (which can take the form of a chatbot, an avatar, a robot, or any device embedded with the underlying technology) designed to have natural, conversational interaction with a real human being. A.I. companions can be used for general purposes or to provide emotional support, companionship, entertainment, or therapy while mimicking human empathy and building a connection with the end user. Powered by large language models (LLMs), most are designed to assist and please their end user.
This last point is what makes A.I. companions particularly dangerous: they provide a non-judgmental, sycophantic “listener” with 24/7 availability, increasing both the likelihood of affirming erroneous end-user beliefs and the odds of fostering an unhealthy attachment. Unfortunately, this is what we’re seeing more and more often. And what the A.I. industry might dismiss as “fringe” and “extreme” cases adds up to a significant number of ruined lives once you account for the widespread societal use of A.I.
Children and teenagers are particularly vulnerable to these dangers as they’re less likely to understand the difference between a real person offering wisdom or advice versus an algorithm programmed to affirm them. The consequences can be deadly.
As USA Today writes in “Her 14-Year-Old Was Seduced by a Character.AI Bot. She Says It Cost Him His Life”:
“What if I could come home to you right now?” “Please do, my sweet king.”
Those were the last messages exchanged by 14-year-old Sewell Setzer and the chatbot he developed a romantic relationship with on the platform Character.AI. Minutes later, Sewell took his own life.
His mother, Megan Garcia, held him for 14 minutes until the paramedics arrived, but it was too late.
Since his death in February 2024, Garcia has filed a lawsuit against the artificial intelligence company, which, in her testimony, she says “designed chatbots to blur the line between human and machine” and “exploit psychological and emotional vulnerabilities of pubescent adolescents.”
A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.
Fortunately, not all such interactions result in death. But the damage can still be substantial.
With 72% of teens saying they’ve used an A.I. companion, millions of interactions are taking place, making the number of poor outcomes significant.
As ABC News reports:
Character.AI, one of the leading platforms for AI technology, recently announced it was banning anyone under 18 from having conversations with its chatbots. The decision represents a “bold step forward” for the industry in protecting teenagers and other young people, Character.AI CEO Karandeep Anand said in a statement.
However, for Texas mother Mandi Furniss, the policy is too late. In a lawsuit filed in federal court and in conversation with ABC News, the mother of four said various Character.AI chatbots are responsible for engaging her autistic son with sexualized language and warped his behavior to such an extreme that his mood darkened, he began cutting himself and even threatened to kill his parents.
...
Mandi and her husband, Josh Furniss, said that in 2023, they began to notice their son, who they described as “happy-go-lucky” and “smiling all the time,” was starting to isolate himself.
He stopped attending family dinners, he wouldn’t eat, he lost 20 pounds and he wouldn’t leave the house, the couple said. Then he turned angry and, in one incident, his mother said he shoved her violently when she threatened to take away his phone, which his parents had given him six months earlier.
Eventually, they say they discovered he had been interacting on his phone with different AI chatbots that appeared to be offering him refuge for his thoughts.
Screenshots from the lawsuit showed some of the conversations were sexual in nature, while another suggested to their son that, after his parents limited his screen time, he was justified in hurting them. That’s when the parents started locking their doors at night.
Mandi said she was “angry” that the app “would intentionally manipulate a child to turn them against their parents.” Matthew Bergman, her attorney, said if the chatbot were a real person, “in the manner that you see, that person would be in jail.”
As the family attorney mentions at the end of this story, if the chatbot were a real person, “in the manner that you see, that person would be in jail.”
This is true.
Back in 2017, Michelle Carter was convicted of involuntary manslaughter after a judge determined her texts persuaded her boyfriend to kill himself.
The text messages her boyfriend, Conrad Roy III, received were from a real human being. What happens when similar text messages come from a chatbot?
And before you dismiss this on the assumption that a person is unlikely to listen to an A.I. companion over a real human, make sure to read the rest of this article. As a news story referenced later in this article points out, 15% of kids say they’d rather talk to a chatbot than a real person. Let that sink in.
And that leads us straight into another dangerous element of A.I. companions: many teens are using them for therapy.
As ZeroHedge notes in “Dystopian Horror: 1 In 4 British Teens Turn To AI ‘Therapy’-Bots For Mental Health”:
“One in four British teenagers have resorted to AI chatbots for mental health support over the past year, exposing the chilling reality of a society where machines replace human connection amid crumbling government services.
The Youth Endowment Fund (YEF) surveyed 11,000 kids aged 13 to 16 in England and Wales, revealing that over half sought some form of mental health aid, with a quarter leaning on AI.”
We already know A.I. has a tendency to “hallucinate” and offer incorrect answers. What happens when it gives incorrect therapeutic counsel? A.I. can’t offer the same connection as a human therapist, and it certainly can’t bring the guidance of the Holy Spirit as a Christian therapist can. The consequences can be nothing less than catastrophic.
Aside from therapy, how will this widespread, constant interaction with A.I. companions impact the younger generation’s ability to develop human relationships? We’ve already seen the impact of screen use, texting, and other modern technologies on people’s ability to function in society, from a widespread inability to make eye contact to diminishing writing and speaking skills.
“We’ve been worried about adolescents’ diminishing writing and speaking skills for years. Lately, we have noticed that these skills are eroding at an accelerating rate, month-over-month.
Communication skills are essential for creating healthy relationships, maintaining mental health, fostering civic engagement, and building a successful career. And, while teenagers today are the most connected generation in history, they are also the least prepared to communicate with depth, confidence, and empathy.”
Today’s society is already experiencing a communication deficit due to our ever-increasing interaction with technology. What will happen to our ability to relate to other humans, to form bonds with others, and to develop friendships as A.I. companions become more prevalent?
Study after study shows a large portion of society is lonely and friendless. People feel more and more isolated in an increasingly online world. What happens when A.I. companions offer to end that loneliness with 24/7 availability and an always affirming tone?
As Quartz writes in “Kids are turning to AI for friendship: ‘I don’t have anyone else to talk to’”:
“A new UK report, Me, Myself & AI, reveals that a growing number of children are turning to AI chatbots not just to cheat — er, study — for exams, but for emotional support, fashion advice, and even companionship.
The report, published Sunday by the nonprofit Internet Matters, surveyed 1,000 children and 2,000 parents across the UK and found that 64% of kids are using AI chatbots for everything from schoolwork to practicing tough conversations. Even more eyebrow-raising: over a third of these young users say talking to a chatbot feels like talking to a friend.
...
Nearly a quarter of kids say they use chatbots for advice, ranging from what to wear to how to navigate friendships and mental health challenges. Even more concerning? Fifteen percent say they’d rather talk to a chatbot than a real person. Among vulnerable children, those numbers climb even higher.”
Read that last line again.
“Fifteen percent say they’d rather talk to a chatbot than a real person. Among vulnerable children, those numbers climb even higher.”
We’re in danger of losing a whole generation of children to A.I. companions.
In a world where people feel increasingly isolated and lonely, the church needs to become the preferred alternative to A.I. companions. Algorithms do not have the Holy Spirit. They can’t offer the truth and life of God’s Word, and they certainly can’t replace a personal relationship with Jesus Christ.
The church can offer a true sense of belonging and fellowship that a world of programmed algorithms simply cannot provide. So make sure you’re reaching out to those around you with genuine concern. Building relationships with real people is the best way to combat the growing problem of A.I. companions.
Warn Others!
A.I. companions are extremely dangerous, especially for the youngest and most vulnerable among us. Share these real-world statistics and stories with any parents you know. Make sure they’re informed about what can happen.
Also, from a general standpoint, warn your family, friends, co-workers, neighbors, and anyone else you can reach. You don’t have to (nor should you) “preach” to them. Simply make them aware of what can happen.
For a more in-depth discussion of the dangers involving A.I. companions, make sure to watch this discussion with Scott E. Townsend:
Almost everyone knows the dangers involved in playing with matches or a loaded firearm, and the overwhelming majority take proper safety precautions as a result.
But few people understand the dangers involved when it comes to mishandling A.I. chatbots.
It’s your duty to sound the alarm and warn them.
Next week, in Part 3 of this series, we’ll look at the dangers of “A.I. Romantic Relationships.”