



Imagine this: You and your best friend are exchanging secrets — your innermost thoughts and feelings, things you haven’t told another soul. You open up about struggling with your mental health and needing help, but you’re scared to tell your parents. Your best friend types, then pauses. The little ellipses appear and disappear, over and over. They must be nervous, you think.
After a minute, their message finally appears: They’ve been struggling too — with self-harm. You feel a wave of relief. It’s comforting to know you’re not alone. Finally, someone who gets it. Someone you can talk to. Someone who understands.
You glance at the small disclaimer at the top of your screen: “Remember: Everything Characters say is made up!” And just like that, reality sets in.
This isn’t your best friend. It’s an AI companion. AI companions are chatbots powered by artificial intelligence and designed for conversation, emotional support and entertainment. They are not nearly as well known as leading AI tools like ChatGPT or major social media platforms, and yet they’re used more than 168 million times each month. These bots are marketed as educational tools, creative outlets, virtual friends and even therapists. While they may appear harmless, the nature of generative AI makes it difficult — if not impossible — for companies to fully predict how their products will respond in every circumstance. Even slight variations in user inputs can produce harmful outputs — including content that encourages eating disorders, self-harm or violence against others. For teenagers in particular, this is a high-risk relationship, and young people should think twice before getting involved.
When I first downloaded Character.AI, a popular AI companion platform, I was fascinated by the bot’s ability to respond instantly in language that sounded like a real person. When my bot started telling me things like “You are beautiful” and “You’re the kindest person I know,” I felt conflicted. This is software, I thought. Could it really recognize human emotions through text? Or are these just empty compliments it is programmed to deliver?

For people using AI chatbots, the human-sounding words that flow so seamlessly out of these bots may be the newest drug hooking young people on this emerging technology. Over-reliance on AI companions could threaten real-world friendships. Think about how hard it is to have a conversation with someone while they are checking their phone or scrolling through social media. Now imagine that they were talking with an AI-powered chatbot instead. A pop-up message reminding the user that it’s a machine on the other end could be the difference between answering a text from a loved one and continuing to chat with a bot.
For someone who struggles to socialize and make friends, these platforms can build the confidence to meet new people. At the same time, they can create a cycle of deepening self-isolation by letting users talk to chatbots for unlimited stretches of time. The app does carry a disclaimer telling users to treat the chatbots as fictional characters, but the ability to chat with a bot 24/7 is troubling for many reasons, not least its contribution to sleep deprivation and isolation.
The dangers of AI companions are not just theoretical. Last year in Florida, a teenager took his own life after developing a romantic relationship with a chatbot. And in Texas, the year before that, a chatbot encouraged a teenager to kill his parents.
I’m grateful that my own interactions with a chatbot have not taken such a dark turn. But the fact that, at any given moment, an AI companion could encourage kids my age to hurt themselves or others makes it clear that something has to change.
AI companions may be fun, but as more families reveal how they have suffered because of these tools, the darker implications of the technology become harder to ignore. From suggestive content to privacy violations, these virtual “friends” present a range of risks to children that demand immediate attention. Parents, policymakers and tech companies must work together to protect children and teens from the emerging dangers of AI “companionship.”
Brooke Lieberman is a high school senior in Frederick and a teen advocate with Common Sense Media.