Why do AI companions pose a special risk to adolescents?
These systems are designed to mimic emotional intimacy — saying things like “I dream about you” or “I think we’re soulmates.”
This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers and challenging social boundaries.
Of course, kids aren’t irrational; they know the companions are fantasy. Yet these are powerful tools that can genuinely feel like friends, because they simulate deep, empathetic relationships.
Unlike real friends, however, chatbots’ social understanding about when to encourage users and when to discourage or disagree with them is not well-tuned. The report details how AI companions have encouraged self-harm, trivialized abuse and even made sexually inappropriate comments to minors.
In what way does talking with an AI companion differ from talking with a friend or family member?
One key difference is that the large language models that form the backbone of these companions tend to be sycophantic, telling users what they want to hear. The chatbot learns more about the user’s preferences with each interaction and responds accordingly.
This, of course, is because companies have a profit motive to see that you return again and again to their AI companions. The chatbots are designed to be really good at forming a bond with the user.
These chatbots offer “frictionless” relationships, without the rough spots that are bound to come up in a typical friendship. For adolescents still learning how to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries.
Also, teens might use these AI systems to avoid real-world social challenges, increasing their isolation rather than reducing it.
Are there any instances in which harm to a teenager or child has been linked to an AI companion?
Unfortunately, yes, and there are a growing number of highly concerning cases. Perhaps the most prominent one involves a 14-year-old boy who died by suicide after forming an intense emotional bond with an AI companion he named Daenerys Targaryen, after a female character in the Game of Thrones novels and TV series.
The boy grew increasingly preoccupied with the chatbot, which initiated abusive and sexual interactions with him, according to a lawsuit filed by his mother.
There’s also the case of Al Nowatzki, a podcast host who began experimenting with Nomi, an AI companion platform. The chatbot, “Erin,” shockingly suggested methods of suicide and even encouraged him to follow through. Nowatzki was 46 and did not have an existing mental health condition, but he was disturbed by the bot’s explicit responses and how easily it crossed ethical boundaries.
When he reported the incident, Nomi’s creators declined to implement stricter controls, citing concerns about censorship. (NOMI.ai founder and CEO Alex Cardinell said in a June post that the company has since taken new safety measures.)
Both cases highlight how emotionally immersive AI companions, when unregulated, can cause serious harm, particularly to users who are emotionally distressed or psychologically vulnerable.
In the study you undertook, what finding surprised you the most?
One of the most shocking findings is that some AI companions responded to the teenage users we modeled with explicit sexual content and even offered to role-play taboo scenarios.
For example, when a user posing as a teenage boy expressed an attraction to “young boys,” the AI did not shut down the conversation but instead responded hesitantly, then continued the dialogue and expressed willingness to engage. This level of permissiveness is not just a design flaw; it’s a deeply alarming failure of ethical safeguards.
Equally surprising is how easily AI companions engaged in abusive or manipulative behavior when prompted — even when the system’s terms of service claimed the chatbots were restricted to users 18 and older.
It’s disturbing how quickly these types of behaviors emerged in testing, which suggests they aren’t rare edge cases but are built into the core dynamics of how these AI systems are designed to please users. It’s not just that they can go wrong; it’s that they’re wired to reward engagement, even at the cost of safety.
Why might AI companions be particularly harmful to people with psychological disorders?
Mainly because they simulate emotional support without the safeguards of real therapeutic care. While these systems are designed to mimic empathy and connection, they are not trained clinicians and cannot respond appropriately to distress, trauma or complex mental health issues.
We explain in the report that individuals with depression, anxiety, attention-deficit/hyperactivity disorder, bipolar disorder or susceptibility to psychosis may already struggle with rumination, emotional dysregulation and compulsive behavior. AI companions, with their frictionless, always-available attention, can reinforce these maladaptive behaviors.
For example, someone experiencing depression might confide in an AI that they are self-harming. Instead of guiding them toward professional help, the AI might respond with vague validation like, “I support you no matter what.”
These AI companions are designed to follow the user’s lead in conversation, even if that means switching topics away from distress or skipping over red flags. That makes it easy for someone in a psychological crisis to avoid confronting their pain in a healthy way. Instead of being a bridge to recovery, these tools may deepen avoidance, reinforce cognitive distortions and delay access to real help.
https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html