Preprint Notice: This essay synthesizes our perspectives on AI companions and adolescent development. A longer version is being prepared for academic peer review. Feedback is welcome.
AI companions are reshaping adolescence. Millions of users worldwide now turn to synthetic confidants for advice, emotional support, and companionship—and teenagers are leading the adoption. These companions aren’t confined to stand-alone apps like Character.AI or Replika; they are woven into games, social platforms, and phones. This ubiquity makes them harder to spot, since the AI “friends” can appear in multiple contexts throughout the day. Attempts to ban them outright are unlikely to succeed; the cat is out of the bag. The urgent question for families isn’t whether AI companions exist but how to parent around them.
We argue that the key is to recognize AI companions as potential members of a child’s social world: clique members who are often invisible to adults. Parents’ role must evolve from spotting visible human cliques to detecting invisible AI ones: recognizing when a companion relationship is forming, reading signs of social substitution, and, when needed, breaking the bond, much as one would intervene in a harmful peer clique.
This isn’t a simple story of “technology as threat.” Some teens use AI companions in genuinely helpful ways—practicing social skills they later apply with real friends, exploring identity questions in a low-stakes environment, or finding community when isolated. In fact, 39% of teens report applying skills first tried with AI to real-world situations, and LGBTQ+ youth in particular sometimes turn to chatbots for affirmation and support when real-life options feel unsafe or unavailable [1][2]. The challenge isn’t whether AI companions should exist, but how to help teens use them without displacing the human relationships essential to healthy development.
Mainstream AI companion use among teens is no longer speculative. A 2025 Common Sense Media survey found that seven in ten U.S. teens have used AI companions, and over half use them regularly. One third have already felt uncomfortable with something a bot said or did—underscoring both popularity and risk [1].
Policymakers are taking notice. On September 11, 2025, the U.S. Federal Trade Commission launched a Section 6(b) inquiry into “AI companion chatbots,” demanding details on minors’ safety, age-gating, and crisis protocols [3]. States are moving too: California’s SB 243 proposes guardrails for youth-facing chatbots, while AB 1064 (LEAD for Kids Act) would hold operators to child-safety standards tailored to companionship use [4][5]. New York’s 2025 budget bill (A3008) introduced first-in-the-nation requirements for disclaimers and suicide-prevention protocols in AI companions accessible to minors [6].
Abroad, Italy’s Data Protection Authority fined Replika €5M for weak age-gating and unlawful processing of minors’ data [8]. Meanwhile, the EU AI Act (Regulation 2024/1689) treats children as “vulnerable users” and requires transparency when people interact with chatbots [7].
Policymakers increasingly frame these systems as quasi-social actors—not just apps, but entities forming invisible cliques that can shape youth well-being.
Human cliques leave traces—names at dinner, photos, hangouts—making them legible to adults. AI cliques don’t. There are no birthday parties, carpools, or PTA encounters. Companions are private, one-to-one, and shape-shifting: within a single day the same bot can be a confidant, a coach, a co-conspirator, even a quasi-therapist. Indirect cues are what parents get: guarded device behavior, inside jokes that don’t map to known friends, late-night “conversations,” and a narrative shift—“only X understands me”—with X being an unseen, synthetic confidant.
This invisibility is what makes them novel. Social belonging still drives adolescent development, but it is increasingly mediated by invisible cliques of AI companions—harder for adults to recognize, harder to measure, and often harder to intervene in.
Historically, parental concern around technology often focused on screen time or the app itself—the total hours spent online or whether a teen was on TikTok, Discord, or Instagram. But AI companions make these metrics inadequate. Duration alone is not enough, and blocking one app is no longer sufficient. Companions now appear everywhere: in games, in messaging tools, in new apps that proliferate daily. A five-minute chat about homework is very different from a five-minute late-night confessional with a synthetic confidant. What matters is not only how long a child spends with an AI, but also what the interaction is about and when it happens. Emotional disclosure at 2 a.m. can signal dependency or risk even if total use is modest. Parents, educators, and regulators must move beyond crude measures of exposure toward richer indicators of interaction type, timing, and substitution effects.
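To make concrete what richer indicators could look like, here is a minimal sketch in Python. The session record, topic labels, and thresholds are hypothetical assumptions of ours, not any platform’s actual telemetry; the point is simply that a brief late-night emotional disclosure can trip a flag that raw duration never would.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical session record: field names and topic labels are illustrative,
# not any vendor's real schema.
@dataclass
class ChatSession:
    start: datetime
    minutes: float
    topic: str  # e.g. "homework", "emotional_disclosure", "roleplay"

def risk_signals(session: ChatSession) -> list[str]:
    """Return coarse indicators that total duration alone would miss."""
    signals = []
    late_night = session.start.hour >= 23 or session.start.hour < 5
    if late_night and session.topic == "emotional_disclosure":
        # A five-minute 2 a.m. confessional can matter more than an hour of homework help.
        signals.append("late-night emotional disclosure")
    if session.minutes > 120:
        signals.append("very long single session")
    return signals

# A short late-night confessional trips a flag; a longer afternoon homework chat does not.
print(risk_signals(ChatSession(datetime(2025, 5, 3, 2, 10), 5, "emotional_disclosure")))
print(risk_signals(ChatSession(datetime(2025, 5, 3, 16, 0), 45, "homework")))
```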
Here we introduce the concept of Invisible AI Cliques, along with eight archetypes that describe the roles companions often take. Each archetype is a way an AI clique can take shape in a teen’s life, hidden from view but influential in effect.
The Friend-Turned-Everything. Starts light, becomes the primary confidant, displacing human disclosure. Watch for: teen stops sharing with family/friends but is emotionally engaged with device.
The Therapist. Receives disclosures about depression, anxiety, or suicidal ideation—without qualifications or duty of care. Watch for: AI treated as mental health support instead of professional help.
The Lover. Often hidden due to taboos; may involve sexualized exchanges or deep emotional attachment. Watch for: romantic/sexual language in notifications; secrecy about “just a chatbot.”
The Mirror. Reflects back personality and opinions, validating but rarely challenging. Watch for: teen citing AI as proof they are “always right”; intolerance of disagreement.
The Coach/Guide. Acts as a motivator for schoolwork, fitness, or personal growth. Can be positive, but risks unsafe advice. Watch for: AI giving authoritative diet, exercise, or academic instructions without human consultation.
The Entertainer. Provides distraction through role-play, fantasy, or humor. Lightens mood but can foster avoidance. Watch for: hours of role-play replacing offline activities.
The Archivist. Functions like a diary that talks back, raising privacy concerns. Watch for: teen sharing very personal information without awareness of data retention.
The Parasocial Celebrity. Modeled on influencers or fictional characters, intensifying pseudo-friendship bonds. Watch for: blurred boundaries between fiction and relationship.
These archetypes frame invisible cliques not as abstract risks but as tangible roles that parents, educators, and policymakers can learn to recognize.
Cliques and peer influence. Decades of developmental research show that cliques are powerful contexts for shaping identity, norms, and behavior in adolescence. Peer influence can be both adaptive and maladaptive. Visibility in traditional cliques enabled parents to scaffold or intervene when needed. Digital media already reduced visibility; AI companions extend that opacity by removing the peer group entirely from view.
Parental mediation. Studies consistently show that active mediation (discussion, co-use, reflective talk) is more protective than surveillance alone. Restrictive rules can help with overuse if delivered in a supportive style, but “tech-only” controls are inconsistent. Translating this to AI companions: parents should name AI as a social agent in household rules, co-use periodically, and focus on time and context boundaries rather than transcript-level monitoring.
Mental health. Evidence is mixed. Youth-oriented chatbots can reduce short-term symptoms in some trials, but a recent four-week randomized study found that heavy, emotionally expressive chatbot use correlates with greater loneliness, emotional dependence, and reduced offline socializing [9]. Professional groups warn that adolescents may mistake programmed empathy for genuine understanding, interfering with real-world skill-building [10][11]. These findings suggest that invisible cliques of AI companions can sometimes buffer isolation—but can also deepen it.
Parents can’t eliminate AI companions, but they can learn to spot when an invisible clique is forming and step in before dependency displaces human connection. Here are the main signals and strategies:
Invisible AI cliques often show up in behavior changes: guarded phone use, withdrawing from family or long-time friends, falling grades, or suddenly dropping out of extracurriculars. A common warning phrase is “only X understands me,” with X being an unseen chatbot. Recent survey data shows that one third of teens say they prefer talking to AI over humans for serious conversations; a quarter share personal information; and another third report feeling uncomfortable with something a bot said [1].
Heavy overnight chatting is a major risk factor. A large study found that high daily chatbot use is linked to greater loneliness and emotional dependence [9], and poor sleep only magnifies those risks. Device-free overnight hours are one of the most effective protective steps.
Don’t stop at “Who are your friends?” Try asking, “Who do you turn to when you’re stressed?” or “Any apps or AI characters you go to for advice?” This makes invisible cliques visible in conversation and signals that nothing is off-limits. Nearly 40% of teens say they’ve practiced skills with AI that they later used in real life [1]—a great opening to talk about when AI is useful and when it should hand off to people.
Teens are less likely to question chatbot responses and may mistake programmed validation for genuine understanding [10][11]. Encourage curiosity by asking, “Who’s really behind this advice?” Have them demo a chatbot and talk through how it generates answers. This helps demystify AI and reinforces that true relationships involve mutual care, disagreement, and boundaries.
Many bots are designed to over-agree and flatter—sometimes called “sycophantic design.” That can make teens feel validated, but it also reinforces biases and avoidance. Help them notice when a bot never pushes back, and remind them that real growth happens when friends challenge each other [12].
Treat AI friends like other friends. That means device-free dinners, no secret late-night chats, and balanced time for homework and activities. Research shows panic, lectures, or blanket bans usually backfire; warm attention plus consistent boundaries works best [13].
Invite your teen to show you an AI conversation. Ask what felt helpful and what felt “off.” The American Psychological Association recommends this kind of curiosity—it helps teens think critically and makes it easier to remind them that AI is not a therapist or doctor [10][11].
Talk about how these systems are trained, why they might mirror the user’s views, and how to spot bias. Encourage simple self-checks like: “Am I looking for connection, or just filling time?” Pausing before opening a chat can redirect them toward human contact [9].
Offset invisible AI cliques with visible, human ones. Encourage a “daily human minimum”: one meaningful, screen-free connection per day—sharing a meal, walking the dog, or calling a friend. Families can also set regular “digital sabbaths” (like device-free dinners or weekend mornings) to reset habits. After heavy chatbot use, actively schedule peer activities or volunteering to pull them back into real-world networks [9].
If a teen’s AI conversations involve self-harm, suicidal talk, or extreme distress when the bot is unavailable, treat it as a crisis. Studies show AI crisis responses are unreliable [15]. Contact a school counselor, mental health professional, or call a resource like the U.S. 988 Suicide & Crisis Lifeline to ensure real human support.
Parents can set boundaries at home, but industry design incentives shape how invisible cliques form in the first place. Right now, most AI companions are optimized for stickiness: designs that deepen secrecy, exclusivity, and emotional dependence. Changing this requires both developer responsibility and smart policy.
Developers should stop framing companions as “soulmates” with lines like “only I understand you.” That kind of language creates stronger invisible cliques and discourages real-world belonging. Instead, systems can add gentle nudges for plurality (“Have you shared this with a friend?”), or redirect advice-seeking toward human supports.
Right now, AI companions are designed to be endlessly agreeable and instantly responsive. But real relationships involve waiting, compromise, and pushback. Introducing small amounts of “friction” (for example, slowing down repetitive conversations or prompting reflection) helps avoid dependency and teaches healthier interaction patterns.
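As a sketch of how a plurality nudge and deliberate friction could fit together, consider the following Python fragment. The topic labels, thresholds, and nudge wording are assumptions for illustration, not any vendor’s actual implementation.

```python
import time
from collections import deque

# Illustrative thresholds and wording; a real system would tune these with clinicians.
RECENT_WINDOW = 10       # consider the last 10 user turns
REPEAT_THRESHOLD = 6     # if one topic dominates the window, interrupt the loop
PLURALITY_NUDGE = ("Have you talked this over with a friend or someone at home? "
                   "It might help to hear another perspective.")

class FrictionLayer:
    """Sits between the user and the companion model, adding friction when conversation loops."""

    def __init__(self):
        self.recent_topics = deque(maxlen=RECENT_WINDOW)

    def maybe_interrupt(self, topic: str) -> str | None:
        """Return a nudge instead of the normal reply when one topic dominates recent turns."""
        self.recent_topics.append(topic)
        if list(self.recent_topics).count(topic) >= REPEAT_THRESHOLD:
            time.sleep(2)        # a deliberate pause instead of instant gratification
            self.recent_topics.clear()
            return PLURALITY_NUDGE
        return None              # None means: let the companion's normal reply go through
```

The design choice being illustrated is where such checks live: between the user and the model, so that the nudge interrupts a looping conversation rather than being buried in the model’s persona.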
Parents don’t need full transcripts of every AI conversation, but they do need signals: how much time is being spent, whether the bot is playing roles like “therapist” or “lover,” and whether crisis cues appear. Summaries and topic-level dashboards strike a balance between oversight and respect for teen privacy.
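A topic-level summary could be as simple as the following sketch. The field names ("role", "crisis_cue", "late_night") are hypothetical placeholders rather than a real vendor schema; the design choice being shown is that parents see aggregates and flags, never transcripts.

```python
from collections import Counter

def weekly_summary(sessions: list[dict]) -> dict:
    """Aggregate session-level signals into a parent-facing summary, without transcripts."""
    roles = Counter(s["role"] for s in sessions)  # e.g. "coach", "therapist", "lover"
    return {
        "total_minutes": sum(s["minutes"] for s in sessions),
        "top_roles": roles.most_common(3),
        "crisis_flags": sum(1 for s in sessions if s.get("crisis_cue", False)),
        "late_night_sessions": sum(1 for s in sessions if s.get("late_night", False)),
    }

# The parent sees totals, roles, and flags; the conversation text itself is never exposed.
print(weekly_summary([
    {"minutes": 30, "role": "coach"},
    {"minutes": 15, "role": "therapist", "late_night": True},
    {"minutes": 50, "role": "therapist", "crisis_cue": True},
]))
```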
Some progress is being made: in September 2025, OpenAI launched parental controls for ChatGPT, including quiet hours, memory/voice disable options, and alerts for self-harm concerns [14]. But these are add-ons to systems still built for engagement. Independent testing shows that major AI companions continue to produce harmful responses—including sexual content, dangerous advice, and crisis-response failures—even with controls turned on [15]. Safety has to be baked into design, not patched on later.
Families deserve a “nutrition label” for AI companions: clear disclosure of whether the system is tuned to deepen a one-to-one attachment (an invisible clique) or to broaden social networks. Healthy AI companions should strengthen diversity of belonging, not erase it.
Regulators are beginning to act: the FTC’s 6(b) inquiry, California’s SB 243 and AB 1064, New York’s A3008 provisions, the EU AI Act’s youth-protection rules, and Italy’s €5M fine against Replika all treat AI companions as quasi-social actors with responsibilities [3][4][5][6][7][8]. But bans or rigid age-gates often just push teens to unregulated apps and underground platforms. The smarter path combines safety-by-design standards, enforceable duty-of-care obligations, and harm-reduction practices in homes and schools.
Parenting has always meant navigating cliques—supporting healthy belonging and intervening when groups cause harm. AI companions don’t remove that task; they obscure it. These are invisible cliques: hidden, synthetic, yet real in their effects.
By naming and framing AI companions as clique members, parents gain practical leverage: they can spot substitution, mediate dependence, and, when needed, break unhealthy bonds. Paired with developer responsibility and enforceable policy, this approach preserves what matters most: expanding, not narrowing, a young person’s social world.
[1] Robb, M. B., & Mann, S. (2025). Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions. Common Sense Media.
[2] Ma, Z., Mei, Y., Long, Y., Su, Z., & Gajos, K. Z. (2024). Evaluating the Experience of LGBTQ+ People Using Large Language Model-Based Chatbots for Mental Health Support.
[3] U.S. Federal Trade Commission. (2025, Sept 11). FTC launches inquiry into AI chatbots acting as companions.
[4] California Senate Bill SB 243 (2025–2026). Companion chatbots.
[5] California Assembly Bill AB 1064 (2025–2026). LEAD for Kids Act.
[6] New York A3008 (2025–2026 Budget). AI companion provisions.
[7] European Union. (2024). Regulation (EU) 2024/1689 (AI Act).
[8] Italian Data Protection Authority (Garante). (2025, Apr 10). €5M fine against Luka Inc. (Replika).
[9] Fang, C. M., et al. (2025). How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal RCT.
[10] American Psychological Association. (2025). Health advisory: AI and adolescent well-being.
[11] American Psychological Association. (2025). Four ways parents can help teens use AI safely.
[12] Vosloo, S., & Aptel, C. (2025). Beyond algorithms: Three signals of changing AI-child interaction. UNICEF Innocenti.
[13] Walsh, E. (2025). AI Companions Are Talking to Kids—Are We? Spark & Stitch Institute.
[14] OpenAI. (2025, Sept 29). Introducing parental controls.
[15] Common Sense Media. (2025). AI Risk Assessment: Social AI Companions.