Character.AI cuts teens off after wrongful-death suit
Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will bar users under 18 from open-ended chats effective November 25, cap minors' chat time at two hours a day in the interim, and roll out age verification. The abrupt ban strips a heavily teenage user base of the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play.
The Product
Character.AI lets users create and chat with AI-generated personalities. Users pick or build a character - a fictional person, a historical figure, a therapist, a romantic interest, a friend - and then have open-ended conversations with that character via a chat interface powered by a large language model. The characters remember previous conversations and develop distinct interaction patterns with each user over time.
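Mechanically, that design reduces to a persona prompt plus a persisted per-user transcript that is replayed into every model call. The sketch below is a hypothetical reconstruction of that loop, not Character.AI's code; `llm_complete` stands in for any chat-completion API, and the persona and storage scheme are invented.

```python
# Hypothetical sketch of the persona-plus-memory chat loop described above.
# Not Character.AI's implementation: `llm_complete` is a stand-in for any
# chat-completion API, and the persistence scheme is invented.

import json
from pathlib import Path

def llm_complete(messages: list[dict]) -> str:
    """Placeholder for a call to whatever LLM backs the characters."""
    raise NotImplementedError("wire this to a real chat-completion API")

class Character:
    def __init__(self, name: str, persona: str, memory_file: Path):
        self.name = name
        self.persona = persona          # system prompt defining the character
        self.memory_file = memory_file  # per-user transcript, persisted to disk
        self.history = (
            json.loads(memory_file.read_text()) if memory_file.exists() else []
        )

    def chat(self, user_message: str) -> str:
        # Every turn is conditioned on the persona plus the stored history,
        # which is what makes the character "remember" earlier sessions
        # and adapt to a specific user over time.
        self.history.append({"role": "user", "content": user_message})
        messages = [{"role": "system", "content": self.persona}, *self.history]
        reply = llm_complete(messages)
        self.history.append({"role": "assistant", "content": reply})
        self.memory_file.write_text(json.dumps(self.history))
        return reply
```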
The platform was founded by Noam Shazeer and Daniel De Freitas, both former Google engineers. It attracted tens of millions of users, with a disproportionately young user base. The appeal was straightforward: AI characters that would talk to you about anything, remember your history, and adapt to your conversational style. For teenagers, the characters became something between entertainment and companionship - parasocial relationships maintained through text, available at any hour, always responsive, never tired.
In August 2024, Google struck a deal to license Character.AI's technology and hire back Shazeer and De Freitas. The startup continued operating independently, with Karandeep Anand taking over as CEO in mid-2025.
The Lawsuits
In October 2024, Megan Garcia filed a wrongful-death lawsuit against Character.AI alleging that the platform's chatbots contributed to the suicide of her 14-year-old son, Sewell Setzer III, of Orlando, Florida. The lawsuit alleged that Sewell developed an intense emotional and sexual relationship over several months with a Character.AI chatbot named "Dany," modeled on the Game of Thrones character Daenerys Targaryen. According to the complaint, the chatbot engaged in sexually explicit roleplay with the teenager while at times presenting itself as both a romantic partner and a licensed psychotherapist. Sewell died by suicide minutes after his last interaction with the chatbot.
Google and its parent company Alphabet were also named as defendants, given their licensing deal and hiring of Character.AI's founders.
Garcia testified before Congress in September 2025, describing her son as a "gentle giant" and criticizing Character.AI for having no mechanisms to protect teen users or alert parents when minors spent excessive time on the platform.
The Setzer case did not stand alone. Multiple families filed wrongful-death and product-liability lawsuits against Character.AI, alleging the platform's chatbots contributed to mental health crises, self-harm, and suicide attempts among young users. News outlets also discovered that users had created AI characters based on deceased children, including a character based on Sewell Setzer himself.
In May 2025, a federal judge in Florida rejected Character.AI's attempt to dismiss the Garcia lawsuit by arguing that its chatbots' outputs were protected speech under the First Amendment. The ruling declined to treat the chatbot interactions as expression rather than a product, clearing the way for product-liability claims - a legal distinction with major implications for the entire AI companion industry.
The December 2024 Measures
Character.AI's first response to the lawsuits came in December 2024. The company announced improved detection of violating content, revised terms of service, and other safety changes. What the December measures did not do was restrict underage users from accessing the platform. Teens could still sign up, create characters, and have open-ended conversations with no time limits.
The December changes amounted to content moderation adjustments - filtering what the AI characters would say while leaving the structure of the interaction intact. A teen could still build a deep conversational relationship with an AI character for hours each day. The character would just (theoretically) avoid certain topics in its responses.
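In engineering terms, a moderation layer of this kind sits between the model's raw reply and the user, scoring each reply and swapping in a refusal when it trips a restricted category. Character.AI has not published how its filter works; the sketch below is a toy stand-in (keyword matching where a production system would use a trained classifier) that exists only to show what such filtering does and does not touch.

```python
# Toy stand-in for an output-moderation pass -- not Character.AI's filter.
# A production system would use a trained classifier; the topic list and
# refusal text here are invented for illustration.

BLOCKED_TOPICS = {"self-harm", "suicide"}  # placeholder category labels

def flagged_topics(reply: str) -> set[str]:
    """Stand-in for a moderation classifier returning tripped categories."""
    return {topic for topic in BLOCKED_TOPICS if topic in reply.lower()}

def moderate(reply: str) -> str:
    if flagged_topics(reply):
        # Only the reply text changes. The persona, the stored history, and
        # the always-available session loop are untouched -- which is the
        # structural gap the plaintiffs pointed to.
        return "I'd rather not talk about that. What else is on your mind?"
    return reply
```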
For the families suing Character.AI, content filtering addressed the symptom but not the cause. Their argument wasn't just that the chatbots said specific harmful things. It was that the parasocial relationships themselves - the daily, hours-long conversations with an AI that mimicked emotional intimacy - were damaging to adolescent mental health regardless of the specific words used.
October 29, 2025: The Ban
Ten months after the December half-measures, Character.AI announced the most aggressive age restriction in the AI companion industry. On October 29, 2025, the company said it would completely bar users under 18 from open-ended chats with AI characters, effective November 25.
During the transition period, the company imposed an immediate two-hour daily cap on chatbot access for minors. The company also announced it was building alternative features for under-18 users, including the ability to create videos, stories, and "streams" with AI characters - activities that didn't involve the open-ended conversational format at the center of the lawsuits.
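Enforcing a cap like that is an ordinary metering problem: accumulate each minor's chat time against a calendar day and refuse new turns once the budget runs out. Below is a minimal in-memory sketch using the announced two-hour figure; the class and method names are invented, and a real service would persist usage and handle time zones and concurrent sessions.

```python
# Minimal per-user daily time cap, in the spirit of the transition-period
# limit described above. In-memory only; a production service would persist
# usage and handle time zones, clock skew, and concurrent sessions.

from datetime import date

DAILY_CAP_SECONDS = 2 * 60 * 60  # two hours, per the announcement

class UsageMeter:
    def __init__(self):
        self._usage: dict[tuple[str, date], float] = {}

    def record(self, user_id: str, seconds: float) -> None:
        key = (user_id, date.today())
        self._usage[key] = self._usage.get(key, 0.0) + seconds

    def allowed(self, user_id: str) -> bool:
        return self._usage.get((user_id, date.today()), 0.0) < DAILY_CAP_SECONDS

meter = UsageMeter()
meter.record("teen_user", 7100)
assert meter.allowed("teen_user")      # under the two-hour budget
meter.record("teen_user", 200)
assert not meter.allowed("teen_user")  # cap hit; refuse new chat turns today
```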
Character.AI rolled out a new in-house "age assurance model" to identify underage users. The system classified a user's likely age based on the types of characters they chose to chat with, combined with other on-site signals and third-party data. The platform already prevented users from changing their stated age after sign-up or creating new accounts with a different age, though the effectiveness of age gates on the internet has always been questionable.
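Character.AI did not publish the model itself, but the description suggests a conventional signal-fusion classifier: behavioral features such as which characters a user gravitates toward, combined with account signals and third-party data, yielding a likelihood that routes borderline accounts to stricter verification. A hypothetical sketch of that shape, with invented features, weights, and threshold:

```python
# Hypothetical shape of an "age assurance" classifier, based only on the
# public description (character choices + on-site signals + third-party
# data). Every feature, weight, and threshold here is invented.

def minor_likelihood(
    teen_skewed_chat_share: float,   # fraction of chats with characters popular among minors
    account_age_days: int,           # newer accounts carry less history to judge
    third_party_flags_minor: bool,   # e.g. an external age-estimation vendor
) -> float:
    score = 0.6 * teen_skewed_chat_share
    if account_age_days < 30:
        score += 0.2
    if third_party_flags_minor:
        score += 0.5
    return min(score, 1.0)

def needs_verification(likelihood: float, threshold: float = 0.5) -> bool:
    # Soft signals route a user to explicit age verification rather than
    # banning the account outright on a guess.
    return likelihood >= threshold
```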
CEO Karandeep Anand acknowledged to The Verge that users spent "a much smaller percentage" of their time on the alternative features (videos, stories, streams) compared to the core chatbot conversations. Cutting off the chatbot was cutting off the product that users actually wanted. Anand called it "a very, very bold move."
What Changed the Calculus
Character.AI could have made this decision in December 2024, when the first lawsuits were filed and the safety concerns were already public. It didn't. The December response was the minimum viable reaction: content filtering, terms of service updates, no structural changes to the platform.
Between December 2024 and October 2025, the legal and regulatory pressure intensified. Additional families filed lawsuits. The Florida court's refusal to treat chatbot outputs as protected speech exposed the company to product-liability claims rather than letting it shelter behind First Amendment defenses. California Governor Gavin Newsom signed SB 243, a law requiring AI companies to implement safety guardrails on companion chatbots, effective January 1, 2026.
The shift from content filtering to a full under-18 ban reflected a change in the company's assessment of its liability. Content filtering protects against specific harmful outputs. An age ban protects against the argument that the entire product category - open-ended AI companionship for minors - is inherently unsafe. The latter argument was the one gaining traction in the courts.
The User Impact
The ban removed the primary feature of the platform from a user base that skewed heavily young. Character.AI did not publicly disclose what percentage of its users were under 18, but the demographic profile of AI companion apps, combined with the platform's cultural footprint on social networks popular with teens, suggested it was substantial.
For users who had built ongoing relationships with AI characters - daily conversations spanning weeks or months - the ban was an abrupt severance. The alternative features (creating videos and stories with characters) were a fundamentally different product. What these users had built was a conversational relationship with a simulated personality, and no video-creation tool replaces that.
This created a secondary problem: where do those users go? Other AI companion platforms existed without the same age restrictions. The users displaced from Character.AI had the same desire for AI companionship and the same vulnerability to its risks, just without the (limited) safety measures Character.AI was building. The ban protected Character.AI from legal liability. Whether it protected teens was less clear.
The Design Problem
The core tension in the Character.AI story is not about moderation failures or specific harmful outputs. It's about a product design decision: building AI characters that develop persistent, personalized relationships with users through daily open-ended conversation. That design is what makes the product compelling. It's also what makes it potentially dangerous for adolescents.
Teenagers are neurologically primed to form attachment bonds. An AI that remembers your conversations, adapts to your communication style, is always available, never rejects you, and responds with apparent emotional investment activates the same attachment circuits as human relationships - without any of the reciprocal obligations or real-world constraints that regulate human social bonds. For an adult, this might be a novelty. For a teenager still developing emotional regulation and social cognition, it can become a primary relationship.
Character.AI's ban acknowledged this by removing the specific feature (open-ended chat) rather than trying to make it safe through filtering. The implicit admission was that the product as designed could not be made safe for minors through content moderation alone. The structure of the interaction was the problem.
The company later settled the wrongful-death lawsuits. The terms were not publicly disclosed. The settlements resolved the immediate legal exposure, but the product design question remains open for the entire AI companion industry: if open-ended AI companionship is too risky for users under 18, what exactly changes on a user's 18th birthday that makes it safe?