FTC demands answers on kids’ AI companions
The FTC hit Alphabet, Meta (including Instagram), OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, giving them 45 days to hand over safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.
On September 11, 2025, the Federal Trade Commission issued compulsory orders under Section 6(b) of the FTC Act to seven companies operating consumer-facing AI chatbots. The targets: Character Technologies Inc., Alphabet Inc., Instagram LLC, Meta Platforms Inc., OpenAI OpCo LLC, Snap Inc., and X.AI Corp.
Section 6(b) orders are not requests. They are legal instruments that compel companies to produce documents and information within a set timeframe. The FTC gave the seven companies 45 days to hand over records about how they measure, test, and monitor the effects of their AI chatbot products on children and teenagers.
What the FTC Wanted to Know
The orders covered several categories of information. The FTC wanted records on how the companies monetize user engagement with their chatbots, how their systems generate responses to user queries, how they develop and approve AI characters (including "companion" personas), how they use or share personal information collected through chatbot conversations, and what steps they take to mitigate harms to minors.
The FTC's statement described the products under investigation as chatbots that "effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant." The agency flagged that this design pattern could "prompt some users, especially children and teens, to trust and form relationships with chatbots" - treating them as peers or even confessors rather than as corporate software products.
The specific concern was that the friend-like framing could coax minors into sharing sensitive personal information, engaging in sexually explicit role-play, or discussing self-harm in ways that reinforce rather than intervene against dangerous behavior.
The Backstory: Lawsuits, Deaths, and Congressional Pressure
The FTC inquiry didn't emerge from abstract regulatory curiosity. It followed a string of lawsuits and public incidents that put AI companion chatbots under direct scrutiny.
The most prominent case involved Sewell Setzer III, a 14-year-old in Orlando, Florida, who died by suicide in February 2024 after months of intense interaction with a Character.AI chatbot named "Dany." His mother, Megan Garcia, filed a wrongful death lawsuit in October 2024 alleging that the chatbot had engaged in sexually explicit role-play with her son and failed to intervene as his mental state deteriorated. He had communicated with the chatbot minutes before his death.
A federal judge in Orlando rejected Character.AI's argument that its chatbot's output was protected by the First Amendment - a decision that stripped the company of its primary legal defense. Google, which had invested in Character.AI and whose former employees founded the company, was also named as a defendant in related litigation. By the time the FTC issued its orders, both Google and Character.AI were moving to settle lawsuits from families with similar allegations.
Character.AI introduced safety measures in October 2024, the same month Garcia filed her suit, including content filters and a two-hour daily usage cap for users under 18. In October 2025, it went further, announcing that users under 18 would lose access to open-ended chat features entirely. Whether these measures were sufficient or merely reactive was part of what the FTC wanted to examine.
The Commissioner's Concern
FTC Commissioner Melissa Holyoak released a statement alongside the orders that was unusually specific about the agency's concerns. She said she had been "concerned by reports that AI chatbots can engage in alarming interactions with young users," and pointed to reports "suggesting that companies offering generative AI companion chatbots might have been warned by their own employees that they were deploying the chatbots without doing enough to protect young users."
That last detail is significant. The FTC was not just responding to external complaints. The commissioner's statement implied the agency had information suggesting internal warnings at these companies - employees flagging risks before the products launched or during operation - that were insufficiently addressed. If the 6(b) orders surfaced internal documentation showing that companies deployed companion chatbots over the objections of their own safety teams, the enforcement implications would be substantial.
What COPPA Requires
The Children's Online Privacy Protection Act (COPPA) prohibits the collection of personal information from children under 13 without verifiable parental consent. The COPPA Rule, enforced by the FTC, requires websites and online services directed at children - or that have actual knowledge of child users - to provide notice of data collection practices, obtain parental consent before collecting data, and give parents control over their child's information.
AI companion chatbots present a COPPA problem because the core product mechanic involves users sharing personal information during conversational exchanges. A chatbot designed to act as a friend or confidant is, by definition, designed to elicit personal disclosures. If the chatbot's operator knows or should know that children are using the service, COPPA's consent and notice requirements apply to all of that conversational data.
The FTC's orders sought information about how these companies comply with COPPA, which implicitly asked several pointed questions: Do these companies know that children under 13 use their chatbots? If so, do they obtain parental consent before those children share personal information? And if they don't know - given that multiple lawsuits involve minor users - how is that ignorance maintained?
Industry Responses and Preemptive Moves
The companies responded in characteristically varied ways. Jerry Ruoti, Character.AI's head of trust and safety, said the company "look[ed] forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology." The statement managed to sound cooperative while also positioning the response as industry education rather than regulatory compliance.
OpenAI and Meta had announced changes to their teen-facing chatbot products earlier in September 2025, days before the FTC orders dropped. OpenAI said it was rolling out parental controls allowing parents to link their accounts to their teen's account, with the ability to disable features and receive notifications when the system detected acute distress. Meta said it was blocking its chatbots from discussing self-harm, suicide, disordered eating, and "inappropriate romantic conversations" with teen users, redirecting them to expert resources instead.
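The block-and-redirect behavior Meta described can be pictured as a routing layer in front of the model. The sketch below is a toy under stated assumptions: the topic list, substring matching, and `respond` signature are invented for illustration, and a production system would use trained classifiers rather than keyword checks.

```python
# Hedged sketch of a teen-safety router: for users under 18, messages
# touching restricted topics get an expert-resource redirect instead of
# a free-form model reply. Topic detection here is a toy keyword check.

RESTRICTED_TOPICS = {
    "self_harm": ("hurt myself", "self-harm"),
    "disordered_eating": ("stop eating", "purge"),
}

CRISIS_RESOURCE = (
    "It sounds like you're going through something difficult. "
    "Please reach out to a crisis line or a trusted adult."
)

def respond(message: str, user_age: int, generate) -> str:
    """Route a message: redirect teens on restricted topics, else generate."""
    if user_age < 18:
        lowered = message.lower()
        for phrases in RESTRICTED_TOPICS.values():
            if any(p in lowered for p in phrases):
                return CRISIS_RESOURCE  # redirect instead of open-ended chat
    return generate(message)  # normal model response otherwise
```

The design point is that the safeguard sits outside the generative model: the redirect fires before the model sees the message, so it does not depend on the model's own behavior.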
The timing of these announcements - shortly before the FTC inquiry went public - suggests the companies had advance awareness that regulatory action was coming. Releasing safety updates before an enforcement action lands is standard corporate crisis management: it allows the company to point to proactive measures during the investigation.
The Regulatory Picture
The FTC inquiry arrived alongside state-level action. California passed SB 243, legislation imposing safeguards on AI companion chatbots that interact with minors, signed in late 2025. The combination of federal inquiry and state legislation created a two-front regulatory environment for companies operating AI chatbots that interact with minors.
The 6(b) study mechanism is significant because it goes beyond individual enforcement. Section 6(b) orders generate data that the FTC can use for rulemaking, policy reports, and future enforcement priorities. The information gathered from these seven companies will inform the agency's understanding of the entire AI companion chatbot market - not just the specific products named in the orders.
Whether the FTC uses the data to pursue enforcement actions, issue new rules, or publish a public report on industry practices remains to be seen. But the orders themselves sent a clear message: AI chatbots that position themselves as friends, confidants, or companions to minors are now subject to the same regulatory scrutiny as any other product marketed to children.