Platform Policy Stories

13 disasters tagged #platform-policy


Character.AI cuts teens off after wrongful-death suit

Oct 2025

Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play.

Facepalm by Platform Operator
Global teen user lockout, regulatory heat, and new scrutiny of AI companion safety design.
Tags: ai-assistant, safety, platform-policy (+1 more)

FTC demands answers on kids’ AI companions

Sep 2025

The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, giving them 45 days to hand over safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.

Facepalm by Platform Operator
Multiplatform compliance scramble, looming enforcement risk, and renewed scrutiny of AI companions aimed at kids.
Tags: ai-assistant, safety, legal-risk (+1 more)

AI-generated images and claims muddied Air India crash coverage

Jun 2025

After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts.

Facepalm by Social Platforms
Public misinformation; platform moderation challenges.
Tags: ai-hallucination, image-generation, platform-policy

Syndicated AI book list ran in major papers with made-up titles

May 2025

A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.

Facepalm by Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
Tags: journalism, ai-content-generation, ai-hallucination (+2 more)

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta expanded its AI assistant across apps, then limited it after high-profile bad answers, including on breaking news.

Oopsie by AI Product
Feature restrictions; reputational damage.
Tags: ai-assistant, ai-hallucination, platform-policy (+2 more)

Google’s AI Overviews told people to eat rocks

May 2024

Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.

Facepalm by Search Product
Mass reputational damage; feature dialed back and corrected.
Tags: ai-assistant, ai-hallucination, platform-policy (+1 more)

NYC’s official AI bot told businesses to break laws

Mar 2024

NYC’s Microsoft-powered MyCity chatbot gave inaccurate and sometimes illegal advice on labor and housing policy; the city kept it online.

Facepalm by Executive
City guidance channel distributed illegal advice; public backlash.
Tags: ai-hallucination, automation, legal-risk (+2 more)

Gemini paused people images after historical inaccuracies

Feb 2024

Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.

Facepalm by AI Product
Feature paused; trust hit; policy and model adjustments.
Tags: ai-hallucination, image-generation, platform-policy (+2 more)

DPD’s AI chatbot cursed and trashed the company

Jan 2024

UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.

Facepalm by Product Manager
Public embarrassment; service channel disabled; reputational hit.
Tags: automation, brand-damage, customer-service (+1 more)

Chevy dealer bot agreed to sell $76k SUV for $1

Dec 2023

Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.

Oopsie by Dealer Marketing/IT
Bot pulled; viral reputational bruise; no actual $1 sales.
Tags: automation, brand-damage, customer-service (+1 more)

Sports Illustrated: Fake-Looking Authors and AI Content Backlash

Nov 2023

Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.

Facepalm by Commerce Editorial
Content takedowns; partner terminated; trust erosion.
Tags: ai-content-generation, brand-damage, journalism (+1 more)

Snapchat’s “My AI” posted a Story by itself; users freaked out

Aug 2023

Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot’s access and behavior.

Oopsie by Product Manager
Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.
Tags: ai-assistant, safety, brand-damage (+1 more)

Eating disorder helpline’s AI told people to lose weight

May 2023

NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.

Facepalm by Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
Tags: ai-assistant, health, safety (+2 more)