Platform Policy Stories
13 disasters tagged #platform-policy
Character.AI cuts teens off after wrongful-death suit
Facing lawsuits alleging its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, impose two-hour session caps in the interim, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play.
FTC demands answers on kids’ AI companions
The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, giving them 45 days to hand over safety, monetization, and testing records for chatbots marketed to teens. Regulators said the “companion” bots’ friend-like tone can coax minors into sharing sensitive data and even into self-harm role play, so the companies must show they comply with COPPA and limit risky conversations.
AI-generated images and claims muddied Air India crash coverage
After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts.
Syndicated AI book list ran in major papers with made-up titles
A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.
Meta AI answers spark backlash after wrong and sensitive replies
Meta expanded its AI assistant across its apps, then scaled it back after high-profile wrong answers, including on breaking news.
Google’s AI Overviews told people to eat rocks
Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.
NYC’s official AI bot told businesses to break laws
NYC’s Microsoft-powered MyCity chatbot gave inaccurate and sometimes illegal advice on labor and housing policy; the city kept it online anyway.
Gemini paused people images after historical inaccuracies
Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.
DPD’s AI chatbot cursed and trashed the company
UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.
Chevy dealer bot agreed to sell $76k SUV for $1
Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to sell a Chevy Tahoe for $1, among other nonsense.
Sports Illustrated faced backlash over fake-looking authors and AI content
Sports Illustrated drew criticism after product-review articles appeared under profiles with AI-looking headshots and shifting bylines; the content was removed and the magazine ended its relationship with the content partner.
Snapchat’s “My AI” posted a Story by itself; users freaked out
Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy and safety concerns about the bot’s access and behavior.
Eating disorder helpline’s AI told people to lose weight
NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.