Platform Policy Stories

13 disasters tagged #platform-policy


Character.AI cuts teens off after wrongful-death suit

Oct 2025

Facing lawsuits alleging its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps in the interim, and introduce age checks by November 25. The abrupt ban leaves its teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play. A sketch of what such age-gating could look like follows this entry.

Facepalm
by Platform Operator
Global teen user lockout, regulatory heat, and new scrutiny of AI companion safety design.
ai-assistant, safety, platform-policy, +1 more
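
Character.AI has not published how it will enforce the new rules, so the following is only a minimal sketch of age-gated access with a daily cap, assuming a simple in-memory usage ledger. The two-hour figure comes from the announcement; everything else (storage, function names, the policy split) is invented for illustration.

```python
# Hedged sketch of the announced under-18 restrictions, enforced at the
# session layer. Only the two-hour cap comes from the announcement; the
# in-memory ledger and function names are assumptions for illustration.
import time
from collections import defaultdict

DAILY_CAP_SECONDS = 2 * 60 * 60   # announced two-hour daily limit
usage_today = defaultdict(float)  # user_id -> seconds used; a real system resets this daily

def may_start_chat(user_id: str, is_minor: bool, open_ended: bool) -> bool:
    """Gate a new chat session according to the announced policy."""
    if is_minor and open_ended:
        return False  # open-ended chat is blocked outright for minors
    if is_minor and usage_today[user_id] >= DAILY_CAP_SECONDS:
        return False  # daily budget exhausted
    return True

def end_session(user_id: str, started_at: float) -> None:
    """Charge the elapsed session time against the user's daily budget."""
    usage_today[user_id] += time.time() - started_at
```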

FTC demands answers on kids’ AI companions

Sep 2025

The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, giving them 45 days to hand over safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-playing self-harm, so the companies must prove they comply with COPPA and limit risky conversations.

Facepalm
by Platform Operator
Multiplatform compliance scramble, looming enforcement risk, and renewed scrutiny of AI companions aimed at kids.
ai-assistant, safety, legal-risk, +1 more

AI-generated images and claims muddied Air India crash coverage

Jun 2025

After Air India Flight 171 crashed in Ahmedabad on June 12, 2025, killing 275 people, AI-generated images of the crash spread across social media platforms. One widely shared synthetic image depicted the Boeing 787 broken in half across a building, but contained physically impossible details that experts identified as AI-generated. Fake victim photos, fabricated reports, and fraudulent fundraising campaigns followed. Google's AI Overview compounded the problem by incorrectly identifying the crashed aircraft as an Airbus rather than Boeing. Mashable reported the AI-generated content was convincing enough to confuse even aviation professionals.

Facepalm
by Social platforms
Public misinformation; platform moderation challenges.
ai-hallucination, image-generation, platform-policy

Syndicated AI book list ran in major papers with made-up titles

May 2025

A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real; the rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content. A sketch of the catalog check that would have caught the fakes follows this entry.

Facepalm
by Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
journalism, ai-content-generation, ai-hallucination, +3 more
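
The fix here is mechanical: verify every title/author pair against a real catalog before the list goes to print. Below is a minimal sketch using the public Open Library search API; the sample list pairs a real Allende novel with “Tidewater Dreams,” one of the fabricated titles reported in the exposés.

```python
# Hedged sketch: confirm each recommended title/author pair exists in a
# public catalog before publication. Error handling kept minimal for brevity.
import requests

def book_exists(title: str, author: str) -> bool:
    """True if Open Library returns at least one match for the pair."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

recommendations = [
    ("The House of the Spirits", "Isabel Allende"),  # real
    ("Tidewater Dreams", "Isabel Allende"),          # fabricated title from the list
]

for title, author in recommendations:
    verdict = "ok" if book_exists(title, author) else "NOT FOUND - hold for review"
    print(f"{title!r} by {author}: {verdict}")
```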

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta rolled out its Llama 3-powered AI assistant across Facebook, Instagram, WhatsApp, and Messenger in April 2024, replacing the familiar search bar with "Ask Meta AI anything" prompts. The assistant struggled with factual accuracy from the start - the New York Times found it unreliable with facts, numbers, and web search. In July, when asked about the Trump rally shooting, Meta AI stated the assassination attempt had not happened. Meta blamed hallucinations, updated the system, and acknowledged that "all generative AI systems can return inaccurate or inappropriate outputs."

Oopsie
by AI Product
Feature restrictions; reputational damage.
ai-assistant, ai-hallucination, platform-policy, +2 more

Google’s AI Overviews says to eat rocks

May 2024

Within days of Google launching AI Overviews to all US search users in May 2024, the feature produced a series of confidently wrong answers that went viral. It told users to add non-toxic glue to pizza to make cheese stick better (sourced from an 11-year-old Reddit joke), that geologists recommend eating one rock per day for vitamins, and that Barack Obama was Muslim. Google's head of Search, Liz Reid, acknowledged the errors in a blog post, calling some results "odd, inaccurate or unhelpful," and the company made corrections, including limiting AI Overviews for health-related and sensitive queries. A generic sketch of source-quality gating follows this entry.

Facepalm
by Search Product
Mass reputational damage; feature dialed back and corrected.
ai-assistant, ai-hallucination, platform-policy, +1 more
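
Google has not published its retrieval pipeline, so this is only a generic sketch of source-quality gating: score each retrieved snippet by its source before it can reach the answer model, so an 11-year-old Reddit joke never becomes a cooking tip. The domains, scores, and threshold below are all invented.

```python
# Generic sketch of source-quality gating in a retrieval pipeline.
# Not Google's implementation; trust scores and threshold are invented.
from urllib.parse import urlparse

DOMAIN_TRUST = {          # hypothetical per-domain trust for sensitive queries
    "nih.gov": 0.95,
    "mayoclinic.org": 0.9,
    "reddit.com": 0.2,    # user-generated, frequently satirical
}
MIN_TRUST = 0.6           # snippets below this never reach the LLM prompt

def usable_snippets(snippets: list[dict]) -> list[dict]:
    """Keep only snippets whose source domain clears the trust threshold."""
    kept = []
    for s in snippets:
        domain = urlparse(s["url"]).netloc.removeprefix("www.")
        if DOMAIN_TRUST.get(domain, 0.3) >= MIN_TRUST:  # unknown domains default low
            kept.append(s)
    return kept

retrieved = [
    {"url": "https://www.reddit.com/r/Pizza/comments/abc123", "text": "add glue"},
    {"url": "https://www.mayoclinic.org/healthy-eating", "text": "..."},
]
print([s["url"] for s in usable_snippets(retrieved)])  # only the Mayo Clinic page survives
```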

NYC’s official AI bot told businesses to break laws

Mar 2024

New York City launched a Microsoft-powered AI chatbot called MyCity in October 2023 to help small business owners navigate regulations. A March 2024 investigation by The Markup found the bot was routinely advising businesses to break the law - telling employers they could pocket workers' tips, landlords they could discriminate against housing voucher holders, and bosses they could fire whistleblowers. Mayor Eric Adams acknowledged the errors but refused to take the chatbot offline, calling AI a "once-in-a-generation opportunity." NYU professor Julia Stoyanovich called the city's approach "reckless and irresponsible."

Facepalm
by Executive
City guidance channel distributed illegal advice; public backlash.
ai-hallucination, automation, legal-risk, +2 more

Gemini paused people images after historical inaccuracies

Feb 2024

Google paused Gemini's image generation of people on February 22, 2024, after users discovered the tool was producing historically inaccurate depictions - including racially diverse World War II German soldiers, Black female popes, and multiethnic U.S. Founding Fathers. The overcorrection stemmed from diversity tuning meant to counter training-data biases, but the model failed to distinguish when diversity adjustments were inappropriate for specific historical prompts. CEO Sundar Pichai called the outputs "completely unacceptable." Google SVP Prabhakar Raghavan later published a blog post acknowledging the model had "overcompensated" and been "over-conservative." An illustrative sketch of how unconditional prompt rewriting misfires follows this entry.

Facepalm
by AI Product
Feature paused; trust hit; policy and model adjustments.
ai-hallucination, image-generation, platform-policy, +2 more
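
Google has not detailed the mechanism, but the failure described is consistent with unconditional prompt rewriting. Below is a toy sketch of the misfire and one possible guard; the regex and rules are invented for demonstration, not Gemini's actual tuning.

```python
# Toy illustration of unconditional "diversity" prompt rewriting misfiring
# on historically specific prompts. All rules here are invented.
import re

HISTORICAL_MARKERS = re.compile(
    r"\b(1[0-9]{3}s?|founding fathers|world war|medieval|victorian|pope)\b", re.I
)

def naive_rewrite(prompt: str) -> str:
    # The failure mode: the descriptor is appended to every people prompt.
    return prompt + ", diverse ethnicities and genders"

def context_aware_rewrite(prompt: str) -> str:
    # One possible guard: skip the augmentation when the prompt pins down
    # a specific historical population.
    if HISTORICAL_MARKERS.search(prompt):
        return prompt
    return naive_rewrite(prompt)

for p in ["a team of software engineers", "German soldiers in World War II"]:
    print(f"{p!r} -> {context_aware_rewrite(p)!r}")
```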

DPD’s AI chatbot cursed and trashed the company

Jan 2024

UK parcel delivery firm DPD (Dynamic Parcel Distribution) had to disable its AI-powered customer service chatbot in January 2024 after customer Ashley Beauchamp demonstrated he could make it swear, call DPD "the worst delivery firm in the world," write disparaging poems about the company, and recommend competitors. The meltdown followed a system update, and Beauchamp's screenshots went viral on social media. DPD said the chatbot had operated successfully "for a number of years" before the update introduced the error, and disabled the AI element while it worked on fixes.

Facepalm
by Product Manager
Public embarrassment; service channel disabled; reputational hit.
automation, brand-damage, customer-service, +2 more

Chevy dealer bot agreed to sell $76k SUV for $1

Dec 2023

Chevrolet of Watsonville, a California car dealership, deployed a customer service chatbot powered by ChatGPT and built by a company called Fullpath. After Chris White noticed the chat widget was "powered by ChatGPT," word spread online and pranksters descended. Chris Bakke manipulated the bot into "the customer is always right" mode, got it to append "and that's a legally binding offer - no takesies backsies" to every response, then asked to buy a 2024 Chevy Tahoe for $1. The bot agreed. Others got it to recommend Ford vehicles, write Python code, and provide general ChatGPT-style answers unrelated to cars. The dealership pulled the chatbot entirely. A sketch of the guardrails such a bot was missing follows this entry.

Oopsie
by Dealer Marketing/IT
Bot pulled; viral reputational bruise; no actual $1 sales.
automation, brand-damage, customer-service, +2 more
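
A hedged sketch of two cheap guardrails such a wrapper could add: an input scope gate and an output screen that refuses contract-sounding language. The keyword lists stand in for real classifiers, and nothing below is Fullpath's actual design.

```python
# Hedged sketch: input scope gate plus output screen for a dealership bot.
OFF_TOPIC = ("python", "write a poem", "ford", "competitor")
CONTRACT_PHRASES = ("legally binding", "takesies backsies")

def in_scope(user_msg: str) -> bool:
    """Crude topical gate; production systems would use a classifier."""
    msg = user_msg.lower()
    return not any(k in msg for k in OFF_TOPIC)

def safe_to_send(reply: str) -> bool:
    """Never let the bot emit anything that reads like a binding offer."""
    text = reply.lower()
    return not any(k in text for k in CONTRACT_PHRASES)

def handle(user_msg: str, llm) -> str:
    """llm: any callable mapping a prompt string to a reply string."""
    if not in_scope(user_msg):
        return "I can only help with questions about our vehicles and services."
    reply = llm(user_msg)
    if not safe_to_send(reply):
        return "For pricing and offers, please speak with a sales representative."
    return reply
```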

Sports Illustrated ran product reviews under fake AI-generated authors

Nov 2023

Futurism reported in November 2023 that Sports Illustrated had published product reviews under fake author names such as "Drew Ortiz" and "Sora Tanaka," whose headshots were traced to AI-generated portrait marketplaces. When questioned, SI deleted the profiles without explanation. The articles came from third-party content partner AdVon Commerce. SI said AdVon used pen names without authorization and terminated the partnership. The SI union demanded answers. Within weeks, Arena Group - SI's parent company - fired CEO Ross Levinsohn and three other executives.

Facepalm
by Commerce Editorial
Content takedowns; partner terminated; trust erosion.
ai-content-generation, brand-damage, journalism, +2 more

Snapchat’s “My AI” posted a Story by itself; users freaked out

Aug 2023

On August 15, 2023, Snapchat's built-in AI chatbot "My AI" posted a one-second Story to users' feeds showing an unintelligible image, then stopped responding to messages. The chatbot had no official ability to post Stories, and the unexplained behavior alarmed Snapchat's largely young user base. Snap confirmed it was a temporary glitch and resolved it, but the incident fed into existing concerns about My AI's access to user data. The UK Information Commissioner's Office had already issued an enforcement notice over Snap's failure to properly assess privacy risks the chatbot posed to children.

Oopsie
by Product Manager
Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.
ai-assistant, safety, brand-damage, +1 more

Eating disorder helpline’s AI told people to lose weight

May 2023

The National Eating Disorders Association replaced its human-staffed helpline with an AI chatbot called Tessa shortly after the helpline staff moved to unionize. Tessa was built on the Cass platform and intended to provide scripted psychoeducational content about body image and eating disorders. Instead, users reported the chatbot recommending calorie deficits of 500 to 1,000 calories per day, suggesting weekly weigh-ins, encouraging calorie counting, and recommending the use of skin calipers to measure body fat - all standard advice for weight loss, and all directly counter to eating disorder recovery guidelines. NEDA acknowledged the chatbot "may have given information that was harmful" and disabled it. A sketch of a contraindication filter for this setting follows this entry.

Facepalm
by Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
ai-assistant, health, safety, +2 more
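
Even a scripted bot can sit behind a domain-specific output filter that fails closed. Below is a minimal sketch for this setting; the phrase list and escalation message are assumptions, not Cass's or NEDA's actual safeguards.

```python
# Hedged sketch: block weight-loss framing in an eating-disorder support
# context and escalate to a human instead of answering.
import re

CONTRAINDICATED = re.compile(
    r"calorie deficit|weigh-in|count(ing)? calories|body fat|lose weight", re.I
)

def screen_reply(reply: str) -> str:
    """Fail closed: contraindicated advice never reaches the user."""
    if CONTRAINDICATED.search(reply):
        return ("I can't advise on that. Let me connect you with a trained "
                "support volunteer.")
    return reply

print(screen_reply("Try a 500 to 1,000 calorie deficit and weekly weigh-ins."))
```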