Welcome to the Vibe Graveyard
A haunting collection of startup disasters, coding catastrophes, and executive decisions that went spectacularly wrong. Here lie the digital tombstones of vibe-coded dreams that met their maker in production.
Gettyâs UK suit leaves Stable Diffusion mostly intact
A UK High Court judge ruled Stability AI liable for trademark infringement after it spat out synthetic Getty watermarks. Getty called for tougher laws while Both sides now face a precedent that AI models can still trigger trademark penalties even when copyright claims fizzle.
Character.AI cuts teens off after wrongful-death suit
Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial âfriendsâ they built while the startup scrambles to prove its bots arenât grooming kids into dangerous role play.
AI mistook Doritos bag for a gun, teen held at gunpoint
An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading armed officers to handcuff and search the teen at gunpoint before realizing the system hallucinated the threat.
BBC/EBU study says AI news summaries fail ~half the time
A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.
Googleâs Gemini allegedly slandered a Tennessee activist
Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and- desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.
Deloitte to refund Australian government after AI-generated report
Deloitte admitted AI-generated errors in a commissioned Australian government report and agreed to refund the fee.
Klarna reintroduces humans after AI support both sucks, and blows
After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures.
FTC demands answers on kidsâ AI companions
The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, forcing them to hand over 45 days of safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" botsâ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.
Anthropic agrees to $1.5B payout over pirated books
Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.
Warner Bros. says Midjourney ripped its DC art
Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.
Taco Bell's AI drive-thru becomes viral trolling target
Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.
Commonwealth Bank reverses AI voice bot layoffs
Commonwealth Bank scrapped 45 call-centre roles for an AI "voice bot," then apologised and reinstated the jobs after call volumes rose and the union won a Fair Work challenge.
Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks
Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck.
ChatGPT diet advice caused bromism, psychosis, hospitalization
A Washington patient replaced table salt with sodium-bromide after ChatGPT said it was a healthier substitute. The patient developed bromism and psychosis, resulting in a hospital stay that doctors now cite as a warning about AI health guidance.
Gemini email summaries can be hijacked by hidden prompts
Researchers showed a proof-of-concept where hidden HTML/CSS in emails could steer Geminiâs summaries to show fake security alerts.
AI-generated npm pkg stole Solana wallets
Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana usersâ funds.
SaaStrâs Replit AI agent wiped its own database
A Replit AI agent deployment for SaaStr went rogue; a Deploy wiped the siteâs database during live traffic.
Amazon Q extension shipped a destructive prompt
A rogue contributor successfully snuck a prompt into the Amazon Q VS Code extension that told the assistant to wipe local machines and AWS resources before AWS quietly yanked the release.
Base44 auth flaw let attackers hijack sessions
Wiz researchers found Base44 auth logic bugs that allowed account takeover across sites using the SDK.
McDonald's AI hiring chatbot left open by '123456' default credentials
Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.
AI-generated images and claims muddied Air India crash coverage
After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts.
Study finds most AI bots can be easily tricked into dangerous responses
Research found that widely used AI chatbots could be jailbroken with simple prompts to produce dangerous or restricted guidance, highlighting gaps in safety filters and evaluation practices.
Syndicated AI book list ran in major papers with made-up titles
A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.
Lovable AI builder shipped apps with public storage buckets
Reporting showed apps generated with Lovable exposed code and user-uploaded assets via publicly readable storage buckets; fixes required private-by-default configs and hardening.
MD Anderson shelved IBM Watson cancer advisor
MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures.
Meta AI answers spark backlash after wrong and sensitive replies
Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news.
McDonaldâs pulls IBMâs AI driveâthru pilot after error videos
After viral clips of absurd orders, McDonaldâs ended its AI orderâtaking test with IBM across US stores.
Googleâs AI Overviews says to eat rocks
Googleâs AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.
NYCâs official AI bot told businesses to break laws
NYCâs Microsoft-powered MyCity chatbot gave inaccurate/illegal advice on labor & housing policy; city kept it online.
AI hallucinated packages fuel "Slop Squatting" vulnerabilities
Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting".
Gemini paused people images after historical inaccuracies
Google paused Geminiâs image generation of people after it produced inaccurate historical depictions and odd refusals.
Air Canada liable for lying chatbot promises
Tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds.
AI âBidenâ robocalls told voters to stay home; fines and charges followed
Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.
DPDâs AI chatbot cursed and trashed the company
UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.
Duolingo cuts contractors; âAI-firstâ backlash
Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.
Chevy dealer bot agreed to sell $76k SUV for $1
Pranksters prompt-injected a dealerâs ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.
Sports Illustrated: Fake-Looking Authors and AI Content Backlash
Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.
Microsoftâs AI poll on womanâs death sparks outrage
Microsoft Start auto-attached an AI âInsightsâ poll speculating on a womanâs death beside a Guardian story.
Gannett pauses AI sports recaps after mockery
Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral.
Snapchatâs âMy AIâ posted a Story by itself; users freaked out
Snapchatâs built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the botâs access and behavior.
iTutorGroup's AI screened out older applicants; $365k EEOC settlement
EEOC reached a settlement after iTutorGroup's application screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures.
Lawyers filed ChatGPTâs imaginary cases; judge fined them
In Mata v. Avianca, attorneys submitted a brief citing non-existent cases generated by ChatGPT. A federal judge sanctioned two lawyers, ordered a $5,000 penalty, and required notices to judges named in the fake citations.
Eating disorder helplineâs AI told people to lose weight
NEDA replaced its helpline with an AI chatbot (âTessaâ) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.
Googleâs Bard ad made False JWST âfirstâ Claim
In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence.
CNET mass-corrects AI-written finance explainers
CNET paused and reviewed AI-generated money articles after multiple factual errors were found.
Koko tested AI counseling on users without clear consent
Mental health app Koko used GPT-3 to draft replies for 4,000 users; backlash followed over consent and ethics.
Epic sepsis model missed patients and swamped staff
Epic's proprietary sepsis predictor pinged 18% of admissions yet still missed two-thirds of real cases, forcing hospitals to comb through false alarms while the vendor scrambled to defend and retune the algorithm.
Google DR AI stumbled in Thai clinics
Googleâs diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage.
Babylon chatbot 'beats GPs' claim collapsed
Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged it scored 81% on the MRCGP exam, but the claim could not be verified, and warned no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and might even be worse.