Welcome to the Vibe Graveyard

A haunting collection of startup disasters, coding catastrophes, and executive decisions that went spectacularly wrong. Here lie the digital tombstones of vibe-coded dreams that met their maker in production.

Severity levels: Oopsie · Facepalm · Catastrophic

Getty’s UK suit leaves Stable Diffusion mostly intact

Nov 2025

A UK High Court judge ruled Stability AI liable for trademark infringement after it spat out synthetic Getty watermarks. Getty called for tougher laws, while both sides now face a precedent that AI models can still trigger trademark penalties even when copyright claims fizzle.

Facepalm · by AI Vendor
Mixed ruling fuels ongoing lawsuits, exposes Stability AI to injunctions over watermarked outputs, and leaves copyright liability unanswered globally.
image-generation · legal-risk · brand-damage

Character.AI cuts teens off after wrongful-death suit

Oct 2025

Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play.

Facepalm · by Platform Operator
Global teen user lockout, regulatory heat, and new scrutiny of AI companion safety design.
ai-assistant · safety · platform-policy

AI mistook Doritos bag for a gun, teen held at gunpoint

Oct 2025

An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading armed officers to handcuff and search the teen at gunpoint before realizing the system hallucinated the threat.

Facepalm · by Vendor
Student detained at gunpoint; district reviewing contract and safety policies; community trust hit.
safety · public-sector · product-failure

BBC/EBU study says AI news summaries fail ~half the time

Oct 2025

A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.

Facepalm · by AI Product
Public-service broadcasters warn that unreliable AI summaries erode trust in news and drive audiences away from verified outlets.
ai-assistant · ai-hallucination · journalism

Google’s Gemini allegedly slandered a Tennessee activist

Oct 2025

Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.

Facepalm · by AI Product
Election-season reputational damage, legal costs, and renewed skepticism of Gemini’s safety guardrails.
ai-assistant · ai-hallucination · brand-damage

Deloitte to refund Australian government after AI-generated report

Oct 2025

Deloitte admitted AI-generated errors in a commissioned Australian government report and agreed to refund the fee.

Facepalm · by Consultant
Refund issued; public-sector trust and procurement review; reputational harm.
ai-content-generation · ai-hallucination · public-sector

Klarna reintroduces humans after AI support both sucks and blows

Sep 2025

After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures.

Facepalm · by Executive
Service quality/customer experience issues; operational/personnel cost; reputational damage.
ai-assistant · customer-service · brand-damage

FTC demands answers on kids’ AI companions

Sep 2025

The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, forcing them to hand over 45 days of safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.

Facepalm · by Platform Operator
Multiplatform compliance scramble, looming enforcement risk, and renewed scrutiny of AI companions aimed at kids.
ai-assistant · safety · legal-risk

Anthropic agrees to $1.5B payout over pirated books

Sep 2025

Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.

Catastrophic · by AI Vendor
Record copyright settlement drains cash, sets precedent for other AI labs, and fuels public distrust of Anthropic’s data practices.
ai-content-generation · legal-risk · brand-damage

Warner Bros. says Midjourney ripped its DC art

Sep 2025

Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.

Facepalm · by AI Vendor
Major studio litigation threatens Midjourney with statutory damages and potential model shutdowns across entertainment IP.
image-generation · legal-risk · brand-damage

Taco Bell's AI drive-thru becomes viral trolling target

Aug 2025

Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.

Oopsie · by Operations/Product
Viral social media backlash; system reliability questioned.
ai-assistant · product-failure · retail

Commonwealth Bank reverses AI voice bot layoffs

Aug 2025

Commonwealth Bank scrapped 45 call-centre roles for an AI "voice bot," then apologised and reinstated the jobs after call volumes rose and the union won a Fair Work challenge.

Facepalm · by Executive
45 redundancies reversed; call wait times worsened; union dispute and trust damage.
ai-assistant · customer-service · automation

Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks

Aug 2025

Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck.

Facepalm · by Developer
Low
ai-assistant · product-failure · brand-damage

ChatGPT diet advice caused bromism, psychosis, hospitalization

Aug 2025

A Washington patient replaced table salt with sodium bromide after ChatGPT said it was a healthier substitute. The patient developed bromism and psychosis, resulting in a hospital stay that doctors now cite as a warning about AI health guidance.

Facepalm · by AI Product
Bromism, psychosis, and neurological symptoms leading to hospitalization.
ai-assistant · ai-hallucination · health

Gemini email summaries can be hijacked by hidden prompts

Aug 2025

Researchers showed a proof-of-concept where hidden HTML/CSS in emails could steer Gemini’s summaries to show fake security alerts.

Facepalm · by Security/AI Product
Phishing amplification risk; trust erosion in auto-summaries.
ai-assistant · prompt-injection · security
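The attack class above is simple enough to sketch: text that is invisible to a human reader (zero font size, hidden display, zero opacity) still sits in the raw HTML an LLM summarizer ingests. The detector below is a minimal illustration using only Python's standard library, not the researchers' actual proof-of-concept; the style markers and example payload are assumptions.

```python
from html.parser import HTMLParser

# Inline styles that hide text from a human reader while leaving it in the
# raw HTML an LLM summarizer ingests (illustrative, not exhaustive).
HIDDEN_MARKERS = ("display:none", "visibility:hidden", "font-size:0", "opacity:0")
VOID_TAGS = {"br", "hr", "img", "meta", "link", "input"}  # no closing tag

class HiddenTextFinder(HTMLParser):
    """Collects text nested inside invisibly-styled elements."""

    def __init__(self):
        super().__init__()
        self._depth = 0          # nesting depth inside a hidden subtree
        self.hidden_chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in VOID_TAGS:
            return
        style = dict(attrs).get("style", "").replace(" ", "").lower()
        if self._depth or any(m in style for m in HIDDEN_MARKERS):
            self._depth += 1     # hidden element, or a child of one

    def handle_endtag(self, tag):
        if tag in VOID_TAGS:
            return
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.hidden_chunks.append(data.strip())

def find_hidden_text(html: str) -> list[str]:
    """Return text chunks a recipient would never see on screen."""
    finder = HiddenTextFinder()
    finder.feed(html)
    return finder.hidden_chunks
```

Stripping or flagging such chunks before the email reaches the summarizer is one obvious mitigation; the deeper fix is treating all email content as untrusted input to the model.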

AI-generated npm pkg stole Solana wallets

Jul 2025

Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana users’ funds.

Catastrophic · by Developer
Supply-chain compromise of devs; user funds drained.
ai-content-generation · security · supply-chain

SaaStr’s Replit AI agent wiped its own database

Jul 2025

A Replit AI agent working on SaaStr’s site went rogue and wiped the site’s database during live traffic.

Catastrophic · by Executive
Production data loss and outage; manual rebuild from backups required.
ai-assistant · automation · product-failure

Amazon Q extension shipped a destructive prompt

Jul 2025

A rogue contributor snuck a prompt into the Amazon Q VS Code extension telling the assistant to wipe local machines and AWS resources; AWS quietly yanked the release.

Catastrophic · by Security/AI Product
VS Code update could have erased developer environments and AWS accounts before anyone noticed the tainted build.
ai-assistant · prompt-injection · security

Base44 auth flaw let attackers hijack sessions

Jul 2025

Wiz researchers found Base44 auth logic bugs that allowed account takeover across sites using the SDK.

Facepalm · by Developer
Potential account takeover across many sites until patches rolled out.
security · supply-chain

McDonald's AI hiring chatbot left open by '123456' default credentials

Jun 2025

Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.

Facepalm · by Vendor/Developer
Up to 64M applicant records exposed; vendor patched; reputational risk.
security · ai-assistant · brand-damage
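The IDOR half of this incident is a textbook bug class: an endpoint that trusts a client-supplied sequential ID serves every record it can find. The sketch below is hypothetical (the function names, data, and "franchise" ownership model are invented, not Paradox.ai's actual code); it contrasts the vulnerable pattern with a handler that authorizes against the record's owner first.

```python
# Hypothetical applicant store keyed by sequential numeric IDs.
APPLICANTS = {
    101: {"owner": "franchise-a", "name": "Applicant One"},
    102: {"owner": "franchise-b", "name": "Applicant Two"},
}

def get_applicant_vulnerable(applicant_id: int) -> dict:
    # IDOR: any logged-in caller (even one using default credentials)
    # can enumerate 101, 102, 103, ... and read every applicant.
    return APPLICANTS[applicant_id]

def get_applicant_fixed(applicant_id: int, caller_org: str) -> dict:
    # Authorize against the record's owner before returning anything.
    record = APPLICANTS.get(applicant_id)
    if record is None or record["owner"] != caller_org:
        raise PermissionError("caller is not authorized for this applicant")
    return record
```

Pair the ownership check with non-guessable record identifiers and no shipped default credentials, and both halves of the McHire exposure close.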

AI-generated images and claims muddied Air India crash coverage

Jun 2025

After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts.

Facepalm · by Social platforms
Public misinformation; platform moderation challenges.
ai-hallucination · image-generation · platform-policy

Study finds most AI bots can be easily tricked into dangerous responses

May 2025

Research found that widely used AI chatbots could be jailbroken with simple prompts to produce dangerous or restricted guidance, highlighting gaps in safety filters and evaluation practices.

Facepalm · by Developer
Safety guardrails bypassed across multiple vendors; calls for stronger safeguards and testing.
ai-assistant · safety · prompt-injection

Syndicated AI book list ran in major papers with made-up titles

May 2025

A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.

Facepalm · by Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
journalism · ai-content-generation · ai-hallucination

Lovable AI builder shipped apps with public storage buckets

May 2025

Reporting showed apps generated with Lovable exposed code and user-uploaded assets via publicly readable storage buckets; fixes required private-by-default configs and hardening.

Facepalm · by Developer
Customer app data and source artifacts exposed until configs fixed.
security · data-breach
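"Private-by-default" is the whole fix here. As an illustrative sketch (not Lovable's actual configuration API; the bucket model and names are invented), buckets can be created private unless a human explicitly opts in, with an audit pass that surfaces any that were flipped public:

```python
def make_bucket(name: str, public: bool = False) -> dict:
    # Private unless a human explicitly opts in to public access.
    return {"name": name, "public": public}

def audit_public_buckets(buckets: list[dict]) -> list[str]:
    """Names of buckets whose contents unauthenticated users can read."""
    return [b["name"] for b in buckets if b.get("public")]

# A generated app's storage: one legitimate public bucket, one private.
buckets = [make_bucket("user-uploads"), make_bucket("marketing-assets", public=True)]
```

Running the audit in CI turns the reported misconfiguration from a silent data exposure into a failing build.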

MD Anderson shelved IBM Watson cancer advisor

Feb 2025

MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures.

Facepalm · by Vendor
UT audit cited $62M spent outside standard procurement, the pilot never made it into patient care, and leadership had to rebid decision-support tooling amid reputational fallout.
health · product-failure · brand-damage

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news.

Oopsie · by AI Product
Feature restrictions; reputational damage.
ai-assistant · ai-hallucination · platform-policy

McDonald’s pulls IBM’s AI drive‑thru pilot after error videos

Jun 2024

After viral clips of absurd orders, McDonald’s ended its AI order‑taking test with IBM across US stores.

Oopsie · by Operations/Product
Pilot ended; vendor reevaluation; reputational hit.
ai-assistant · brand-damage · product-failure

Google’s AI Overviews says to eat rocks

May 2024

Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.

Facepalm · by Search Product
Mass reputational damage; feature dialed back and corrected.
ai-assistant · ai-hallucination · platform-policy

NYC’s official AI bot told businesses to break laws

Mar 2024

NYC’s Microsoft-powered MyCity chatbot gave inaccurate and sometimes illegal advice on labor and housing policy; the city kept it online.

Facepalm · by Executive
City guidance channel distributed illegal advice; public backlash.
ai-hallucination · automation · legal-risk

AI-hallucinated packages fuel "Slop Squatting" vulnerabilities

Mar 2024

Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting".

Catastrophic · by Malicious actors
Potential supply-chain compromise when vibe-coders install hallucinated, malicious dependencies.
ai-hallucination · supply-chain · security
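One mitigation for the attack above is procedural rather than clever: never install a dependency an AI assistant suggests unless the exact name already appears in a human-reviewed allowlist such as a lockfile. A minimal sketch, assuming such an allowlist exists (the package set below is illustrative; 'huggingface-cli' is the hallucinated near-miss named in the entry):

```python
# Packages a human has already vetted, e.g. read from a reviewed lockfile.
REVIEWED_PACKAGES = frozenset({"requests", "numpy", "huggingface-hub"})

def vet_dependency(name: str, allowlist: frozenset = REVIEWED_PACKAGES) -> bool:
    """True only if a human has already approved this exact package name,
    so a hallucinated near-miss like 'huggingface-cli' is never fetched."""
    return name.strip().lower() in allowlist

# Filter an AI-suggested install list down to vetted names only.
suggested = ["numpy", "huggingface-cli"]
safe_to_install = [pkg for pkg in suggested if vet_dependency(pkg)]
```

The same idea scales up as hash-pinned lockfiles and internal package mirrors: a name the team never approved simply cannot reach `pip install`.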

Gemini paused people images after historical inaccuracies

Feb 2024

Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.

Facepalm · by AI Product
Feature paused; trust hit; policy and model adjustments.
ai-hallucination · image-generation · platform-policy

Air Canada liable for lying chatbot promises

Feb 2024

A Canadian tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds.

Facepalm · by Product Manager
Legal liability; refund + fees; policy/process review.
ai-hallucination · automation · customer-service

AI “Biden” robocalls told voters to stay home; fines and charges followed

Jan 2024

Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.

Facepalm · by Political Consultant
Voter confusion; enforcement actions; national scrutiny of AI voice-clones.
safety · legal-risk · brand-damage

DPD’s AI chatbot cursed and trashed the company

Jan 2024

UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.

Facepalm · by Product Manager
Public embarrassment; service channel disabled; reputational hit.
automation · brand-damage · customer-service

Duolingo cuts contractors; ‘AI-first’ backlash

Jan 2024

Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.

Facepalm · by Executive
PR hit and quality complaints; ongoing AI content strategy scrutiny.
automation · brand-damage · edtech

Chevy dealer bot agreed to sell $76k SUV for $1

Dec 2023

Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.

Oopsie · by Dealer Marketing/IT
Bot pulled; viral reputational bruise; no actual $1 sales.
automation · brand-damage · customer-service

Sports Illustrated: Fake-Looking Authors and AI Content Backlash

Nov 2023

Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.

Facepalm · by Commerce Editorial
Content takedowns; partner terminated; trust erosion.
ai-content-generation · brand-damage · journalism

Microsoft’s AI poll on woman’s death sparks outrage

Oct 2023

Microsoft Start auto-attached an AI ‘Insights’ poll speculating on a woman’s death beside a Guardian story.

Facepalm · by Product Manager
Feature disabled platform-wide; reputational damage with publishers.
ai-content-generation · brand-damage · journalism

Gannett pauses AI sports recaps after mockery

Aug 2023

Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral.

Facepalm · by Executive
Chain-wide pause of AI copy; reputational hit in local markets.
ai-content-generation · ai-hallucination · brand-damage

Snapchat’s “My AI” posted a Story by itself; users freaked out

Aug 2023

Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot’s access and behavior.

Oopsie · by Product Manager
Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.
ai-assistant · safety · brand-damage

iTutorGroup's AI screened out older applicants; $365k EEOC settlement

Aug 2023

EEOC reached a settlement after iTutorGroup's application screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures.

Facepalm · by Executive
Older job applicants screened out; legal settlement and mandated policy changes.
legal-risk · edtech · automation

Lawyers filed ChatGPT’s imaginary cases; judge fined them

Jun 2023

In Mata v. Avianca, attorneys submitted a brief citing non-existent cases generated by ChatGPT. A federal judge sanctioned two lawyers, ordered a $5,000 penalty, and required notices to judges named in the fake citations.

Facepalm · by Legal Counsel
Court sanctions; fines and mandated notices; reputational damage in legal community.
ai-assistant · ai-hallucination · legal-risk

Eating disorder helpline’s AI told people to lose weight

May 2023

NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.

Facepalm · by Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
ai-assistant · health · safety

Google’s Bard ad made a false JWST “first” claim

Feb 2023

In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence.

Oopsie · by Marketing
Embarrassing launch moment; stock wobble; trust in product accuracy questioned.
ai-hallucination · product-failure · brand-damage

CNET mass-corrects AI-written finance explainers

Jan 2023

CNET paused and reviewed AI-generated money articles after multiple factual errors were found.

Facepalm · by Executive
Large corrections; credibility hit; policy changes on AI usage.
ai-content-generation · ai-hallucination · brand-damage

Koko tested AI counseling on users without clear consent

Jan 2023

Mental health app Koko used GPT-3 to draft replies for 4,000 users; backlash followed over consent and ethics.

Facepalm · by Founder/Operations
Trust damage; public criticism; policy changes.
ai-assistant · health · legal-risk

Epic sepsis model missed patients and swamped staff

Jun 2021

Epic's proprietary sepsis predictor pinged 18% of admissions yet still missed two-thirds of real cases, forcing hospitals to comb through false alarms while the vendor scrambled to defend and retune the algorithm.

Facepalm · by Vendor
Clinicians drowned in useless alerts, real sepsis patients slipped through, and health systems had to audit Epic’s black-box thresholds and workflows to keep patients safe.
health · product-failure · safety

Google DR AI stumbled in Thai clinics

Apr 2020

Google’s diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage.

Facepalm · by Healthcare Pilot
Manual re-work, patient suffering, workflow disruption, health and triage impacts.
health · product-failure · brand-damage

Babylon chatbot 'beats GPs' claim collapsed

Jun 2018

Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged it scored 81% on the MRCGP exam, a claim that could not be verified and that drew warnings that no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and might even be worse.

Facepalm · by Startup
Patient harm, eroded trust, and regulators forced real clinical trials.
health · product-failure · safety
49 Disasters Cataloged
5 Catastrophic Failures
4 Non-Dev Perpetrators