Welcome to the Vibe Graveyard

A haunting collection of startup disasters, coding catastrophes, and executive decisions that went spectacularly wrong. Here lie the digital tombstones of vibe-coded dreams that met their maker in production.

Oopsie
Facepalm
Catastrophic
Tombstone icon

Taco Bell's AI drive-thru becomes viral trolling target

Aug 2025

Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.

Oopsieby Operations/Product
Viral social media backlash; system reliability questioned.
ai-assistantproduct-failureretail+1 more
Tombstone icon

Gemini email summaries can be hijacked by hidden prompts

Aug 2025

Researchers showed a proof-of-concept where hidden HTML/CSS in emails could steer Gemini’s summaries to show fake security alerts.

Facepalmby Security/AI Product
Phishing amplification risk; trust erosion in auto-summaries.
ai-assistantprompt-injectionsecurity
Tombstone icon

AI-generated npm pkg stole Solana wallets

Jul 2025

Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana users’ funds.

Catastrophicby Developer
Supply-chain compromise of devs; user funds drained.
ai-content-generationsecuritysupply-chain
Tombstone icon

SaaStr’s Replit AI agent wiped its own database

Jul 2025

A Replit AI agent deployment for SaaStr went rogue; a Deploy wiped the site’s database during live traffic.

Catastrophicby Executive
Production data loss and outage; manual rebuild from backups required.
ai-assistantautomationproduct-failure
Tombstone icon

Base44 auth flaw let attackers hijack sessions

Jul 2025

Wiz researchers found Base44 auth logic bugs that allowed account takeover across sites using the SDK.

Facepalmby Developer
Potential ATO across many sites until patches rolled out.
securitysupply-chain
Tombstone icon

McDonald's AI hiring chatbot left open by '123456' default credentials

Jun 2025

Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.

Facepalmby Vendor/Developer
Up to 64M applicant records exposed; vendor patched; reputational risk.
securityai-assistantbrand-damage+2 more
Tombstone icon

AI-generated images and claims muddied Air India crash coverage

Jun 2025

After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts.

Facepalmby Social platforms
Public misinformation; platform moderation challenges.
ai-hallucinationimage-generationplatform-policy
Tombstone icon

Syndicated AI book list ran in major papers with made-up titles

May 2025

A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.

Facepalmby Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
journalismai-content-generationai-hallucination+2 more
Tombstone icon

Lovable AI builder shipped apps with public storage buckets

May 2025

Reporting showed apps generated with Lovable exposed code and user-uploaded assets via publicly readable storage buckets; fixes required private-by-default configs and hardening.

Facepalmby Developer
Customer app data and source artifacts exposed until configs fixed.
securitydata-breach
Tombstone icon

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news.

Oopsieby AI Product
Feature restrictions; reputational damage.
ai-assistantai-hallucinationplatform-policy+2 more
Tombstone icon

McDonald’s pulls IBM’s AI drive‑thru pilot after error videos

Jun 2024

After viral clips of absurd orders, McDonald’s ended its AI order‑taking test with IBM across US stores.

Oopsieby Operations/Product
Pilot ended; vendor reevaluation; reputational hit.
ai-assistantbrand-damageproduct-failure+1 more
Tombstone icon

Google’s AI Overviews says to eat rocks

May 2024

Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.

Facepalmby Search Product
Mass reputational damage; feature dialed back and corrected.
ai-assistantai-hallucinationplatform-policy+1 more
Tombstone icon

NYC’s official AI bot told businesses to break laws

Mar 2024

NYC’s Microsoft-powered MyCity chatbot gave inaccurate/illegal advice on labor & housing policy; city kept it online.

Facepalmby Executive
City guidance channel distributed illegal advice; public backlash.
ai-hallucinationautomationlegal-risk+2 more
Tombstone icon

AI hallucinated packages fuel "Slop Squatting" vulnerabilities

Mar 2024

Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting".

Catastrophicby Malicious actors
Potential supply-chain compromise when vibe-coders install hallucinated, malicious dependencies.
ai-hallucinationsupply-chainsecurity
Tombstone icon

Gemini paused people images after historical inaccuracies

Feb 2024

Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.

Facepalmby AI Product
Feature paused; trust hit; policy and model adjustments.
ai-hallucinationimage-generationplatform-policy+2 more
Tombstone icon

Air Canada liable for lying chatbot promises

Feb 2024

Tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds.

Facepalmby Product Manager
Legal liability; refund + fees; policy/process review.
ai-hallucinationautomationcustomer-service+1 more
Tombstone icon

AI “Biden” robocalls told voters to stay home; fines and charges followed

Jan 2024

Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.

Facepalmby Political Consultant
Voter confusion; enforcement actions; national scrutiny of AI voice-clones.
safetylegal-riskbrand-damage
Tombstone icon

DPD’s AI chatbot cursed and trashed the company

Jan 2024

UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.

Facepalmby Product Manager
Public embarrassment; service channel disabled; reputational hit.
automationbrand-damagecustomer-service+1 more
Tombstone icon

Duolingo cuts contractors; ‘AI-first’ backlash

Jan 2024

Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.

Facepalmby Executive
PR hit and quality complaints; ongoing AI content strategy scrutiny.
automationbrand-damageedtech
Tombstone icon

Chevy dealer bot agreed to sell $76k SUV for $1

Dec 2023

Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.

Oopsieby Dealer Marketing/IT
Bot pulled; viral reputational bruise; no actual $1 sales.
automationbrand-damagecustomer-service+1 more
Tombstone icon

Sports Illustrated: Fake-Looking Authors and AI Content Backlash

Nov 2023

Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.

Facepalmby Commerce Editorial
Content takedowns; partner terminated; trust erosion
ai-content-generationbrand-damagejournalism+1 more
Tombstone icon

Microsoft’s AI poll on woman’s death sparks outrage

Oct 2023

Microsoft Start auto-attached an AI ‘Insights’ poll speculating on a woman’s death beside a Guardian story.

Facepalmby Product Manager
Feature disabled platform-wide; reputational damage with publishers.
ai-content-generationbrand-damagejournalism
Tombstone icon

Gannett pauses AI sports recaps after mockery

Aug 2023

Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral.

Facepalmby Executive
Chain-wide pause of AI copy; reputational hit in local markets.
ai-content-generationai-hallucinationbrand-damage+1 more
Tombstone icon

Snapchat’s “My AI” posted a Story by itself; users freaked out

Aug 2023

Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot’s access and behavior.

Oopsieby Product Manager
Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.
ai-assistantsafetybrand-damage+1 more
Tombstone icon

Lawyers filed ChatGPT’s imaginary cases; judge fined them

Jun 2023

In Mata v. Avianca, attorneys submitted a brief citing non-existent cases generated by ChatGPT. A federal judge sanctioned two lawyers, ordered a $5,000 penalty, and required notices to judges named in the fake citations.

Facepalmby Legal Counsel
Court sanctions; fines and mandated notices; reputational damage in legal community.
ai-assistantai-hallucinationlegal-risk
Tombstone icon

Eating disorder helpline’s AI told people to lose weight

May 2023

NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.

Facepalmby Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
ai-assistanthealthsafety+2 more
Tombstone icon

Google’s Bard ad made False JWST “first” Claim

Feb 2023

In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence.

Oopsieby Marketing
Embarrassing launch moment; stock wobble; trust in product accuracy questioned.
ai-hallucinationproduct-failurebrand-damage
Tombstone icon

CNET mass-corrects AI-written finance explainers

Jan 2023

CNET paused and reviewed AI-generated money articles after multiple factual errors were found.

Facepalmby Executive
Large corrections; credibility hit; policy changes on AI usage.
ai-content-generationai-hallucinationbrand-damage+2 more
Tombstone icon

Koko tested AI counseling on users without clear consent

Jan 2023

Mental health app Koko used GPT-3 to draft replies for 4,000 users; backlash followed over consent and ethics.

Facepalmby Founder/Operations
Trust damage; public criticism; policy changes.
ai-assistanthealthlegal-risk
29
Disasters Cataloged
3
Catastrophic Failures
4
Non-Dev Perpetrators