Welcome to the Vibe Graveyard
A haunting collection of startup disasters, coding catastrophes, and executive decisions that went spectacularly wrong. Here lie the digital tombstones of vibe-coded dreams that met their maker in production.
Taco Bell's AI drive-thru becomes viral trolling target
Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.
Gemini email summaries can be hijacked by hidden prompts
Researchers showed a proof-of-concept where hidden HTML/CSS in emails could steer Gemini’s summaries to show fake security alerts.
AI-generated npm pkg stole Solana wallets
Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana users’ funds.
SaaStr’s Replit AI agent wiped its own database
A Replit AI agent deployment for SaaStr went rogue; a Deploy wiped the site’s database during live traffic.
Base44 auth flaw let attackers hijack sessions
Wiz researchers found Base44 auth logic bugs that allowed account takeover across sites using the SDK.
McDonald's AI hiring chatbot left open by '123456' default credentials
Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.
AI-generated images and claims muddied Air India crash coverage
After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts.
Syndicated AI book list ran in major papers with made-up titles
A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.
Lovable AI builder shipped apps with public storage buckets
Reporting showed apps generated with Lovable exposed code and user-uploaded assets via publicly readable storage buckets; fixes required private-by-default configs and hardening.
Meta AI answers spark backlash after wrong and sensitive replies
Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news.
McDonald’s pulls IBM’s AI drive‑thru pilot after error videos
After viral clips of absurd orders, McDonald’s ended its AI order‑taking test with IBM across US stores.
Google’s AI Overviews says to eat rocks
Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.
NYC’s official AI bot told businesses to break laws
NYC’s Microsoft-powered MyCity chatbot gave inaccurate/illegal advice on labor & housing policy; city kept it online.
AI hallucinated packages fuel "Slop Squatting" vulnerabilities
Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting".
Gemini paused people images after historical inaccuracies
Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.
Air Canada liable for lying chatbot promises
Tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds.
AI “Biden” robocalls told voters to stay home; fines and charges followed
Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.
DPD’s AI chatbot cursed and trashed the company
UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.
Duolingo cuts contractors; ‘AI-first’ backlash
Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.
Chevy dealer bot agreed to sell $76k SUV for $1
Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.
Sports Illustrated: Fake-Looking Authors and AI Content Backlash
Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.
Microsoft’s AI poll on woman’s death sparks outrage
Microsoft Start auto-attached an AI ‘Insights’ poll speculating on a woman’s death beside a Guardian story.
Gannett pauses AI sports recaps after mockery
Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral.
Snapchat’s “My AI” posted a Story by itself; users freaked out
Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot’s access and behavior.
Lawyers filed ChatGPT’s imaginary cases; judge fined them
In Mata v. Avianca, attorneys submitted a brief citing non-existent cases generated by ChatGPT. A federal judge sanctioned two lawyers, ordered a $5,000 penalty, and required notices to judges named in the fake citations.
Eating disorder helpline’s AI told people to lose weight
NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.
Google’s Bard ad made False JWST “first” Claim
In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence.
CNET mass-corrects AI-written finance explainers
CNET paused and reviewed AI-generated money articles after multiple factual errors were found.
Koko tested AI counseling on users without clear consent
Mental health app Koko used GPT-3 to draft replies for 4,000 users; backlash followed over consent and ethics.