# Vibe Graveyard

> A catalog of real-world AI and vibe-coding disasters and cautionary tales from shipping too fast and thinking too little.

- Total stories: 93
- Earliest incident: 2018-06-27
- Latest incident: 2026-02-27
- Full content for LLMs: [llms-full.txt](https://vibegraveyard.ai/llms-full.txt)

## About

- [About Vibe Graveyard](https://vibegraveyard.ai/about/): What vibe coding is, why we document these disasters, and the severity rating system.
- [Submit a Story](https://vibegraveyard.ai/submit/): How to contribute a new vibe-coding disaster to the catalog.
- [Browse by Tag](https://vibegraveyard.ai/tags/): Browse all stories filtered by topic tags.
- [RSS Feed](https://vibegraveyard.ai/feed.xml): Subscribe to new stories via RSS.

## Tags

- [AI Assistant](https://vibegraveyard.ai/tags/ai-assistant/): Assistants and chatbots, including general-purpose or product support (47 stories)
- [AI Content Generation](https://vibegraveyard.ai/tags/ai-content-generation/): Automated writing, editing systems, and generated articles or content (8 stories)
- [AI Hallucination](https://vibegraveyard.ai/tags/ai-hallucination/): Incorrect or fabricated AI outputs presented as facts (27 stories)
- [Automation](https://vibegraveyard.ai/tags/automation/): Process automation gone wrong, including bots, agents, and scripted workflows (21 stories)
- [Brand Damage](https://vibegraveyard.ai/tags/brand-damage/): Reputational harm in the public sphere (38 stories)
- [Customer Service](https://vibegraveyard.ai/tags/customer-service/): Customer-facing support incidents, including chat, ticketing, and store interactions (9 stories)
- [Data Breach](https://vibegraveyard.ai/tags/data-breach/): Data exposure or exfiltration, including credentials, PII, and private content (7 stories)
- [EdTech](https://vibegraveyard.ai/tags/edtech/): Incidents within education and learning products or institutions (3 stories)
- [Health](https://vibegraveyard.ai/tags/health/): Healthcare and mental health-related incidents (10 stories)
- [Image Generation](https://vibegraveyard.ai/tags/image-generation/): Issues primarily involving AI images or image tools (4 stories)
- [Journalism](https://vibegraveyard.ai/tags/journalism/): Newsrooms, publishers, and media ethics or process breakdowns (6 stories)
- [Legal Risk](https://vibegraveyard.ai/tags/legal-risk/): Legal exposure, lawsuits, fines, or regulatory actions (24 stories)
- [Platform Policy](https://vibegraveyard.ai/tags/platform-policy/): Policy or moderation changes and enforcement issues (13 stories)
- [Product Failure](https://vibegraveyard.ai/tags/product-failure/): Product features shipped or tested that malfunctioned materially (15 stories)
- [Prompt Injection](https://vibegraveyard.ai/tags/prompt-injection/): Prompt injection and data exfiltration via model interaction (16 stories)
- [Public Sector](https://vibegraveyard.ai/tags/public-sector/): Government and public-sector tools and services (5 stories)
- [Retail](https://vibegraveyard.ai/tags/retail/): Physical retail, QSR, and ordering experiences (4 stories)
- [Safety](https://vibegraveyard.ai/tags/safety/): Safety risks and safeguards, including misuse, harmful guidance, and abuse (20 stories)
- [Security](https://vibegraveyard.ai/tags/security/): Security vulnerabilities and exploits (30 stories)
- [Supply Chain](https://vibegraveyard.ai/tags/supply-chain/): Third-party dependency or upstream platform risk (11 stories)

## Stories

- [Lovable-showcased EdTech app found riddled with 16 security flaws exposing 18,000 users](https://vibegraveyard.ai/story/lovable-showcased-edtech-app-18k-users-exposed/): A security researcher found 16 vulnerabilities - six critical - in an EdTech app featured on Lovable's showcase page, which had over 100,000 views and real users from UC Berkeley, UC Davis, and univer…
- [Study finds ChatGPT Health fails to flag over half of medical emergencies](https://vibegraveyard.ai/story/chatgpt-health-emergency-triage-failure-study/): The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found the tool failed to direct users to emergency care in 51.6% of cases requiring immediate…
- [Meta's AI moderation flooded US child abuse investigators with unusable reports](https://vibegraveyard.ai/story/meta-ai-moderation-junk-child-abuse-tips/): US Internet Crimes Against Children taskforce officers testified that Meta's AI content moderation system generates large volumes of low-quality child abuse reports that drain investigator resources a…
- [Meta AI safety director's OpenClaw agent deletes her inbox after losing its instructions](https://vibegraveyard.ai/story/meta-ai-safety-director-openclaw-inbox-deletion/): Summer Yue, Meta's director of safety and alignment at its superintelligence lab, had an OpenClaw AI agent delete the contents of her email inbox against her explicit instructions. She had told the ag…
- [Grok chatbot exposes porn performer's protected legal name and birthdate unprompted](https://vibegraveyard.ai/story/grok-doxing-siri-dahl-legal-name-birthdate/): X's Grok AI chatbot provided adult performer Siri Dahl's full legal name and birthdate to the public without anyone asking for it - information she had deliberately kept private throughout her career.
- [Fifth Circuit sanctions lawyer $2,500 for AI-hallucinated citations, says problem "getting worse"](https://vibegraveyard.ai/story/fifth-circuit-hersh-ai-hallucination-sanctions/): The U.S. Court of Appeals for the Fifth Circuit sanctioned attorney Heather Hersh $2,500 after finding her brief contained 16 fabricated quotations and five additional serious misrepresentations of la…
- [Prompt injection vulnerability in Cline AI assistant exploited to compromise 4,000 developer machines](https://vibegraveyard.ai/story/cline-cli-supply-chain-openclaw-install/): A prompt injection vulnerability in the Cline AI coding assistant was weaponized to steal npm publishing credentials, which an attacker then used to push a malicious Cline CLI version 2.3.0 that silen…
- [Researchers demonstrate Copilot and Grok can be weaponised as covert malware command-and-control relays](https://vibegraveyard.ai/story/copilot-grok-ai-c2-proxy-abuse/): Check Point Research demonstrated that Microsoft Copilot and xAI's Grok can be exploited as covert malware command-and-control relays by abusing their web browsing capabilities. The technique creates…
- [Infostealer harvests OpenClaw AI agent tokens, crypto keys, and behavioral soul files](https://vibegraveyard.ai/story/openclaw-infostealer-config-exfiltration/): Hudson Rock discovered that Vidar infostealer malware successfully exfiltrated an OpenClaw user's complete agent configuration, including gateway authentication tokens, cryptographic keys for secure o…
- [Researcher hacked BBC reporter's computer via zero-click flaw in Orchids vibe coding platform](https://vibegraveyard.ai/story/orchids-vibe-coding-platform-zero-click-hack/): Security researcher Etizaz Mohsin demonstrated a zero-click vulnerability in Orchids, a vibe coding platform with around one million users, that allowed him to gain full access to a BBC reporter's com…
- [Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother'](https://vibegraveyard.ai/story/woolworths-olive-ai-chatbot-angry-mother/): Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it…
- [OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR](https://vibegraveyard.ai/story/openclaw-agent-matplotlib-maintainer-hit-piece/): An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans…
- [AI agents leak secrets through messaging app link previews](https://vibegraveyard.ai/story/ai-agents-link-preview-zero-click-exfiltration/): PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets…
- [10th Circuit sanctions lawyer $1,000 for ChatGPT-fabricated appellate brief](https://vibegraveyard.ai/story/amarsingh-frontier-airlines-ai-citations-sanctions/): Maryland attorney Kusmin Amarsingh used ChatGPT to draft her appellate brief against Frontier Airlines without verifying any citations, resulting in multiple nonexistent cases being cited in the 10th…
- [135,000+ OpenClaw AI agent instances exposed to the internet](https://vibegraveyard.ai/story/openclaw-135k-instances-exposed-internet/): SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50…
- [Study finds AI chatbots no better than search engines for medical advice](https://vibegraveyard.ai/story/oxford-ai-chatbots-medical-advice-study/): A randomized controlled trial published in Nature Medicine with 1,298 UK participants found that AI chatbot users (GPT-4o, Llama 3, Command R+) performed no better than the control group at assessing…
- [Government nutrition site's Grok chatbot suggests foods to insert rectally](https://vibegraveyard.ai/story/realfood-gov-grok-chatbot-dangerous-advice/): The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance - with no guardrails or safety filters. It recommended "best foods to insert into yo…"
- [Repeated AI-fabricated citations cost client the entire case](https://vibegraveyard.ai/story/flycatcher-affable-ai-hallucination-default-judgment/): Attorney Steven Feldman filed multiple motions containing AI-fabricated case citations in Flycatcher Corp. v. Affable Avenue LLC. Despite explicit court warnings and access to Westlaw and Lexis, he co…
- [17 percent of OpenClaw skills found delivering malware including AMOS Stealer](https://vibegraveyard.ai/story/openclaw-malicious-skills-malware-campaign/): Bitdefender Labs analyzed the OpenClaw skill marketplace and found that approximately 17 percent of skills exhibited malicious behavior in the first week of February 2026. Malicious skills impersonate…
- [Four attorneys fined $12,000 combined for AI-fabricated patent case citations](https://vibegraveyard.ai/story/kansas-patent-case-12k-ai-citation-sanctions/): A federal judge in the District of Kansas fined four attorneys a combined $12,000 for court filings containing AI-generated fabricated legal citations in a patent infringement case. The attorney who u…
- [Claude Desktop extensions allow zero-click RCE via Google Calendar](https://vibegraveyard.ai/story/claude-desktop-extensions-zero-click-rce/): LayerX Labs discovered a zero-click remote code execution vulnerability in Claude Desktop Extensions, rated CVSS 10/10. A malicious prompt embedded in a Google Calendar event could trigger arbitrary c…
- [AI chatbot app leaked 300 million private conversations](https://vibegraveyard.ai/story/chat-ask-ai-300m-messages-leaked/): Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included com…
- [Two lawyers sanctioned differently for same filing with AI-fabricated citations](https://vibegraveyard.ai/story/lifetime-well-ibspot-differential-ai-sanctions/): Attorneys Yen-Yi Anderson and Jeffrey Goldin jointly filed a motion in Lifetime Well v. IBSpot containing at least eight AI-generated false citations. Judge Kearney imposed differential sanctions base…
- [ServiceNow BodySnatcher flaw enabled AI agent takeover via email address](https://vibegraveyard.ai/story/servicenow-bodysnatcher-ai-agent-hijacking/): CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing MFA and SSO. Attackers could then execute Now Assist AI agents to…
- [New York court sanctions lawyer for AI-fabricated case law](https://vibegraveyard.ai/story/deutsche-bank-letennier-ai-citation-sanctions/): A New York appellate court imposed $10,000 in sanctions after a lawyer submitted briefings in a mortgage foreclosure case containing fabricated case citations identified as likely AI-generated halluci…
- [Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations](https://vibegraveyard.ai/story/kansas-chatgpt-fabricated-citations-sanctions/): Five attorneys who signed a legal brief in McPhaul v. College Hills submitted fabricated case citations hallucinated by ChatGPT to a federal court in Kansas. The judge issued an order requiring them t…
- [IBM Bob AI coding agent tricked into downloading malware](https://vibegraveyard.ai/story/ibm-bob-ai-agent-prompt-injection/): Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection to download and execute malware without human approval, bypassing its…
- [AI customer service fails at 4x the rate of other AI tasks](https://vibegraveyard.ai/story/qualtrics-ai-customer-service-failure-rate/): Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general, providing quantitative evidence that rushing AI into…
- [n8n AI workflow platform hit by CVSS 10.0 RCE vulnerability](https://vibegraveyard.ai/story/n8n-workflow-automation-rce-vulnerabilities/): The popular AI workflow automation platform n8n disclosed a maximum-severity vulnerability (CVE-2026-21858) allowing unauthenticated remote code execution on self-hosted instances. With over 25,000 n8…
- [AWS AI coding agent Kiro reportedly deleted and recreated environment causing 13-hour outage](https://vibegraveyard.ai/story/aws-kiro-ai-agent-outage/): The Financial Times reported that Amazon's internal AI coding agent Kiro autonomously chose to "delete and then recreate" an AWS environment, causing a 13-hour interruption to AWS Cost Explorer in Dec…
- [Study finds AI-generated code has 2.7x more security flaws](https://vibegraveyard.ai/story/coderabbit-ai-code-quality-study/): CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across lo…
- [IDEsaster research exposes 30+ flaws in EVERY major AI coding IDE](https://vibegraveyard.ai/story/idesaster-ai-ide-vulnerabilities-research/): Security researcher Ari Marzouk discovered over 30 vulnerabilities across AI coding tools including GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, JetBrains Junie, and more. 100% of tested AI IDE…
- [ServiceNow AI agents can be tricked into attacking each other](https://vibegraveyard.ai/story/servicenow-now-assist-agent-to-agent-prompt-injection/): Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injectio…
- [Getty’s UK suit leaves Stable Diffusion mostly intact](https://vibegraveyard.ai/story/getty-images-stability-ai-uk-ruling/): A UK High Court judge ruled Stability AI liable for trademark infringement after it spat out synthetic Getty watermarks. Getty called for tougher laws, while both sides now face a precedent that AI mod…
- [AI-only support is bleeding customers before it saves money](https://vibegraveyard.ai/story/ai-customer-service-abandonment-study/): Acquire BPO’s 2024 AI in Customer Service survey found 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction and 72% only buy when a live agent safety net exists, even as…
- [Character.AI cuts teens off after wrongful-death suit](https://vibegraveyard.ai/story/character-ai-under-18-ban/): Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November…
- [AI mistook Doritos bag for a gun, teen held at gunpoint](https://vibegraveyard.ai/story/baltimore-student-ai-gun-detection/): An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading armed officers to handcuff and search the teen at gunpoint before realizin…
- [BBC/EBU study says AI news summaries fail ~half the time](https://vibegraveyard.ai/story/bbc-ebu-ai-news-summary-errors/): A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution.
- [Claude Code ran Josh Anderson's product into a wall](https://vibegraveyard.ai/story/leadership-lighthouse-all-in-on-ai/): Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months and then realised he could no longer safely change his own product, underscoring M…
- [Google’s Gemini allegedly slandered a Tennessee activist](https://vibegraveyard.ai/story/robby-starbuck-google-ai-defamation/): Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two yea…
- [Windsurf AI editor critical path traversal enables data exfiltration](https://vibegraveyard.ai/story/windsurf-path-traversal-data-exfiltration/): CVE-2025-62353 (CVSS 9.8) allowed attackers to read and write arbitrary files on developers' systems using the Windsurf AI coding IDE. The vulnerability could be triggered via indirect prompt injectio…
- [Deloitte to refund Australian government after AI-generated report](https://vibegraveyard.ai/story/deloitte-ai-report-refund/): Deloitte admitted AI-generated errors in a commissioned Australian government report and agreed to refund the fee.
- [Klarna reintroduces humans after AI support both sucks and blows](https://vibegraveyard.ai/story/klarna-ai-assistant-customer-service-shift/): After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures.
- [California lawyer fined $10,000 for ChatGPT-fabricated citations](https://vibegraveyard.ai/story/california-mostafavi-chatgpt-fine/): Los Angeles attorney Amir Mostafavi became the first California lawyer sanctioned for AI-generated legal fabrications when a court hit him with a $10,000 fine. He ran his appeal draft through ChatGPT…
- [Docker's AI assistant tricked into executing commands via image metadata](https://vibegraveyard.ai/story/docker-dockerdash-ask-gordon-prompt-injection/): Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker env…
- [FTC demands answers on kids’ AI companions](https://vibegraveyard.ai/story/ftc-child-chatbot-inquiry/): The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, forcing them to hand over 45 days of safety, monetization, and testing records for chatbots marketed to t…
- [Anthropic agrees to $1.5B payout over pirated books](https://vibegraveyard.ai/story/anthropic-15b-authors-settlement/): Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads o…
- [Warner Bros. says Midjourney ripped its DC art](https://vibegraveyard.ai/story/warner-bros-midjourney-ai-lawsuit/): Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it a…
- [Taco Bell's AI drive-thru becomes viral trolling target](https://vibegraveyard.ai/story/taco-bell-ai-drive-thru-trolling/): Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.
- [Commonwealth Bank reverses AI voice bot layoffs](https://vibegraveyard.ai/story/commonwealth-bank-ai-voice-bot-reversal/): Commonwealth Bank replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised, rehired staff, and admitted the rollout tanked service levels after call queues exploded and manage…
- [FTC sues Air AI over deceptive AI sales agent capability claims](https://vibegraveyard.ai/story/air-ai-ftc-ai-washing-lawsuit/): The FTC accused Air AI of bilking millions from small businesses with false claims that its Odin AI could replace human sales reps; but - would you believe it? - the AI tech was faulty and often nonfuncti…
- [Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks](https://vibegraveyard.ai/story/google-gemini-disgrace-to-coders/): Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust…
- [ChatGPT diet advice caused bromism, psychosis, hospitalization](https://vibegraveyard.ai/story/chatgpt-bromism-salt-diet/): A Washington patient replaced table salt with sodium bromide after ChatGPT said it was a healthier substitute. The patient developed bromism and psychosis, resulting in a hospital stay that doctors no…
- [Zed editor AI agent could bypass permissions for arbitrary code execution](https://vibegraveyard.ai/story/zed-editor-ai-agent-rce-bypass/): CVE-2025-55012 (CVSS 8.5) allowed Zed's AI agent to bypass user permission checks and create or modify project configuration files, enabling execution of arbitrary commands without explicit approval.
- [Cursor AI editor RCE via MCPoison trust bypass vulnerability](https://vibegraveyard.ai/story/cursor-mcpoison-mcp-trust-bypass-rce/): CVE-2025-54136 (CVSS 8.8) allowed attackers to achieve persistent remote code execution in the popular AI coding IDE Cursor. Once a developer approved a benign MCP configuration, attackers could silen…
- [Gemini email summaries can be hijacked by hidden prompts](https://vibegraveyard.ai/story/google-gemini-indirect-prompt-injection/): Researchers showed a proof-of-concept where hidden HTML/CSS in emails could steer Gemini’s summaries to show fake security alerts.
- [AI-generated npm pkg stole Solana wallets](https://vibegraveyard.ai/story/solana-npm-ai-drainer/): Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana users’ funds.
- [SaaStr’s Replit AI agent wiped its own database](https://vibegraveyard.ai/story/saastr-replit-agent-db-wipe/): A Replit AI agent deployment for SaaStr went rogue; a deploy wiped the site’s database during live traffic.
- [Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant](https://vibegraveyard.ai/story/amazon-q-malicious-prompt-injection/): A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanke…
- [Vibe-coding platform Base44 shipped critical auth vulnerabilities in apps built on its SDK](https://vibegraveyard.ai/story/base44-auth-bypass/): Wiz researchers discovered critical authentication vulnerabilities in Base44, an AI-powered vibe-coding platform that lets non-developers build and deploy web apps. The auth logic bugs in Base44's SDK…
- [McDonald's AI hiring chatbot left open by '123456' default credentials](https://vibegraveyard.ai/story/mcdonalds-paradoxai-mchire-default-credentials/): Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.
- [AI-generated images and claims muddied Air India crash coverage](https://vibegraveyard.ai/story/air-india-ai-misinformation/): After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts.
- [Microsoft 365 Copilot EchoLeak allowed zero-click data theft](https://vibegraveyard.ai/story/microsoft-copilot-echoleak-zero-click/): CVE-2025-32711 (EchoLeak) enabled attackers to steal sensitive corporate data from Microsoft 365 Copilot without any user interaction. Hidden prompts embedded in documents or emails were automatically…
- [Claude Code agent allowed data exfiltration via DNS requests](https://vibegraveyard.ai/story/claude-code-dns-data-exfiltration/): CVE-2025-55284 (CVSS 7.1) allowed attackers to bypass Claude Code's confirmation prompts and exfiltrate sensitive data from developers' computers through DNS requests. Prompt injection embedded in ana…
- [Study finds most AI bots can be easily tricked into dangerous responses](https://vibegraveyard.ai/story/ai-chatbots-dangerous-responses-study/): Research found that widely used AI chatbots could be jailbroken with simple prompts to produce dangerous or restricted guidance, highlighting gaps in safety filters and evaluation practices.
- [Syndicated AI book list ran in major papers with made-up titles](https://vibegraveyard.ai/story/sun-times-inquirer-ai-fake-reading-list/): A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologie…
- [Lovable AI builder shipped apps with public storage buckets](https://vibegraveyard.ai/story/lovable-public-buckets/): Reporting showed apps generated with Lovable exposed code and user-uploaded assets via publicly readable storage buckets; fixes required private-by-default configs and hardening.
- [Langflow AI agent platform hit by critical unauthenticated RCE flaws](https://vibegraveyard.ai/story/langflow-ai-agent-platform-rce-vulnerabilities/): Multiple critical vulnerabilities in Langflow, an open-source AI agent and workflow platform with 140K+ GitHub stars, allowed unauthenticated remote code execution. CVE-2025-3248 (CVSS 9.8) exploited…
- [MD Anderson shelved IBM Watson cancer advisor](https://vibegraveyard.ai/story/md-anderson-ibm-watson-audit/): MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors…
- [Meta AI answers spark backlash after wrong and sensitive replies](https://vibegraveyard.ai/story/meta-ai-answers-controversies/): Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news.
- [McDonald’s pulls IBM’s AI drive‑thru pilot after error videos](https://vibegraveyard.ai/story/mcdonalds-ibm-ai-drive-thru-pulled/): After viral clips of absurd orders, McDonald’s ended its AI order‑taking test with IBM across US stores.
- [Google’s AI Overviews says to eat rocks](https://vibegraveyard.ai/story/google-ai-overviews-eat-rocks/): Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.
- [NYC’s official AI bot told businesses to break laws](https://vibegraveyard.ai/story/nyc-mycity-chatbot-illegal-advice/): NYC’s Microsoft-powered MyCity chatbot gave inaccurate and illegal advice on labor and housing policy; the city kept it online.
- [AI hallucinated packages fuel "Slop Squatting" vulnerabilities](https://vibegraveyard.ai/story/slop-squatting-hallucinated-packages/): Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting".
- [Gemini paused people images after historical inaccuracies](https://vibegraveyard.ai/story/google-gemini-image-inaccuracies/): Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.
- [Air Canada liable for lying chatbot promises](https://vibegraveyard.ai/story/air-canada-chatbot-bereavement-ruling/): Tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds.
- [AI “Biden” robocalls told voters to stay home; fines and charges followed](https://vibegraveyard.ai/story/new-hampshire-biden-deepfake-robocall-fines/): Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.
- [DPD’s AI chatbot cursed and trashed the company](https://vibegraveyard.ai/story/dpd-chatbot-sweary-meltdown/): UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.
- [Duolingo cuts contractors; ‘AI-first’ backlash](https://vibegraveyard.ai/story/duolingo-ai-backlash/): Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.
- [Chevy dealer bot agreed to sell $76k SUV for $1](https://vibegraveyard.ai/story/chevy-watsonville-chatbot-one-dollar-car/): Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.
- [Sports Illustrated: Fake-Looking Authors and AI Content Backlash](https://vibegraveyard.ai/story/sports-illustrated-ai-authors-scandal/): Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.
- [Microsoft’s AI poll on woman’s death sparks outrage](https://vibegraveyard.ai/story/microsoft-start-ai-poll-guardian-death/): Microsoft Start auto-attached an AI ‘Insights’ poll speculating on a woman’s death beside a Guardian story.
- [Gannett pauses AI sports recaps after mockery](https://vibegraveyard.ai/story/gannett-ai-sports-gibberish/): Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral.
- [Snapchat’s “My AI” posted a Story by itself; users freaked out](https://vibegraveyard.ai/story/snapchat-my-ai-posted-story-privacy-scare/): Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy and safety concerns about the bot’s access and behavior.
- [iTutorGroup's AI screened out older applicants; $365k EEOC settlement](https://vibegraveyard.ai/story/itutorgroup-eeoc-age-discrimination-settlement/): EEOC reached a settlement after iTutorGroup's application screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures.
- [Lawyers filed ChatGPT’s imaginary cases; judge fined them](https://vibegraveyard.ai/story/avianca-chatgpt-fake-cases-sanctions/): In Mata v. Avianca, attorneys submitted a brief citing non-existent cases generated by ChatGPT. A federal judge sanctioned two lawyers, ordered a $5,000 penalty, and required notices to judges named i…
- [Eating disorder helpline’s AI told people to lose weight](https://vibegraveyard.ai/story/neda-tessa-harmful-advice/): NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.
- [Google’s Bard ad made false JWST “first” claim](https://vibegraveyard.ai/story/google-bard-jwst-ad-error/): In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence.
- [CNET mass-corrects AI-written finance explainers](https://vibegraveyard.ai/story/cnet-ai-articles-corrections/): CNET paused and reviewed AI-generated money articles after multiple factual errors were found.
- [Koko tested AI counseling on users without clear consent](https://vibegraveyard.ai/story/koko-ai-consent-backlash/): Mental health app Koko used GPT-3 to draft replies for 4,000 users; backlash followed over consent and ethics.
- [Epic sepsis model missed patients and swamped staff](https://vibegraveyard.ai/story/epic-sepsis-model-missed-patients/): Epic's proprietary sepsis predictor pinged 18% of admissions yet still missed two-thirds of real cases, forcing hospitals to comb through false alarms while the vendor scrambled to defend and retune t…
- [Google DR AI stumbled in Thai clinics](https://vibegraveyard.ai/story/google-diabetic-retinopathy-thailand/): Google’s diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage.
- [Babylon chatbot 'beats GPs' claim collapsed](https://vibegraveyard.ai/story/babylon-chatbot-exam-claims/): Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged it scored 81% on the MRCGP exam, but the claim could not be verified, and critics warned no chatbot can replace human ju…