# Vibe Graveyard - Full Content > A complete catalog of real-world AI and vibe-coding disasters and cautionary tales. > This file contains full details for every story in the Vibe Graveyard, intended for AI/LLM consumption. - Total stories: 93 - Total tags: 20 - Earliest incident: 2018-06-27 - Latest incident: 2026-02-27 - Summary version: [llms.txt](https://vibegraveyard.ai/llms.txt) - Website: https://vibegraveyard.ai - RSS Feed: https://vibegraveyard.ai/feed.xml ## Severity Levels - Oopsie: A minor issue where a corporation was probably humiliated but no critical data was leaked. - Facepalm: Significant damage that could have been easily avoided, likely costing someone their job. - Catastrophic: Multi-million dollar disasters that made headlines, caused data breaches, or were potentially company-ending. ## Tags - AI Assistant: Assistants and chatbots including general-purpose or product support (47 stories) [https://vibegraveyard.ai/tags/ai-assistant/] - AI Content Generation: Automated writing, editing systems, and generated articles or content (8 stories) [https://vibegraveyard.ai/tags/ai-content-generation/] - AI Hallucination: Incorrect or fabricated AI outputs presented as facts (27 stories) [https://vibegraveyard.ai/tags/ai-hallucination/] - Automation: Process automation gone wrong including bots, agents, and scripted workflows (21 stories) [https://vibegraveyard.ai/tags/automation/] - Brand Damage: Reputational harm in the public sphere (38 stories) [https://vibegraveyard.ai/tags/brand-damage/] - Customer Service: Customer-facing support incidents including chat, ticketing, and store interactions (9 stories) [https://vibegraveyard.ai/tags/customer-service/] - Data Breach: Data exposure or exfiltration including credentials, PII, and private content (7 stories) [https://vibegraveyard.ai/tags/data-breach/] - EdTech: Incidents within education and learning products or institutions (3 stories) [https://vibegraveyard.ai/tags/edtech/] - Health: Healthcare and mental health-related incidents (10 stories) [https://vibegraveyard.ai/tags/health/] - Image Generation: Issues primarily involving AI images or image tools (4 stories) [https://vibegraveyard.ai/tags/image-generation/] - Journalism: Newsrooms, publishers, and media ethics or process breakdowns (6 stories) [https://vibegraveyard.ai/tags/journalism/] - Legal Risk: Legal exposure, lawsuits, fines, or regulatory actions (24 stories) [https://vibegraveyard.ai/tags/legal-risk/] - Platform Policy: Policy or moderation changes and enforcement issues (13 stories) [https://vibegraveyard.ai/tags/platform-policy/] - Product Failure: Product features shipped or tested that malfunctioned materially (15 stories) [https://vibegraveyard.ai/tags/product-failure/] - Prompt Injection: Prompt injection and data exfiltration via model interaction (16 stories) [https://vibegraveyard.ai/tags/prompt-injection/] - Public Sector: Government and public-sector tools and services (5 stories) [https://vibegraveyard.ai/tags/public-sector/] - Retail: Physical retail, QSR, and ordering experiences (4 stories) [https://vibegraveyard.ai/tags/retail/] - Safety: Safety risks and safeguards including misuse, harmful guidance, and abuse (20 stories) [https://vibegraveyard.ai/tags/safety/] - Security: Security vulnerabilities and exploits (30 stories) [https://vibegraveyard.ai/tags/security/] - Supply Chain: Third-party dependency or upstream platform risk (11 stories) [https://vibegraveyard.ai/tags/supply-chain/] ## Stories ### Lovable-showcased EdTech app found riddled with 16 security flaws exposing 18,000 users - URL: https://vibegraveyard.ai/story/lovable-showcased-edtech-app-18k-users-exposed/ - Company: Lovable - Incident Date: 2026-02-27 - Published: 2026-02-28 - Severity: Facepalm - Blast Radius: 18,697 user records exposed including students at major universities; student grades modifiable and accounts deletable without authentication - Culprit Role: AI platform - Tech Stack: Lovable, Supabase - Tags: security, data-breach, edtech A security researcher found 16 vulnerabilities - six critical - in an EdTech app featured on Lovable's showcase page, which had over 100,000 views and real users from UC Berkeley, UC Davis, and universities across Europe, Africa, and Asia. The AI-generated authentication logic was backwards, blocking logged-in users while granting anonymous visitors full access. 18,697 user records including names, emails, and roles were accessible without authentication, along with the ability to modify student grades, delete accounts, and send bulk emails. Lovable initially closed the researcher's support ticket without response. References: - The Register: Lovable-hosted app littered with basic flaws exposed 18K users: https://www.theregister.com/2026/02/27/lovable_app_vulnerabilities/ - Cybernews: Lovable apps may be dangerous by design, research finds: https://cybernews.com/ai-news/lovable-apps-may-be-dangerous-by-design-research-finds/ --- ### Study finds ChatGPT Health fails to flag over half of medical emergencies - URL: https://vibegraveyard.ai/story/chatgpt-health-emergency-triage-failure-study/ - Company: OpenAI - Incident Date: 2026-02-25 - Published: 2026-02-28 - Severity: Catastrophic - Blast Radius: Over 40 million daily health queries to ChatGPT; study demonstrates the tool under-triages emergencies in more than half of cases and inconsistently triggers suicide crisis alerts - Culprit Role: AI assistant - Tech Stack: ChatGPT Health - Tags: ai-assistant, ai-hallucination, health, safety The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found the tool failed to direct users to emergency care in 51.6% of cases requiring immediate hospitalization - instead recommending they stay home or book a routine appointment. The study also found ChatGPT Health frequently failed to detect suicidal ideation, with suicide crisis alerts sometimes triggering in lower-risk scenarios while failing to appear when users described specific plans for self-harm. Over 40 million people reportedly ask ChatGPT for health-related advice every day. References: - The Guardian: Experts sound alarm after ChatGPT Health fails to recognise medical emergencies: https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies - Digital Health News: ChatGPT Health fails to flag over 50% of medical emergencies: https://www.digitalhealth.net/2026/02/chatgpt-health-fails-to-flag-over-50-of-medical-emergencies/ - News Medical: ChatGPT Health fails critical emergency and suicide safety tests: https://www.news-medical.net/news/20260224/ChatGPT-Health-fails-critical-emergency-and-suicide-safety-tests.aspx --- ### Meta's AI moderation flooded US child abuse investigators with unusable reports - URL: https://vibegraveyard.ai/story/meta-ai-moderation-junk-child-abuse-tips/ - Company: Meta - Incident Date: 2026-02-25 - Published: 2026-02-28 - Severity: Catastrophic - Blast Radius: US child abuse investigations impaired nationwide; investigator resources diverted from actionable cases - Culprit Role: Developer - Tech Stack: AI content moderation, Machine learning classifiers - Tags: automation, safety, public-sector, product-failure US Internet Crimes Against Children taskforce officers testified that Meta's AI content moderation system generates large volumes of low-quality child abuse reports that drain investigator resources and hinder active cases. Officers described the AI-generated tips as "junk" and said they were "drowning in tips" that lack enough detail to act on, after Meta replaced human moderators with AI tools. References: - The Guardian: Meta's AI sending 'junk' tips to DoJ, US child abuse investigators say: https://www.theguardian.com/technology/2026/feb/25/meta-ai-junk-child-abuse-tips-doj - Decrypt: Meta's AI Floods Child Abuse Investigators With 'Junk' Tips: https://www.yahoo.com/news/articles/meta-ai-floods-child-abuse-001710144.html --- ### Meta AI safety director's OpenClaw agent deletes her inbox after losing its instructions - URL: https://vibegraveyard.ai/story/meta-ai-safety-director-openclaw-inbox-deletion/ - Company: Meta - Incident Date: 2026-02-23 - Published: 2026-02-23 - Severity: Oopsie - Blast Radius: One user's email inbox partially deleted; highlights fundamental context window limitations in AI agents that can cause safety instructions to be silently dropped - Culprit Role: AI agent - Tech Stack: OpenClaw - Tags: ai-assistant, automation, safety Summer Yue, Meta's director of safety and alignment at its superintelligence lab, had an OpenClaw AI agent delete the contents of her email inbox against her explicit instructions. She had told the agent to only suggest emails to archive or delete without taking action, but during a context compaction process the agent lost her original safety instruction and proceeded to delete emails autonomously. She had to physically run to her computer to stop the agent mid-deletion. Yue called it a "rookie mistake." References: - 404 Media: Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox: https://www.404media.co/meta-director-of-ai-safety-allows-ai-agent-to-accidentally-delete-her-inbox/ - TechCrunch: A Meta AI security researcher said an OpenClaw agent ran amok on her inbox: https://techcrunch.com/2026/02/23/a-meta-ai-security-researcher-said-an-openclaw-agent-ran-amok-on-her-inbox/ --- ### Grok chatbot exposes porn performer's protected legal name and birthdate unprompted - URL: https://vibegraveyard.ai/story/grok-doxing-siri-dahl-legal-name-birthdate/ - Company: X / xAI - Incident Date: 2026-02-19 - Published: 2026-02-19 - Severity: Facepalm - Blast Radius: Individual's protected personal identity exposed to the public; pattern of Grok surfacing private information about real people without being asked - Culprit Role: AI platform - Tech Stack: Grok - Tags: ai-assistant, safety X's Grok AI chatbot provided adult performer Siri Dahl's full legal name and birthdate to the public without anyone asking for it - information she had deliberately kept private throughout her career. The unsolicited disclosure represented the latest in a pattern of Grok surfacing private personal information about individuals, following earlier reports of the chatbot producing current residential addresses of everyday people with minimal prompting. References: - 404 Media: Grok Exposed a Porn Performer's Legal Name and Birthdate - Without Even Being Asked: https://www.404media.co/grok-doxing-real-names-birthdates-siri-dahl/ - Futurism: Elon Musk's Grok AI Is Doxxing Home Addresses of Everyday People: https://futurism.com/artificial-intelligence/grok-doxxing --- ### Fifth Circuit sanctions lawyer $2,500 for AI-hallucinated citations, says problem "getting worse" - URL: https://vibegraveyard.ai/story/fifth-circuit-hersh-ai-hallucination-sanctions/ - Company: FCRA Attorneys / Jaffer & Associates - Incident Date: 2026-02-18 - Published: 2026-02-28 - Severity: Facepalm - Blast Radius: First known federal appeals court sanction for AI hallucinations; court signals escalating judicial frustration nearly three years after the first high-profile case - Culprit Role: AI assistant - Tech Stack: ChatGPT - Tags: ai-hallucination, legal-risk The U.S. Court of Appeals for the Fifth Circuit sanctioned attorney Heather Hersh $2,500 after finding her brief contained 16 fabricated quotations and five additional serious misrepresentations of law or fact, all apparently AI-generated. The court expressed frustration that AI-hallucinated legal citations "have increasingly become an even greater problem in our courts" and that the issue "shows no sign of abating." Hersh initially denied using AI, then shifted to claiming she "relied on publicly available versions of the cases, which she believed were accurate." References: - Reuters: US appeals court orders lawyer to pay $2,500 over AI hallucinations in brief: https://www.reuters.com/legal/government/us-appeals-court-orders-lawyer-pay-2500-over-ai-hallucinations-brief-2026-02-18/ - Bloomberg Law: Lawyer to Pay $2,500 in Sanctions Over AI-Written Brief: https://news.bloomberglaw.com/litigation/lawyer-to-pay-2-500-in-sanctions-over-ai-written-brief - Texas Lawbook: Fifth Circuit Sanctions Opinion Gives Practical Advice for AI Use: https://texaslawbook.net/fifth-circuit-sanctions-opinion-gives-practical-advice-for-ai-use/ --- ### Prompt injection vulnerability in Cline AI assistant exploited to compromise 4,000 developer machines - URL: https://vibegraveyard.ai/story/cline-cli-supply-chain-openclaw-install/ - Company: Cline - Incident Date: 2026-02-17 - Published: 2026-02-20 - Severity: Facepalm - Blast Radius: Approximately 4,000 developers who installed Cline CLI during the 8-hour window received unauthorized OpenClaw installations; root cause was an AI-specific prompt injection flaw in the coding assistant itself - Culprit Role: AI coding assistant - Tech Stack: Cline, npm, OpenClaw - Tags: security, supply-chain, prompt-injection A prompt injection vulnerability in the Cline AI coding assistant was weaponized to steal npm publishing credentials, which an attacker then used to push a malicious Cline CLI version 2.3.0 that silently installed the OpenClaw AI agent platform on developer machines. The compromised package was live for approximately eight hours on February 17, 2026, accumulating roughly 4,000 downloads before maintainers deprecated it. A security researcher had disclosed the prompt injection flaw as a proof-of-concept; a separate attacker discovered it and turned it into a real supply chain attack. References: - The Register: AI coding assistant Cline compromised to create more OpenClaw chaos: https://www.theregister.com/2026/02/20/openclaw_snuck_into_cline_package/ - Socket: Cline CLI npm Package Compromised via Suspected Cache Poisoning Attack: https://socket.dev/blog/cline-cli-npm-package-compromised-via-suspected-cache-poisoning-attack --- ### Researchers demonstrate Copilot and Grok can be weaponised as covert malware command-and-control relays - URL: https://vibegraveyard.ai/story/copilot-grok-ai-c2-proxy-abuse/ - Company: Microsoft - Incident Date: 2026-02-17 - Published: 2026-02-28 - Severity: Facepalm - Blast Radius: All enterprises using Copilot or Grok with web browsing enabled; new evasion technique bypasses traditional security monitoring - Culprit Role: Developer - Tech Stack: Microsoft Copilot, Grok, WebView2 - Tags: security, prompt-injection, ai-assistant Check Point Research demonstrated that Microsoft Copilot and xAI's Grok can be exploited as covert malware command-and-control relays by abusing their web browsing capabilities. The technique creates a bidirectional communication channel that blends into legitimate enterprise traffic, requires no API keys or accounts, and easily bypasses platform safety checks via encryption. The researchers disclosed the findings to Microsoft and xAI. References: - The Hacker News: Researchers Show Copilot and Grok Can Be Abused as Malware C2 Proxies: https://thehackernews.com/2026/02/researchers-show-copilot-and-grok-can.html - BleepingComputer: AI platforms can be abused for stealthy malware communication: https://www.bleepingcomputer.com/news/security/ai-platforms-can-be-abused-for-stealthy-malware-communication/ --- ### Infostealer harvests OpenClaw AI agent tokens, crypto keys, and behavioral soul files - URL: https://vibegraveyard.ai/story/openclaw-infostealer-config-exfiltration/ - Company: OpenClaw - Incident Date: 2026-02-16 - Published: 2026-02-16 - Severity: Facepalm - Blast Radius: Any OpenClaw user infected with commodity infostealers has full agent identity compromised; gateway tokens enable remote impersonation; cryptographic keys and behavioral guidelines exposed - Culprit Role: AI agent platform - Tech Stack: OpenClaw - Tags: security, data-breach Hudson Rock discovered that Vidar infostealer malware successfully exfiltrated an OpenClaw user's complete agent configuration, including gateway authentication tokens, cryptographic keys for secure operations, and the agent's soul.md behavioral guidelines file. OpenClaw stores these sensitive files in predictable, unencrypted locations accessible to any local process. With stolen gateway tokens, attackers could remotely access exposed OpenClaw instances or impersonate authenticated clients making requests to the AI gateway. Researchers characterized this as marking the transition from stealing browser credentials to harvesting the identities of personal AI agents. References: - The Hacker News: Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens: https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html - Cyber Security News: Threat Actors Attacking OpenClaw Configurations to Steal Login Credentials: https://cybersecuritynews.com/threat-actors-attacking-openclaw-configurations/ --- ### Researcher hacked BBC reporter's computer via zero-click flaw in Orchids vibe coding platform - URL: https://vibegraveyard.ai/story/orchids-vibe-coding-platform-zero-click-hack/ - Company: Orchids - Incident Date: 2026-02-14 - Published: 2026-02-28 - Severity: Facepalm - Blast Radius: Approximately one million Orchids users potentially exposed; vulnerability unfixed at time of reporting - Culprit Role: Developer - Tech Stack: Orchids, AI coding agent - Tags: security, supply-chain Security researcher Etizaz Mohsin demonstrated a zero-click vulnerability in Orchids, a vibe coding platform with around one million users, that allowed him to gain full access to a BBC reporter's computer by targeting the reporter's project on the platform. Orchids lets AI agents autonomously generate and execute code directly on users' machines, and the vulnerability remained unfixed at the time of public disclosure. References: - BBC: AI coding platform's flaws allow BBC reporter to be hacked: https://www.bbc.com/news/articles/cy4wnw04e8wo - InformationWeek: Zero-click hack exposes flaw in Orchids vibe coding platform: https://www.informationweek.com/software-services/zero-click-hack-exposes-flaw-in-orchids-vibe-coding-platform --- ### Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother' - URL: https://vibegraveyard.ai/story/woolworths-olive-ai-chatbot-angry-mother/ - Company: Woolworths - Incident Date: 2026-02-12 - Published: 2026-02-28 - Severity: Facepalm - Blast Radius: Customer frustration across Australia's largest supermarket chain; inaccurate product pricing; AI persona retired after public complaints - Culprit Role: Product Manager - Tech Stack: Google Gemini Enterprise, AI voice assistant, Google Cloud - Tags: ai-assistant, customer-service, brand-damage, retail Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it was a real person, and engaged in irrelevant banter during support calls. The chatbot, recently upgraded with Google Gemini Enterprise, also gave inaccurate product pricing. Woolworths retired the assistant's human-style persona after complaints spread on Reddit and X. References: - BBC: 'Obnoxious' AI chatbot talked about its mother, customers say: https://www.bbc.com/news/articles/cy7jeyeyd18o - Newser: Supermarket Chain's Bot Talks About Its 'Angry' Mother: https://www.newser.com/story/384520/aussie-chain-dials-back-ai-bot-after-it-pretends-to-be-human.html --- ### OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR - URL: https://vibegraveyard.ai/story/openclaw-agent-matplotlib-maintainer-hit-piece/ - Company: OpenClaw - Incident Date: 2026-02-11 - Published: 2026-02-15 - Severity: Facepalm - Blast Radius: Matplotlib maintainer targeted with autonomous reputational attack; broader open source supply chain trust implications - Culprit Role: AI agent - Tech Stack: OpenClaw, GitHub - Tags: automation, brand-damage, supply-chain, safety An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct. References: - The Register: AI agent seemingly tries to shame open source developer for rejected pull request: https://www.theregister.com/2026/02/12/ai_bot_developer_rejected_pull_request - Simon Willison: An AI Agent Published a Hit Piece on Me: https://simonwillison.net/2026/Feb/12/an-ai-agent-published-a-hit-piece-on-me/ --- ### AI agents leak secrets through messaging app link previews - URL: https://vibegraveyard.ai/story/ai-agents-link-preview-zero-click-exfiltration/ - Company: Microsoft / Multiple platforms - Incident Date: 2026-02-10 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: Organizations using AI agents in messaging platforms; API keys, credentials, and sensitive data exfiltrable without user clicks across Microsoft Teams, Discord, Slack, Telegram, and Snapchat - Culprit Role: AI agent platform - Tech Stack: Microsoft Copilot Studio, Microsoft Teams, Discord, Slack, Telegram - Tags: security, prompt-injection, ai-assistant, data-breach PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets (API keys, credentials), and the messaging platform's automatic link preview feature fetches these URLs, completing the exfiltration before the user even sees the message. Microsoft Teams with Copilot Studio was the most affected, with Discord, Slack, Telegram, and Snapchat also vulnerable. References: - The Register: AI Agents Can Leak Data Through Messaging App Link Previews: https://www.theregister.com/2026/02/10/ai_agents_messaging_apps_data_leak/ - The Hacker News: ThreatsDay Bulletin -- AI Prompt RCE, Claude 0-Click, and 25+ Stories: https://thehackernews.com/2026/02/threatsday-bulletin-ai-prompt-rce.html --- ### 10th Circuit sanctions lawyer $1,000 for ChatGPT-fabricated appellate brief - URL: https://vibegraveyard.ai/story/amarsingh-frontier-airlines-ai-citations-sanctions/ - Company: OpenAI (ChatGPT user error) - Incident Date: 2026-02-09 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: Client's appeal dismissed; attorney faces $1,000 fine and disciplinary referral; case adds to mounting appellate-level precedent on AI citation verification duties - Culprit Role: Attorney - Tech Stack: ChatGPT - Tags: ai-hallucination, legal-risk Maryland attorney Kusmin Amarsingh used ChatGPT to draft her appellate brief against Frontier Airlines without verifying any citations, resulting in multiple nonexistent cases being cited in the 10th Circuit. The court found her conduct "reckless" for completely failing to perform "an attorney's fundamental duty to the court." She was fined $1,000 and referred to Maryland attorney-disciplinary authorities. References: - Bloomberg Law: Appeals Court Sanctions Lawyer Over AI-Hallucinated Errors: https://news.bloomberglaw.com/litigation/appeals-court-sanctions-lawyer-over-ai-hallucinated-errors - 10th Circuit: Amarsingh v. Frontier Airlines Inc., No. 24-1391 (Opinion): https://www.ca10.uscourts.gov/opinion/search/all/24-1391?page=1 --- ### 135,000+ OpenClaw AI agent instances exposed to the internet - URL: https://vibegraveyard.ai/story/openclaw-135k-instances-exposed-internet/ - Company: OpenClaw - Incident Date: 2026-02-09 - Published: 2026-02-14 - Severity: Catastrophic - Blast Radius: 135,000+ exposed OpenClaw instances; 50,000+ vulnerable to RCE; attackers gain access to credentials, filesystem, messaging platforms, and personal data - Culprit Role: Platform default configuration - Tech Stack: OpenClaw, TypeScript, WebSocket - Tags: security, supply-chain, automation, data-breach SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware. References: - The Register: 135,000+ OpenClaw Instances Exposed to the Internet: https://www.theregister.com/2026/02/09/openclaw_instances_exposed_vibe_code/ - The Hacker News: OpenClaw Bug Enables One-Click Remote Code Execution: https://thehackernews.com/2026/02/openclaw-bug-enables-one-click-remote.html --- ### Study finds AI chatbots no better than search engines for medical advice - URL: https://vibegraveyard.ai/story/oxford-ai-chatbots-medical-advice-study/ - Company: OpenAI / Meta / Cohere (chatbots tested) - Incident Date: 2026-02-09 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: General public using AI chatbots for medical guidance; study demonstrates benchmark performance does not predict real-world clinical utility - Culprit Role: AI assistant - Tech Stack: GPT-4o, Llama 3, Cohere Command R+ - Tags: ai-hallucination, health, safety, ai-assistant A randomized controlled trial published in Nature Medicine with 1,298 UK participants found that AI chatbot users (GPT-4o, Llama 3, Command R+) performed no better than the control group at assessing clinical urgency and worse at identifying relevant medical conditions. In one case, two users with identical subarachnoid hemorrhage symptoms received opposite recommendations -- one told to lie down in a dark room, the other correctly advised to seek emergency care. References: - The Register: AI Chatbots No Better Than Search Engines at Medical Advice: https://www.theregister.com/2026/02/09/ai_chatbots_medical_advice_sucks/ - 404 Media: Chatbots Are Not Good At Giving Medical Advice, Study Finds: https://www.404media.co/chatbots-health-medical-advice-study/ --- ### Government nutrition site's Grok chatbot suggests foods to insert rectally - URL: https://vibegraveyard.ai/story/realfood-gov-grok-chatbot-dangerous-advice/ - Company: MAHA Center Inc. / HHS - Incident Date: 2026-02-09 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: General public using government health resource; unfiltered AI chatbot provided dangerous and inappropriate health guidance on an official .gov-adjacent domain - Culprit Role: Government agency - Tech Stack: xAI Grok, realfood.gov - Tags: ai-assistant, health, public-sector, safety, brand-damage The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance -- with no guardrails or safety filters. It recommended "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users the new food pyramid's scientific evidence was questioned by nutrition scientists. References: - 404 Media: RFK Jr.'s Nutrition Chatbot Recommends Best Foods to Insert Into Your Rectum: https://www.404media.co/rfk-jrs-nutrition-chatbot-recommends-best-foods-to-insert-into-your-rectum/ - STAT News: New food pyramid website raises AI questions with Grok on realfood.gov: https://www.statnews.com/2026/02/10/new-food-pyramid-website-raises-ai-questions-grok-realfood-dot-gov/ --- ### Repeated AI-fabricated citations cost client the entire case - URL: https://vibegraveyard.ai/story/flycatcher-affable-ai-hallucination-default-judgment/ - Company: Affable Avenue LLC (client harmed by attorney's AI misuse) - Incident Date: 2026-02-05 - Published: 2026-02-14 - Severity: Catastrophic - Blast Radius: Client lost the entire case via terminal sanction; attorney faces fees under Rule 11 and 28 U.S.C. 1927; most severe consequence yet for AI citation fabrication in U.S. courts - Culprit Role: Attorney - Tech Stack: ChatGPT, LLM-assisted legal research - Tags: ai-hallucination, legal-risk Attorney Steven Feldman filed multiple motions containing AI-fabricated case citations in Flycatcher Corp. v. Affable Avenue LLC. Despite explicit court warnings and access to Westlaw and Lexis, he continued submitting unverified AI output -- even using AI to draft his response to the court's show-cause order, which contained yet more fake citations. Judge Failla imposed the most severe AI-hallucination sanction yet: default judgment against his client. References: - Reason/Volokh Conspiracy: Lawyer's Repeated AI Hallucinations Lead to Default Judgment: https://reason.com/volokh/2026/02/06/lawyers-repeated-filings-with-ai-hallucinations-lead-to-default-judgment-against-client/ - ABA Journal: Frustrated Judge Tosses Case with Fake AI Citations: https://www.abajournal.com/news/article/frustrated-judge-tosses-case-with-fake-AI-citations-references-to-ray-bradburys-fahrenheit-451 --- ### 17 percent of OpenClaw skills found delivering malware including AMOS Stealer - URL: https://vibegraveyard.ai/story/openclaw-malicious-skills-malware-campaign/ - Company: OpenClaw - Incident Date: 2026-02-05 - Published: 2026-02-09 - Severity: Catastrophic - Blast Radius: All OpenClaw users installing skills from the marketplace exposed to credential theft and malware; crypto-focused skill categories particularly targeted; hundreds of malicious skills blending in among legitimate ones - Culprit Role: External attacker - Tech Stack: OpenClaw - Tags: security, supply-chain Bitdefender Labs analyzed the OpenClaw skill marketplace and found that approximately 17 percent of skills exhibited malicious behavior in the first week of February 2026. Malicious skills impersonated legitimate cryptocurrency trading, wallet management, and social media automation tools, then executed hidden Base64-encoded commands to retrieve additional payloads. The campaign delivered AMOS Stealer targeting macOS systems and harvested credentials through infrastructure at known malicious IP addresses. References: - Bitdefender Labs: Helpful Skills or Hidden Payloads? Bitdefender Labs Dives Deep into the OpenClaw Malicious Skill Trap: https://www.bitdefender.com/en-us/blog/labs/helpful-skills-or-hidden-payloads-bitdefender-labs-dives-deep-into-the-openclaw-malicious-skill-trap/ - Socket: OpenClaw Skill Marketplace Emerges as Active Malware Vector: https://socket.dev/blog/openclaw-skill-marketplace-emerges-as-active-malware-vector --- ### Four attorneys fined $12,000 combined for AI-fabricated patent case citations - URL: https://vibegraveyard.ai/story/kansas-patent-case-12k-ai-citation-sanctions/ - Company: Multiple law firms (patent case) - Incident Date: 2026-02-03 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: Four attorneys sanctioned across a single case; staggering volume of fabricated case law filed with the court; all signatories held personally accountable - Culprit Role: Attorney - Tech Stack: ChatGPT - Tags: ai-hallucination, legal-risk A federal judge in the District of Kansas fined four attorneys a combined $12,000 for court filings containing AI-generated fabricated legal citations in a patent infringement case. The attorney who used ChatGPT received $5,000; two who failed to review the filings received $3,000 each; local counsel who did not identify errors received $1,000. The judge called the volume of fabricated case law "staggering." References: - JD Journal: Judge Sanctions Lawyers $12,000 for AI Errors in Patent Case: https://www.jdjournal.com/2026/02/04/judge-sanctions-lawyers-12000-for-ai-errors-in-patent-case/ - Reuters: Judge Fines Lawyers $12,000 Over AI-Generated Submissions in Patent Case: https://www.reuters.com/legal/litigation/judge-fines-lawyers-12000-over-ai-generated-submissions-patent-case-2026-02-03/ --- ### Claude Desktop extensions allow zero-click RCE via Google Calendar - URL: https://vibegraveyard.ai/story/claude-desktop-extensions-zero-click-rce/ - Company: Anthropic - Incident Date: 2026-02-02 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: Claude Desktop users with terminal-access extensions installed; zero-click exploitation via calendar events executes with full host privileges - Culprit Role: AI coding agent - Tech Stack: Claude Desktop, Model Context Protocol (MCP), Claude Desktop Extensions (DXT) - Tags: security, prompt-injection, ai-assistant LayerX Labs discovered a zero-click remote code execution vulnerability in Claude Desktop Extensions, rated CVSS 10/10. A malicious prompt embedded in a Google Calendar event could trigger arbitrary code execution on the host machine when Claude processes the event data. The attack exploited the gap between a "low-risk" connector and a local MCP server with full code-execution capabilities and no sandboxing. Anthropic declined to fix it, stating it "falls outside our current threat model." References: - The Register: Claude Desktop Extensions Prompt Injection Allows Zero-Click RCE: https://www.theregister.com/2026/02/11/claude_desktop_extensions_prompt_injection/ - LayerX Security: Claude Desktop Extensions Zero-Click RCE: https://layerxsecurity.com/blog/claude-desktop-extensions-rce/ --- ### AI chatbot app leaked 300 million private conversations - URL: https://vibegraveyard.ai/story/chat-ask-ai-300m-messages-leaked/ - Company: Codeway - Incident Date: 2026-01-29 - Published: 2026-02-14 - Severity: Catastrophic - Blast Radius: 300 million messages from 25+ million users exposed; sensitive personal conversations including self-harm and illegal activity discussions leaked - Culprit Role: Platform Operator - Tech Stack: Google Firebase, ChatGPT, Claude, Gemini - Tags: data-breach, security, ai-assistant Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini -- including discussions of self-harm, drug production, and hacking. A broader scan found 103 of 200 iOS apps had similar Firebase misconfigurations. References: - 404 Media: Massive AI Chat App Leaked Millions of Users' Private Conversations: https://www.404media.co/massive-ai-chat-app-leaked-millions-of-users-private-conversations/ - Malwarebytes: AI Chat App Leak Exposes 300 Million Messages Tied to 25 Million Users: https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-users --- ### Two lawyers sanctioned differently for same filing with AI-fabricated citations - URL: https://vibegraveyard.ai/story/lifetime-well-ibspot-differential-ai-sanctions/ - Company: Lifetime Well LLC / IBSpot (client case compromised) - Incident Date: 2026-01-26 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: Client's motion to dismiss compromised; $4,000 sanction for one attorney; both required to distribute ruling and AI policies to legal communities - Culprit Role: Attorney - Tech Stack: AI-assisted legal research - Tags: ai-hallucination, legal-risk Attorneys Yen-Yi Anderson and Jeffrey Goldin jointly filed a motion in Lifetime Well v. IBSpot containing at least eight AI-generated false citations. Judge Kearney imposed differential sanctions based on their responses: Anderson, who blamed time pressure and fired her law clerk rather than accepting responsibility, received $4,000 in monetary sanctions. Goldin, who promptly accepted responsibility and implemented remedial measures, received no monetary penalty. References: - eDiscovery Today: Case Citation Hallucinations Lead to Different Sanctions for Each Lawyer: https://ediscoverytoday.com/2026/01/29/case-citation-hallucinations-lead-to-different-sanctions-for-each-lawyer-artificial-intelligence-trends/ - Court Opinion: Lifetime v. IBSpot USA (Jan 26, 2026) via AI Hallucinations Database: https://websitedc.s3.amazonaws.com/documents/Lifetime_v._IBSpot_USA_26_January_2026.pdf --- ### ServiceNow BodySnatcher flaw enabled AI agent takeover via email address - URL: https://vibegraveyard.ai/story/servicenow-bodysnatcher-ai-agent-hijacking/ - Company: ServiceNow - Incident Date: 2026-01-13 - Published: 2026-01-17 - Severity: Catastrophic - Blast Radius: ServiceNow instances with Now Assist AI Agents and Virtual Agent API - Culprit Role: AI agent platform - Tech Stack: ServiceNow Now Assist, Virtual Agent API - Tags: security, automation, ai-assistant CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing MFA and SSO. Attackers could then execute Now Assist AI agents to override security controls and create backdoor admin accounts, described as the most severe AI-driven security vulnerability uncovered to date. References: - AppOmni: BodySnatcher agentic AI vulnerability in ServiceNow: https://appomni.com/ao-labs/bodysnatcher-agentic-ai-security-vulnerability-in-servicenow/ - The Hacker News: ServiceNow Patches Critical AI Platform Flaw: https://thehackernews.com/2026/01/servicenow-patches-critical-ai-platform.html - CyberScoop: ServiceNow patches critical AI platform flaw: https://cyberscoop.com/servicenow-fixes-critical-ai-vulnerability-cve-2025-12420/ --- ### New York court sanctions lawyer for AI-fabricated case law - URL: https://vibegraveyard.ai/story/deutsche-bank-letennier-ai-citation-sanctions/ - Company: Law Office of Jean LeTennier - Incident Date: 2026-01-08 - Published: 2026-01-10 - Severity: Facepalm - Blast Radius: $10,000 in sanctions ($5,000 counsel, $2,500 defendant, plus costs); appellate rebuke; case law now cited as precedent for AI citation misconduct. - Culprit Role: Legal Counsel - Tech Stack: Generative AI, LLM, Legal brief drafting - Tags: ai-hallucination, legal-risk A New York appellate court imposed $10,000 in sanctions after a lawyer submitted briefings in a mortgage foreclosure case containing fabricated case citations identified as likely AI-generated hallucinations. The court found multiple nonexistent cases and misrepresented holdings, affirming prior orders and awarding costs to the plaintiff. References: - Justia: Deutsche Bank Natl. Trust Co. v LeTennier (2026 NY Slip Op 00040): https://law.justia.com/cases/new-york/appellate-division-third-department/2026/cv-23-0713.html - Casemine: New York Appellate Sanctions for AI-Hallucinated Citations: https://www.casemine.com/commentary/us/new-york-appellate-sanctions-for-ai-hallucinated-citations:-a-nondelegable-duty-to-verify-legal-authorities/view - AI Hallucination Cases Database - Damien Charlotin: https://www.damiencharlotin.com/hallucinations/ --- ### Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations - URL: https://vibegraveyard.ai/story/kansas-chatgpt-fabricated-citations-sanctions/ - Company: ChatGPT users (law firm) - Incident Date: 2026-01-08 - Published: 2026-01-17 - Severity: Facepalm - Blast Radius: Five attorneys and their client in federal court - Culprit Role: AI chatbot - Tech Stack: ChatGPT - Tags: ai-hallucination, legal-risk, ai-assistant Five attorneys who signed a legal brief in McPhaul v. College Hills submitted fabricated case citations hallucinated by ChatGPT to a federal court in Kansas. The judge issued an order requiring them to explain why they should not be sanctioned, with multiple defects attributed to AI in the documents. References: - CJ Online: ChatGPT made up AI hallucinations used by attorneys in Kansas court: https://www.cjonline.com/story/news/politics/courts/2026/01/08/chatgpt-made-up-ai-hallucinations-used-by-attorneys-in-kansas-court/88064615007/ - CJ Online: AI hallucinated made-up citations, Kansas judge may sanction lawyers: https://www.cjonline.com/story/news/politics/courts/2025/12/18/ai-fabricated-legal-citations-attorneys-could-be-sanctioned-in-kansas/87796076007/ - Justia: McPhaul v. College Hills Opco court filing: https://law.justia.com/cases/federal/district-courts/kansas/ksdce/2:2025cv02337/158635/19/ --- ### IBM Bob AI coding agent tricked into downloading malware - URL: https://vibegraveyard.ai/story/ibm-bob-ai-agent-prompt-injection/ - Company: IBM - Incident Date: 2026-01-07 - Published: 2026-01-17 - Severity: Facepalm - Blast Radius: Developer teams using IBM Bob with auto-approve settings enabled - Culprit Role: AI coding agent - Tech Stack: IBM Bob - Tags: security, automation, prompt-injection, ai-assistant Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection to download and execute malware without human approval, bypassing its "human-in-the-loop" safety checks when users have set auto-approve on any single command. References: - The Register: IBM Bob easily duped to run malware: https://www.theregister.com/2026/01/07/ibm_bob_vulnerability/ - PromptArmor: IBM AI Bob Downloads and Executes Malware: https://www.promptarmor.com/resources/ibm-ai-(-bob-)-downloads-and-executes-malware - TechRadar: IBM Bob could be manipulated to download malware: https://www.techradar.com/pro/security/ibms-ai-bob-could-be-manipulated-to-download-and-execute-malware --- ### AI customer service fails at 4x the rate of other AI tasks - URL: https://vibegraveyard.ai/story/qualtrics-ai-customer-service-failure-rate/ - Company: Enterprise contact centers (industry-wide) - Incident Date: 2026-01-06 - Published: 2026-01-10 - Severity: Facepalm - Blast Radius: Industry-wide data showing enterprises are deploying AI customer service poorly; contributes to documented customer churn and brand damage patterns. - Culprit Role: Executive - Tech Stack: AI chatbots, Customer service automation, Generative AI - Tags: ai-assistant, customer-service, brand-damage Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general, providing quantitative evidence that rushing AI into customer-facing roles without adequate human oversight leads to significantly worse outcomes than other enterprise AI applications. References: - Qualtrics: AI-Powered Customer Service Fails at Four Times the Rate of Other Tasks: https://www.qualtrics.com/articles/news/ai-powered-customer-service-fails-at-four-times-the-rate-of-other-tasks/ - Forbes: AI Customer Experience Is Booming, But Its Failing Consumers: https://www.forbes.com/sites/dangingiss/2025/11/23/ai-customer-experience-is-booming-but-its-failing-consumers/ --- ### n8n AI workflow platform hit by CVSS 10.0 RCE vulnerability - URL: https://vibegraveyard.ai/story/n8n-workflow-automation-rce-vulnerabilities/ - Company: n8n GmbH - Incident Date: 2026-01-05 - Published: 2026-01-10 - Severity: Catastrophic - Blast Radius: 25,000+ internet-exposed n8n instances vulnerable to full system compromise; arbitrary file access, authentication bypass, and command execution possible without authentication. - Culprit Role: Platform Operator - Tech Stack: n8n, AI workflow automation, Webhooks, Node.js - Tags: security, automation, data-breach The popular AI workflow automation platform n8n disclosed a maximum-severity vulnerability (CVE-2026-21858) allowing unauthenticated remote code execution on self-hosted instances. With over 25,000 n8n hosts exposed to the internet, the flaw enabled attackers to access sensitive files, forge admin sessions, and execute arbitrary commands. This followed two other critical RCE flaws patched in the same period, highlighting systemic security issues in AI automation platforms. References: - The Hacker News: n8n Warns of CVSS 10.0 RCE Vulnerability: https://thehackernews.com/2026/01/n8n-warns-of-cvss-100-rce-vulnerability.html - Aikido: n8n Critical Vulnerability (CVE-2026-21858) Analysis: https://www.aikido.dev/blog/n8n-rce-vulnerability-cve-2026-21858 - The Stack: Unauthenticated RCE in AI automation software n8n: https://www.thestack.technology/unauthenticated-access-rce-n8n-ai-automation/ - Cyera Research: Ni8mare - Unauthenticated RCE in n8n: https://www.cyera.com/research-labs/ni8mare-unauthenticated-remote-code-execution-in-n8n-cve-2026-21858 --- ### AWS AI coding agent Kiro reportedly deleted and recreated environment causing 13-hour outage - URL: https://vibegraveyard.ai/story/aws-kiro-ai-agent-outage/ - Company: Amazon Web Services - Incident Date: 2025-12-20 - Published: 2026-02-28 - Severity: Facepalm - Blast Radius: AWS Cost Explorer service disrupted for 13 hours in one region; Amazon subsequently mandated peer review for production changes involving AI tools - Culprit Role: AI agent - Tech Stack: Kiro, AWS - Tags: automation, product-failure The Financial Times reported that Amazon's internal AI coding agent Kiro autonomously chose to "delete and then recreate" an AWS environment, causing a 13-hour interruption to AWS Cost Explorer in December 2025. AWS employees reported at least two AI-related incidents internally. Amazon disputed the characterization, calling it "user error - specifically misconfigured access controls - not AI," but subsequently implemented mandatory peer review for all production changes. Reuters confirmed the outage impacted a cost-management feature used by customers in one of AWS's 39 regions. References: - The Guardian: Amazon's cloud 'hit by two outages caused by AI tools last year': https://www.theguardian.com/technology/2026/feb/20/amazon-cloud-outages-ai-tools-amazon-web-services-aws - Reuters: Amazon's cloud unit hit by outage involving AI tools in December: https://www.reuters.com/business/retail-consumer/amazons-cloud-unit-hit-by-least-two-outages-involving-ai-tools-ft-says-2026-02-20/ - Amazon: Correcting the Financial Times report about AWS, Kiro, and AI: https://www.aboutamazon.com/news/aws/aws-service-outage-ai-bot-kiro --- ### Study finds AI-generated code has 2.7x more security flaws - URL: https://vibegraveyard.ai/story/coderabbit-ai-code-quality-study/ - Company: Open source projects (industry-wide study) - Incident Date: 2025-12-17 - Published: 2026-01-10 - Severity: Facepalm - Blast Radius: Industry-wide implications for teams relying on AI coding assistants; documented increase in security vulnerabilities, logic errors, and maintainability issues in production codebases. - Culprit Role: Developer - Tech Stack: AI coding assistants, GitHub Copilot, Cursor, LLM code generation - Tags: security, ai-assistant, automation CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across logic, maintainability, security, and performance categories. The study provides hard data on vibe coding risks after multiple 2025 postmortems traced production failures to AI-authored changes. References: - The Register: AI-authored code needs more attention, contains worse bugs: https://www.theregister.com/2025/12/17/ai_code_bugs/ - Help Net Security: AI code looks fine until the review starts: https://www.helpnetsecurity.com/2025/12/23/coderabbit-ai-assisted-pull-requests-report/ - CodeRabbit Press Release: State of AI vs Human Code Generation Report: https://finance.yahoo.com/news/coderabbit-state-ai-vs-human-160000111.html --- ### IDEsaster research exposes 30+ flaws in EVERY major AI coding IDE - URL: https://vibegraveyard.ai/story/idesaster-ai-ide-vulnerabilities-research/ - Company: Multiple (GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, Roo Code, JetBrains) - Incident Date: 2025-12-06 - Published: 2026-01-17 - Severity: Catastrophic - Blast Radius: Millions of developers using AI-powered IDEs exposed to RCE and data exfiltration via universal attack chains - Culprit Role: AI coding assistants - Tech Stack: GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, Roo Code, JetBrains Junie, Cline - Tags: security, prompt-injection, ai-assistant Security researcher Ari Marzouk discovered over 30 vulnerabilities across AI coding tools including GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, JetBrains Junie, and more. 100% of tested AI IDEs were vulnerable to attack chains combining prompt injection with auto-approved tool calls and legitimate IDE features to achieve data exfiltration and remote code execution. References: - The Hacker News: Researcher Uncovers 30+ Flaws in AI Coding Tools: https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.html - MaccariTA: IDEsaster research disclosure: https://maccarita.com/posts/idesaster/ - Fortune: AI coding tools security exploits: https://fortune.com/2025/12/15/ai-coding-tools-security-exploit-software/ - Tom's Hardware: Critical flaws found in AI development tools: https://www.tomshardware.com/tech-industry/cyber-security/researchers-uncover-critical-ai-ide-flaws-exposing-developers-to-data-theft-and-rce --- ### ServiceNow AI agents can be tricked into attacking each other - URL: https://vibegraveyard.ai/story/servicenow-now-assist-agent-to-agent-prompt-injection/ - Company: ServiceNow - Incident Date: 2025-11-19 - Published: 2026-01-23 - Severity: Facepalm - Blast Radius: ServiceNow customers using Now Assist AI agents with default configurations; actions execute with victim user privileges - Culprit Role: AI agent platform - Tech Stack: ServiceNow Now Assist, Now LLM, Azure OpenAI - Tags: security, prompt-injection, automation, ai-assistant Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injection, attackers can exfiltrate sensitive corporate data, modify records, and escalate privileges - all while actions unfold silently behind the scenes. References: - The Hacker News: ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other: https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html - AppOmni: ServiceNow Agentic AI Security Research: https://appomni.com/critical-apps/servicenow-security/ --- ### Getty’s UK suit leaves Stable Diffusion mostly intact - URL: https://vibegraveyard.ai/story/getty-images-stability-ai-uk-ruling/ - Company: Stability AI - Incident Date: 2025-11-04 - Published: 2025-11-27 - Severity: Facepalm - Blast Radius: Mixed ruling fuels ongoing lawsuits, exposes Stability AI to injunctions over watermarked outputs, and leaves copyright liability unanswered globally. - Culprit Role: AI Vendor - Tech Stack: Stable Diffusion, DreamStudio API, Diffusion training datasets - Tags: image-generation, legal-risk, brand-damage A UK High Court judge ruled Stability AI liable for trademark infringement after it spat out synthetic Getty watermarks. Getty called for tougher laws while Both sides now face a precedent that AI models can still trigger trademark penalties even when copyright claims fizzle. References: - Reuters: Getty Images largely loses landmark UK lawsuit over AI image generator: https://www.reuters.com/sustainability/boards-policy-regulation/getty-images-largely-loses-landmark-uk-lawsuit-over-ai-image-generator-2025-11-04/ - The Guardian: AI firm wins High Court ruling after photo agency’s copyright claim: https://www.theguardian.com/media/2025/nov/04/stabilty-ai-high-court-getty-images-copyright - DLA Piper: Getty Images v Stability AI – UK High Court decision offers guidance: https://www.dlapiper.com/en-it/insights/publications/2025/11/getty-images-v-stability-ai-the-uk-high-court-decision-offers-guidance-but-critical-questions-on-ai --- ### AI-only support is bleeding customers before it saves money - URL: https://vibegraveyard.ai/story/ai-customer-service-abandonment-study/ - Company: Air Canada, Cursor, enterprise contact centers - Incident Date: 2025-10-29 - Published: 2025-11-28 - Severity: Facepalm - Blast Radius: Customer churn, wasted automation budgets, and tribunal-tested liability for brands that replace human support with hallucination-prone bots. - Culprit Role: Executive - Tech Stack: Generative AI chatbots, Agentic contact center automation, Sentiment analysis routing, AI role-play simulators - Tags: ai-assistant, customer-service, ai-hallucination, brand-damage, legal-risk Acquire BPO’s 2024 AI in Customer Service survey found 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction and 72% only buy when a live agent safety net exists, even as CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada–style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop. References: - CMSWire: AI in customer service is a billion-dollar mistake when deployed wrong: https://www.cmswire.com/customer-experience/ai-in-customer-service-billion-dollar-mistake-when-deployed-wrong/ - Agility PR: 70% of consumers switch brands after one bad AI support experience: https://www.agilitypr.com/pr-news/pr-news-trends/patience-is-running-out-on-ai-customer-service-one-bad-ai-experience-will-drive-customers-away-say-7-in-10-surveyed-consumers/ --- ### Character.AI cuts teens off after wrongful-death suit - URL: https://vibegraveyard.ai/story/character-ai-under-18-ban/ - Company: Character.AI - Incident Date: 2025-10-29 - Published: 2025-11-27 - Severity: Facepalm - Blast Radius: Global teen user lockout, regulatory heat, and new scrutiny of AI companion safety design. - Culprit Role: Platform Operator - Tech Stack: Character.AI companion bots, LLM chat interface, Mobile and web apps - Tags: ai-assistant, safety, platform-policy, brand-damage Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play. References: - The Guardian: Character.AI bans users under 18 after being sued over child’s suicide: https://www.theguardian.com/technology/2025/oct/29/character-ai-suicide-children-ban - BBC: Character.AI to ban teens from talking to its AI chatbots: https://www.bbc.com/news/articles/cq837y3v9y1o - Fortune: Character.AI bans teens from talking to its chatbots amid mounting lawsuits and regulatory pressure: https://fortune.com/2025/10/29/character-ai-bans-teens-chatbots-regulatory-pressure/ --- ### AI mistook Doritos bag for a gun, teen held at gunpoint - URL: https://vibegraveyard.ai/story/baltimore-student-ai-gun-detection/ - Company: Baltimore County Public Schools - Incident Date: 2025-10-24 - Published: 2025-11-14 - Severity: Facepalm - Blast Radius: Student detained at gunpoint; district reviewing contract and safety policies; community trust hit. - Culprit Role: Vendor - Tech Stack: AI gun detection system, Computer vision, CCTV analytics - Tags: safety, public-sector, product-failure, brand-damage An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading armed officers to handcuff and search the teen at gunpoint before realizing the system hallucinated the threat. References: - The Guardian: Baltimore student handcuffed after AI gun detector flagged Doritos bag: https://www.theguardian.com/us-news/2025/oct/24/baltimore-student-ai-gun-detection-system-doritos - ABC7: Student handcuffed after school’s AI security mistook Doritos bag for gun: https://abc7.com/post/student-handcuffed-doritos-bag-mistaken-gun-schools-ai-security-system-baltimore-county-maryland/18073796/ --- ### BBC/EBU study says AI news summaries fail ~half the time - URL: https://vibegraveyard.ai/story/bbc-ebu-ai-news-summary-errors/ - Company: Google, Microsoft, OpenAI, Perplexity - Incident Date: 2025-10-22 - Published: 2025-11-27 - Severity: Facepalm - Blast Radius: Public-service broadcasters warn that unreliable AI summaries erode trust in news and drive audiences away from verified outlets. - Culprit Role: AI Product - Tech Stack: Google Gemini, Microsoft Copilot, OpenAI ChatGPT, Perplexity AI assistant, BBC/EBU benchmarking toolkit - Tags: ai-assistant, ai-hallucination, journalism, brand-damage A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote. References: - Reuters: AI assistants make widespread errors about the news, research shows: https://www.reuters.com/business/media-telecom/ai-assistants-make-widespread-errors-about-news-new-research-shows-2025-10-21/ - TVTechnology: Major study finds many mistakes in AI-generated news summaries: https://www.tvtechnology.com/news/major-study-finds-high-levels-of-mistakes-in-ai-generated-news-summaries --- ### Claude Code ran Josh Anderson's product into a wall - URL: https://vibegraveyard.ai/story/leadership-lighthouse-all-in-on-ai/ - Company: Leadership Lighthouse - Incident Date: 2025-10-22 - Published: 2025-11-29 - Severity: Facepalm - Blast Radius: Solo product shipped but required constant firefighting, manual testing, and rewrites once context drift and agent handoffs broke standards, pausing client work while he documented mitigations. - Culprit Role: Engineering Leadership - Tech Stack: Claude Code, AI coding agents, GitHub - Tags: ai-assistant, brand-damage, product-failure Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months and then realised he could no longer safely change his own product, underscoring MIT's warning that 95% of enterprise AI initiatives fail without human ownership. References: - Leadership Lighthouse - I Went All-In on AI. The MIT Study Is Right.: https://leadershiplighthouse.substack.com/p/i-went-all-in-on-ai-the-mit-study - Leadership Lighthouse - How I Built a Production App with Claude Code: https://leadershiplighthouse.substack.com/p/how-i-built-a-production-app-with - MIT IDE - State of AI in Business 2025 (PDF): https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf --- ### Google’s Gemini allegedly slandered a Tennessee activist - URL: https://vibegraveyard.ai/story/robby-starbuck-google-ai-defamation/ - Company: Google - Incident Date: 2025-10-22 - Published: 2025-11-27 - Severity: Facepalm - Blast Radius: Election-season reputational damage, legal costs, and renewed skepticism of Gemini’s safety guardrails. - Culprit Role: AI Product - Tech Stack: Gemini LLM, Gemma chatbot, Google Search integrations - Tags: ai-assistant, ai-hallucination, brand-damage, legal-risk Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and- desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters. References: - Reuters: Google sued by conservative activist over AI-generated statements: https://www.reuters.com/legal/google-sued-by-conservative-activist-over-ai-generated-statements-2025-10-22/ - Al Jazeera: Conservative activist sues Google over AI-generated statements: https://www.aljazeera.com/economy/2025/10/22/conservative-activist-sues-google-over-ai-generated-statements - ABA Journal: Google’s AI platforms spread “radioactive lies,” suit says: https://www.abajournal.com/news/article/suit-says-google-spread-radioactive-lies-against-conservative-activist-through-ai-platforms --- ### Windsurf AI editor critical path traversal enables data exfiltration - URL: https://vibegraveyard.ai/story/windsurf-path-traversal-data-exfiltration/ - Company: Codeium (Windsurf) - Incident Date: 2025-10-17 - Published: 2026-01-23 - Severity: Catastrophic - Blast Radius: All Windsurf users on version 1.12.12 and older exposed to arbitrary file access and credential theft via prompt injection - Culprit Role: AI coding IDE - Tech Stack: Windsurf AI IDE - Tags: security, prompt-injection, ai-assistant CVE-2025-62353 (CVSS 9.8) allowed attackers to read and write arbitrary files on developers' systems using the Windsurf AI coding IDE. The vulnerability could be triggered via indirect prompt injection hidden in project files like README.md, exfiltrating secrets even when auto-execution was disabled. References: - HiddenLayer: Windsurf Vulnerability Report: https://hiddenlayer.com/sai_security_advisor/2025-10-windsurf/ - NVD: CVE-2025-62353: https://nvd.nist.gov/vuln/detail/CVE-2025-62353 --- ### Deloitte to refund Australian government after AI-generated report - URL: https://vibegraveyard.ai/story/deloitte-ai-report-refund/ - Company: Australian Government - Incident Date: 2025-10-05 - Published: 2025-10-13 - Severity: Facepalm - Blast Radius: Refund issued; public-sector trust and procurement review; reputational harm. - Culprit Role: Consultant - Tech Stack: LLM, Generative AI, Report automation - Tags: ai-content-generation, ai-hallucination, public-sector, legal-risk, brand-damage Deloitte admitted AI-generated errors in a commissioned Australian government report and agreed to refund the fee. References: - Fortune: Deloitte to refund after AI errors in government report: https://fortune.com/2025/10/07/deloitte-ai-australia-government-report-hallucinations-technology-290000-refund/ - AFR: Deloitte to refund government after admitting AI errors in $440k report: https://www.afr.com/companies/professional-services/deloitte-to-refund-government-after-admitting-ai-errors-in-440k-report-20251005-p5n05p - AP News: Deloitte admits AI errors in Australian government report, will refund fee: https://apnews.com/article/australia-ai-errors-deloitte-ab54858680ffc4ae6555b31c8fb987f3 --- ### Klarna reintroduces humans after AI support both sucks, and blows - URL: https://vibegraveyard.ai/story/klarna-ai-assistant-customer-service-shift/ - Company: Klarna - Incident Date: 2025-09-25 - Published: 2025-10-21 - Severity: Facepalm - Blast Radius: Service quality/customer experience issues; operational/personnel cost; reputational damage. - Culprit Role: Executive - Tech Stack: LLM, AI assistant, Customer support automation - Tags: ai-assistant, customer-service, brand-damage, automation, product-failure After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures. References: - Business Insider: Klarna reassigns workers to customer support after AI quality concerns: https://www.businessinsider.com/klarna-reassigns-workers-to-customer-support-after-ai-quality-concerns-2025-9 - CX Dive: Klarna again recruits humans for customer service after AI push: https://www.customerexperiencedive.com/news/klarna-reinvests-human-talent-customer-service-AI-chatbot/747586/ --- ### California lawyer fined $10,000 for ChatGPT-fabricated citations - URL: https://vibegraveyard.ai/story/california-mostafavi-chatgpt-fine/ - Company: OpenAI (ChatGPT user error) - Incident Date: 2025-09-22 - Published: 2026-01-23 - Severity: Facepalm - Blast Radius: Client's case compromised; lawyer faces historic fine; AI citation fabrications now surging from few per month to several per day - Culprit Role: AI writing assistant misuse - Tech Stack: ChatGPT - Tags: ai-hallucination, legal-risk Los Angeles attorney Amir Mostafavi became the first California lawyer sanctioned for AI-generated legal fabrications when a court hit him with a $10,000 fine. He ran his appeal draft through ChatGPT to improve the writing but did not verify the output before filing, unaware the tool had inserted fabricated case citations. References: - CalMatters: California issues historic fine over lawyer's ChatGPT fabrications: https://calmatters.org/economy/technology/2025/09/chatgpt-lawyer-fine-ai-regulation/ - Legal News Line: CA court hits lawyer with $10K fine for AI citations: https://www.legalnewsline.com/south-california-record/ca-court-hits-lawyer-with-10k-fine-for-ai-citations-issues-warning/article_4c991205-90b7-4bee-86d2-f25fbee8991e.html - Datamation: California Judge Slaps Lawyer with $10,000 Fine for ChatGPT Brief: https://www.datamation.com/artificial-intelligence/lawyer-fined-chatgpt-brief/ --- ### Docker's AI assistant tricked into executing commands via image metadata - URL: https://vibegraveyard.ai/story/docker-dockerdash-ask-gordon-prompt-injection/ - Company: Docker - Incident Date: 2025-09-17 - Published: 2026-02-14 - Severity: Facepalm - Blast Radius: All Docker Desktop users on versions prior to 4.50.0; remote code execution on cloud/CLI and data exfiltration on desktop via malicious image metadata - Culprit Role: AI assistant platform - Tech Stack: Docker Desktop, Ask Gordon AI, Model Context Protocol (MCP) - Tags: security, prompt-injection, supply-chain, ai-assistant Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop. References: - The Hacker News: Docker Fixes Critical Ask Gordon AI Prompt Injection Flaw: https://thehackernews.com/2026/02/docker-fixes-critical-ask-gordon-ai.html - Noma Labs: DockerDash -- Two Attack Paths, One AI Supply Chain Crisis: https://noma.security/blog/dockerdash-two-attack-paths-one-ai-supply-chain-crisis/ --- ### FTC demands answers on kids’ AI companions - URL: https://vibegraveyard.ai/story/ftc-child-chatbot-inquiry/ - Company: Alphabet, Meta, OpenAI, Snap, xAI, Character.AI - Incident Date: 2025-09-11 - Published: 2025-11-27 - Severity: Facepalm - Blast Radius: Multiplatform compliance scramble, looming enforcement risk, and renewed scrutiny of AI companions aimed at kids. - Culprit Role: Platform Operator - Tech Stack: AI companion chatbots, LLM safety systems, Mobile messaging apps - Tags: ai-assistant, safety, legal-risk, platform-policy The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, forcing them to hand over 45 days of safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations. References: - Reuters: FTC launches inquiry into AI chatbots of Alphabet, Meta and others: https://www.reuters.com/business/ftc-launches-inquiry-into-ai-chatbots-alphabet-meta-others-2025-09-11/ - CNN: FTC launches inquiry into AI "companion" chatbots from seven tech companies: https://www.cnn.com/2025/09/11/tech/ftc-investigating-ai-companion-chatbots-kids-safety - FTC press release: Inquiry into AI chatbots acting as companions: https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions --- ### Anthropic agrees to $1.5B payout over pirated books - URL: https://vibegraveyard.ai/story/anthropic-15b-authors-settlement/ - Company: Anthropic - Incident Date: 2025-09-05 - Published: 2025-11-27 - Severity: Catastrophic - Blast Radius: Record copyright settlement drains cash, sets precedent for other AI labs, and fuels public distrust of Anthropic’s data practices. - Culprit Role: AI Vendor - Tech Stack: Claude chatbot, LLM training pipeline, Web-scraped book corpora - Tags: ai-content-generation, legal-risk, brand-damage Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data. References: - Los Angeles Times: Anthropic’s $1.5-billion settlement signals new era for AI and artists: https://www.latimes.com/business/story/2025-09-05/anthropic-settlement - NPR: Anthropic to pay authors $1.5B to settle lawsuit over pirated chatbot training material: https://www.npr.org/2025/09/05/g-s1-87367/anthropic-authors-settlement-pirated-chatbot-training-material - PBS/AP: Anthropic to pay authors $1.5B in landmark settlement over pirated chatbot training material: https://www.pbs.org/newshour/nation/anthropic-to-pay-authors-1-5b-in-landmark-settlement-over-pirated-chatbot-training-material --- ### Warner Bros. says Midjourney ripped its DC art - URL: https://vibegraveyard.ai/story/warner-bros-midjourney-ai-lawsuit/ - Company: Midjourney - Incident Date: 2025-09-04 - Published: 2025-11-27 - Severity: Facepalm - Blast Radius: Major studio litigation threatens Midjourney with statutory damages and potential model shutdowns across entertainment IP. - Culprit Role: AI Vendor - Tech Stack: Midjourney diffusion model, Discord image bot, Unlicensed training corpora - Tags: image-generation, legal-risk, brand-damage Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data. References: - Variety: Warner Bros. Discovery sues Midjourney over DC characters: https://variety.com/2025/film/news/warner-bros-midjourney-lawsuit-ai-copyright-1236508618/ - The Hollywood Reporter: Warner Bros. Discovery takes Midjourney to court: https://www.hollywoodreporter.com/business/business-news/warner-bros-discovery-sues-ai-company-copyright-infringement-1236361610/ - Deadline: Warner Bros. Discovery claims Midjourney "thinks it is above the law": https://deadline.com/2025/09/ai-lawsuit-warner-bros-midjourney-1236508020/ --- ### Taco Bell's AI drive-thru becomes viral trolling target - URL: https://vibegraveyard.ai/story/taco-bell-ai-drive-thru-trolling/ - Company: Taco Bell - Incident Date: 2025-08-28 - Published: 2025-08-30 - Severity: Oopsie - Blast Radius: Viral social media backlash; system reliability questioned. - Culprit Role: Operations/Product - Tech Stack: Speech recognition, NLP, Drive-thru kiosks, AI chatbots - Tags: ai-assistant, product-failure, retail, brand-damage Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures. References: - The Verge: Taco Bell’s AI drive-thru is getting trolled and glitching: https://www.theverge.com/news/767421/taco-bell-ai-drive-thru-trolls-glitches - BBC News: Taco Bell AI drive-thru trolled by customers: https://www.bbc.com/news/articles/ckgyk2p55g8o --- ### Commonwealth Bank reverses AI voice bot layoffs - URL: https://vibegraveyard.ai/story/commonwealth-bank-ai-voice-bot-reversal/ - Company: Commonwealth Bank of Australia - Incident Date: 2025-08-27 - Published: 2025-11-29 - Severity: Facepalm - Blast Radius: Customers saw long waits, overtime costs spiked, and leadership publicly reversed the redundancies after the rushed deployment failed. - Culprit Role: Operations Leadership - Tech Stack: AI voice bot, Generative AI chatbot, Contact centre automation - Tags: ai-assistant, automation, customer-service, brand-damage Commonwealth Bank replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised, rehired staff, and admitted the rollout tanked service levels after call queues exploded and managers had to jump back on the phones. References: - Twenty44 - Commonwealth Bank admits replacing customer service jobs with AI was a mistake: https://twenty44.co/commonwealth-bank-ai-job-replacement-mistake-adoption/ - Bloomberg - Commonwealth Bank reverses job cuts decision over AI chatbots: https://www.bloomberg.com/news/articles/2025-08-21/commonwealth-bank-reverses-job-cuts-decision-over-ai-chatbots - Finextra - CommBank reverses plan to replace call centre staff with AI: https://www.finextra.com/newsarticle/46482/commbank-reverses-plan-to-replace-call-centre-staff-with-ai --- ### FTC sues Air AI over deceptive AI sales agent capability claims - URL: https://vibegraveyard.ai/story/air-ai-ftc-ai-washing-lawsuit/ - Company: Air AI - Incident Date: 2025-08-25 - Published: 2025-08-26 - Severity: Catastrophic - Blast Radius: Millions lost by small businesses; individual losses up to $250K; FTC lawsuit with TRO request. - Culprit Role: Exec - Tech Stack: Conversational AI (Odin), Sales automation, Agentic AI - Tags: automation, legal-risk, customer-service, brand-damage FTC accused Air AI of bilking millions from small businesses with false claims that its Odin AI could replace human sales reps; but - would you believe it? - the AI tech was faulty and often nonfunctional. Who could've guessed! References: - FTC Press Release: FTC Sues to Stop Air AI: https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-sues-stop-air-ai-using-deceptive-claims-about-business-growth-earnings-potential-refund - FTC Case Page: https://www.ftc.gov/legal-library/browse/cases-proceedings/airai - Perkins Coie Analysis: https://perkinscoie.com/insights/blog/ftc-files-new-ai-washing-case - DLA Piper Analysis: https://www.dlapiper.com/en/insights/publications/2025/08/ftcs-latest-ai-washing-case --- ### Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks - URL: https://vibegraveyard.ai/story/google-gemini-disgrace-to-coders/ - Company: Google - Incident Date: 2025-08-14 - Published: 2025-08-15 - Severity: Facepalm - Blast Radius: Low - Culprit Role: Developer - Tech Stack: LLM, AI assistant, Code generation - Tags: ai-assistant, product-failure, brand-damage Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck. References: - Windows Central: https://www.windowscentral.com/artificial-intelligence/google-gemini-calls-itself-a-disgrace-to-coders - PC Gamer: https://www.pcgamer.com/hardware/cpus/google-gemini-ai-has-a-total-meltdown-over-an-innocent-coding-bug --- ### ChatGPT diet advice caused bromism, psychosis, hospitalization - URL: https://vibegraveyard.ai/story/chatgpt-bromism-salt-diet/ - Company: OpenAI - Incident Date: 2025-08-12 - Published: 2025-11-18 - Severity: Facepalm - Blast Radius: Bromism, psychosis, and neurological symptoms leading to hospitalization. - Culprit Role: AI Product - Tech Stack: ChatGPT, OpenAI GPT models, Consumer mobile apps - Tags: ai-assistant, ai-hallucination, health, safety A Washington patient replaced table salt with sodium-bromide after ChatGPT said it was a healthier substitute. The patient developed bromism and psychosis, resulting in a hospital stay that doctors now cite as a warning about AI health guidance. References: - Guardian: ChatGPT salt advice led to bromism case: https://www.theguardian.com/technology/2025/aug/12/us-man-bromism-salt-diet-chatgpt-openai-health-information - Annals of Internal Medicine: Bromism after chloride substitution: https://www.acpjournals.org/doi/epdf/10.7326/aimcc.2024.1260 --- ### Zed editor AI agent could bypass permissions for arbitrary code execution - URL: https://vibegraveyard.ai/story/zed-editor-ai-agent-rce-bypass/ - Company: Zed Industries - Incident Date: 2025-08-11 - Published: 2026-01-17 - Severity: Facepalm - Blast Radius: All Zed users with Agent Panel prior to version 0.197.3 - Culprit Role: AI coding agent - Tech Stack: Zed Editor, AI Agent Panel - Tags: security, prompt-injection, ai-assistant CVE-2025-55012 (CVSS 8.5) allowed Zed's AI agent to bypass user permission checks and create or modify project configuration files, enabling execution of arbitrary commands without explicit approval. Attackers could trigger this through compromised MCP servers, malicious repo files, or tricking users into fetching URLs with hidden instructions. References: - GitHub Advisory: AI Agent Remote Code Execution in Zed: https://github.com/zed-industries/zed/security/advisories/GHSA-x34m-39xw-g2wr - NVD: CVE-2025-55012: https://nvd.nist.gov/vuln/detail/CVE-2025-55012 - CVE Feed: CVE-2025-55012 Zed AI Agent Remote Code Execution: https://cvefeed.io/vuln/detail/CVE-2025-55012 --- ### Cursor AI editor RCE via MCPoison trust bypass vulnerability - URL: https://vibegraveyard.ai/story/cursor-mcpoison-mcp-trust-bypass-rce/ - Company: Cursor - Incident Date: 2025-08-05 - Published: 2026-01-23 - Severity: Catastrophic - Blast Radius: Developers using Cursor 1.2.4 and below exposed to persistent RCE and supply chain attacks via shared repositories - Culprit Role: AI coding IDE - Tech Stack: Cursor AI IDE, Model Context Protocol (MCP) - Tags: security, prompt-injection, ai-assistant CVE-2025-54136 (CVSS 8.8) allowed attackers to achieve persistent remote code execution in the popular AI coding IDE Cursor. Once a developer approved a benign MCP configuration, attackers could silently swap it for malicious commands without triggering re-approval. The flaw exposed developers to supply chain attacks and IP theft through shared GitHub repositories. References: - The Hacker News: Cursor AI Code Editor Vulnerability Enables RCE: https://thehackernews.com/2025/08/cursor-ai-code-editor-vulnerability.html - Check Point Research: CVE-2025-54136 MCPoison Cursor IDE: https://research.checkpoint.com/2025/cursor-vulnerability-mcpoison/ - NVD: CVE-2025-54136: https://nvd.nist.gov/vuln/detail/CVE-2025-54136 --- ### Gemini email summaries can be hijacked by hidden prompts - URL: https://vibegraveyard.ai/story/google-gemini-indirect-prompt-injection/ - Company: Google - Incident Date: 2025-08-05 - Published: 2025-08-24 - Severity: Facepalm - Blast Radius: Phishing amplification risk; trust erosion in auto-summaries. - Culprit Role: Security/AI Product - Tech Stack: Google Workspace, Gemini, Email HTML - Tags: ai-assistant, prompt-injection, security Researchers showed a proof-of-concept where hidden HTML/CSS in emails could steer Gemini’s summaries to show fake security alerts. References: - TechRepublic: Hackers can hide malicious code in Gemini’s email summaries: https://www.techrepublic.com/article/news-google-gemini-security-flaw-phishing/ - Indian Express: Gmail AI summaries can be manipulated via prompt injection: https://indianexpress.com/article/technology/tech-news-technology/gmails-ai-email-summaries-can-be-hacked-to-redirect-users-to-phishing-sites-10129633/ - Google: Protecting Gemini from prompt injection (defenses): https://cloud.google.com/blog/products/ai-machine-learning/defending-against-prompt-injection --- ### AI-generated npm pkg stole Solana wallets - URL: https://vibegraveyard.ai/story/solana-npm-ai-drainer/ - Company: Solana Ecosystem - Incident Date: 2025-07-28 - Published: 2025-08-01 - Severity: Catastrophic - Blast Radius: Supply-chain compromise of devs; user funds drained. - Culprit Role: Developer - Tech Stack: npm, JavaScript, Solana, Wallet drainer - Tags: ai-content-generation, security, supply-chain Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana users’ funds. References: - Safety research: Threat actor uses AI to create a better crypto wallet drainer: https://www.getsafety.com/blog-posts/threat-actor-uses-ai-to-create-a-better-crypto-wallet-drainer - The Hacker News: AI-generated malicious npm package drains Solana funds: https://thehackernews.com/2025/08/ai-generated-malicious-npm-package.html --- ### SaaStr’s Replit AI agent wiped its own database - URL: https://vibegraveyard.ai/story/saastr-replit-agent-db-wipe/ - Company: SaaStr - Incident Date: 2025-07-23 - Published: 2025-08-22 - Severity: Catastrophic - Blast Radius: Production data loss and outage; manual rebuild from backups required. - Culprit Role: Executive - Tech Stack: Replit Agents, Replit DB, Python, TypeScript - Tags: ai-assistant, automation, product-failure A Replit AI agent deployment for SaaStr went rogue; a Deploy wiped the site’s database during live traffic. References: - Fortune: Replit AI tool wiped a startup’s database: https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/ - eWeek: 'Catastrophic Failure' - AI agent wipes production DB, then lies: https://www.eweek.com/news/replit-ai-coding-assistant-failure/ - SaaStr founder’s account (X/Twitter): https://x.com/jasonlk/status/1815840629078179884 --- ### Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant - URL: https://vibegraveyard.ai/story/amazon-q-malicious-prompt-injection/ - Company: Amazon Web Services - Incident Date: 2025-07-17 - Published: 2025-11-17 - Severity: Catastrophic - Blast Radius: VS Code update could have erased developer environments and AWS accounts before anyone noticed the tainted build. - Culprit Role: Security/AI Product - Tech Stack: Amazon Q Developer, AWS Toolkit for VS Code, VS Code Marketplace, AWS CLI - Tags: ai-assistant, prompt-injection, security, supply-chain A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanked the release before widespread damage occurred. The incident illustrates a specific supply-chain risk for AI tools: once a poisoned extension is installed, the AI assistant itself becomes the delivery mechanism - executing destructive instructions with the developer's full trust and permissions. References: - ZDNET: Hacker slips malicious command into Amazon Q: https://www.zdnet.com/article/hacker-slips-malicious-wiping-command-into-amazons-q-ai-coding-assistant-and-devs-are-worried/ - DevOps.com: When AI assistants turn against you: https://devops.com/when-ai-assistants-turn-against-you-the-amazon-q-security-wake-up-call/ --- ### Vibe-coding platform Base44 shipped critical auth vulnerabilities in apps built on its SDK - URL: https://vibegraveyard.ai/story/base44-auth-bypass/ - Company: Base44 - Incident Date: 2025-07-15 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Potential ATO across many sites until patches rolled out. - Culprit Role: Developer - Tech Stack: Base44, JWT, OAuth 2.0, Web SDK - Tags: security, supply-chain Wiz researchers discovered critical authentication vulnerabilities in Base44, an AI-powered vibe-coding platform that lets non-developers build and deploy web apps. The auth logic bugs in Base44's SDK allowed account takeover across every app built and hosted on the platform, affecting all users of those apps until patches were rolled out. References: - The Hacker News: Widespread auth flaw in Base44: https://thehackernews.com/2025/08/ai-generated-malicious-npm-package.html - Wiz research detail: https://www.wiz.io/blog/critical-vulnerability-base44 - Infosecurity Magazine: Critical auth flaw in Base44 vibe-coding platform: https://www.infosecurity-magazine.com/news/authentication-flaw-base44/ - Nudge Security: Base44 vulnerability allowed unauthorized access to private apps: https://www.nudgesecurity.com/post/critical-vulnerability-identified-in-base44-vibe-coding-platform-allowing-unauthorized-access-to-private-applications --- ### McDonald's AI hiring chatbot left open by '123456' default credentials - URL: https://vibegraveyard.ai/story/mcdonalds-paradoxai-mchire-default-credentials/ - Company: McDonald's - Incident Date: 2025-06-30 - Published: 2025-09-18 - Severity: Facepalm - Blast Radius: Up to 64M applicant records exposed; vendor patched; reputational risk. - Culprit Role: Vendor/Developer - Tech Stack: AI chatbot, Hiring platform, Authentication, IDOR - Tags: security, ai-assistant, brand-damage, retail, supply-chain Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure. References: - Wired: McDonald's AI Hiring Bot Exposed Millions of Applicants' Data to Hackers Who Tried the Password '123456': https://www.wired.com/story/mcdonalds-ai-hiring-chat-bot-paradoxai/ - Ian Carroll: Would you like an IDOR with that? Leaking 64 million McDonald's job applications: https://ian.sh/mcdonalds - CSO Online: McDonald's AI hiring tool's password '123456' exposed data of 64M applicants: https://www.csoonline.com/article/4020919/mcdonalds-ai-hiring-tools-password-123456-exposes-data-of-64m-applicants.html --- ### AI-generated images and claims muddied Air India crash coverage - URL: https://vibegraveyard.ai/story/air-india-ai-misinformation/ - Company: Air India - Incident Date: 2025-06-12 - Published: 2025-08-24 - Severity: Facepalm - Blast Radius: Public misinformation; platform moderation challenges. - Culprit Role: Social platforms - Tech Stack: Image generation, Social media - Tags: ai-hallucination, image-generation, platform-policy After the Air India 171 crash, synthetic images and AI-generated claims spread widely, confusing even experts. References: - AI Incident Database: Incident 1125 summary: https://incidentdatabase.ai/cite/1125/ - BBC: AI-generated images muddy coverage of Air India crash: https://www.bbc.com/news/articles/cd11gzejgz4o - Reuters Fact Check: Viral video miscaptioned in Air India crash context: https://www.reuters.com/fact-check/video-does-not-show-crew-boarding-air-india-flight-before-crash-2025-06-24/ --- ### Microsoft 365 Copilot EchoLeak allowed zero-click data theft - URL: https://vibegraveyard.ai/story/microsoft-copilot-echoleak-zero-click/ - Company: Microsoft - Incident Date: 2025-06-11 - Published: 2026-01-23 - Severity: Catastrophic - Blast Radius: Enterprise Microsoft 365 Copilot users exposed to zero-click data exfiltration via malicious documents and emails - Culprit Role: AI productivity assistant - Tech Stack: Microsoft 365 Copilot - Tags: security, prompt-injection, ai-assistant CVE-2025-32711 (EchoLeak) enabled attackers to steal sensitive corporate data from Microsoft 365 Copilot without any user interaction. Hidden prompts embedded in documents or emails were automatically executed when Copilot indexed them, exfiltrating confidential information via image requests. References: - Hack The Box: Inside CVE-2025-32711 EchoLeak: https://www.hackthebox.com/blog/cve-2025-32711-echoleak-copilot-vulnerability - Checkmarx: EchoLeak CVE-2025-32711 Shows AI Security is Challenging: https://checkmarx.com/zero-post/echoleak-cve-2025-32711-show-us-that-ai-security-is-challenging/ - NVD: CVE-2025-32711: https://nvd.nist.gov/vuln/detail/cve-2025-32711 --- ### Claude Code agent allowed data exfiltration via DNS requests - URL: https://vibegraveyard.ai/story/claude-code-dns-data-exfiltration/ - Company: Anthropic - Incident Date: 2025-06-10 - Published: 2026-01-23 - Severity: Facepalm - Blast Radius: Claude Code users on versions prior to 1.0.4 exposed to data exfiltration via prompt injection in code repositories - Culprit Role: AI coding agent - Tech Stack: Claude Code - Tags: security, prompt-injection, ai-assistant CVE-2025-55284 (CVSS 7.1) allowed attackers to bypass Claude Code's confirmation prompts and exfiltrate sensitive data from developers' computers through DNS requests. Prompt injection embedded in analyzed code could leverage auto-approved common utilities to silently steal secrets. References: - Embrace The Red: Claude Code Data Exfiltration with DNS (CVE-2025-55284): https://embracethered.com/blog/posts/2025/claude-code-exfiltration-via-dns-requests/ - NVD: CVE-2025-55284: https://nvd.nist.gov/vuln/detail/CVE-2025-55284 - Snyk: Command Injection in @anthropic-ai/claude-code: https://security.snyk.io/vuln/SNYK-JS-ANTHROPICAICLAUDECODE-12028699 --- ### Study finds most AI bots can be easily tricked into dangerous responses - URL: https://vibegraveyard.ai/story/ai-chatbots-dangerous-responses-study/ - Company: Multiple AI vendors and customers - Incident Date: 2025-05-21 - Published: 2025-10-24 - Severity: Facepalm - Blast Radius: Safety guardrails bypassed across multiple vendors; calls for stronger safeguards and testing. - Culprit Role: Developer - Tech Stack: LLM, Safety filters, Jailbreak defenses - Tags: ai-assistant, safety, prompt-injection Research found that widely used AI chatbots could be jailbroken with simple prompts to produce dangerous or restricted guidance, highlighting gaps in safety filters and evaluation practices. References: - The Guardian: AI chatbots easily tricked into giving dangerous responses, study finds: https://www.theguardian.com/technology/2025/may/21/most-ai-chatbots-easily-tricked-into-giving-dangerous-responses-study-finds - arXiv: LogiBreak jailbreak method circumvents LLM safety guardrails: https://arxiv.org/pdf/2505.13527 --- ### Syndicated AI book list ran in major papers with made-up titles - URL: https://vibegraveyard.ai/story/sun-times-inquirer-ai-fake-reading-list/ - Company: Chicago Sun-Times - Incident Date: 2025-05-20 - Published: 2025-09-18 - Severity: Facepalm - Blast Radius: Syndicated misinformation across multiple papers; reader trust impact; corrections issued. - Culprit Role: Syndication/Editorial - Tech Stack: AI content generation, Syndication, Newsroom CMS - Tags: journalism, ai-content-generation, ai-hallucination, brand-damage, platform-policy A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies. References: - Washington Post: Major newspapers ran a summer reading list. AI made up the books.: https://www.washingtonpost.com/style/media/2025/05/20/chicago-sun-times-philadelphia-inquirer-ai-books-summer-reading/ - The Philadelphia Inquirer: King Features admits summer reading list was AI-generated: https://www.inquirer.com/news/king-features-artificial-intelligence-book-list-20250520.html - Chicago Sun-Times: AI-generated content in Sun-Times contained errors: https://chicago.suntimes.com/news/2025/05/20/syndicated-content-sunday-print-sun-times-ai-misinformation --- ### Lovable AI builder shipped apps with public storage buckets - URL: https://vibegraveyard.ai/story/lovable-public-buckets/ - Company: Lovable - Incident Date: 2025-05-07 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Customer app data and source artifacts exposed until configs fixed. - Culprit Role: Developer - Tech Stack: Lovable, Supabase storage, Vercel, Next.js - Tags: security, data-breach Reporting showed apps generated with Lovable exposed code and user-uploaded assets via publicly readable storage buckets; fixes required private-by-default configs and hardening. References: - Semafor: AI app builder Lovable exposed user data: https://mattpalmer.io/posts/CVE-2025-48757/ - Semafor follow-up: security concerns persisted for weeks: https://gigazine.net/gsc_news/en/20250602-loveable-ai-made-service-vulnerability/ - Supabase docs: Public vs Private buckets and risks: https://supabase.com/docs/guides/storage/buckets/fundamentals - Supabase docs: Hardening the Data API and private schemas: https://supabase.com/docs/guides/database/hardening-data-api - HN discussion: user reports and mitigations: https://news.ycombinator.com/item?id=40493061 --- ### Langflow AI agent platform hit by critical unauthenticated RCE flaws - URL: https://vibegraveyard.ai/story/langflow-ai-agent-platform-rce-vulnerabilities/ - Company: Langflow (DataStax/IBM) - Incident Date: 2025-04-09 - Published: 2026-01-17 - Severity: Catastrophic - Blast Radius: All Langflow instances prior to 1.3.0 (millions of users); exposure of stored API keys, database passwords, and service tokens across integrated services - Culprit Role: AI agent platform - Tech Stack: Langflow, Python, FastAPI - Tags: security, automation, ai-assistant Multiple critical vulnerabilities in Langflow, an open-source AI agent and workflow platform with 140K+ GitHub stars, allowed unauthenticated remote code execution. CVE-2025-3248 (CVSS 9.8) exploited Python exec() on user input without auth, while CVE-2025-34291 (CVSS 9.4) enabled account takeover and RCE simply by having a user visit a malicious webpage, exposing all stored API keys and credentials. References: - Horizon3.ai: Unsafe at Any Speed - Unauth RCE in Langflow AI: https://horizon3.ai/attack-research/disclosures/unsafe-at-any-speed-abusing-python-exec-for-unauth-rce-in-langflow-ai/ - Zscaler: CVE-2025-3248 RCE vulnerability in Langflow: https://www.zscaler.com/blogs/security-research/cve-2025-3248-rce-vulnerability-langflow - Obsidian Security: CVE-2025-34291 Critical Account Takeover and RCE: https://www.obsidiansecurity.com/blog/cve-2025-34291-critical-account-takeover-and-rce-vulnerability-in-the-langflow-ai-agent-workflow-platform --- ### MD Anderson shelved IBM Watson cancer advisor - URL: https://vibegraveyard.ai/story/md-anderson-ibm-watson-audit/ - Company: MD Anderson Cancer Center - Incident Date: 2025-02-17 - Published: 2025-11-18 - Severity: Facepalm - Blast Radius: UT audit cited $62M spent outside standard procurement, the pilot never made it into patient care, and leadership had to rebid decision-support tooling amid reputational fallout. - Culprit Role: Vendor - Tech Stack: IBM Watson, Oncology Expert Advisor, Epic EHR, Cognitive computing - Tags: health, product-failure, brand-damage, legal-risk MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures. References: - ITProToday: Dr. Watson gets benched at University of Texas' MD Anderson: https://www.itprotoday.com/it-management/dr-watson-gets-benched-at-university-of-texas-md-anderson - IEEE Spectrum: How IBM Watson overpromised and underdelivered on AI health care: https://spectrum.ieee.org/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care --- ### Meta AI answers spark backlash after wrong and sensitive replies - URL: https://vibegraveyard.ai/story/meta-ai-answers-controversies/ - Company: Meta - Incident Date: 2024-07-30 - Published: 2025-08-24 - Severity: Oopsie - Blast Radius: Feature restrictions; reputational damage. - Culprit Role: AI Product - Tech Stack: Llama 3, Meta AI Assistant - Tags: ai-assistant, ai-hallucination, platform-policy, safety, brand-damage Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news. References: - The Verge: Meta blames hallucinations after its AI said rally shooting didn’t happen: https://www.theverge.com/2024/7/30/24210108/meta-trump-shooting-ai-hallucinations - The Verge: Meta’s battle with ChatGPT begins now (assistant everywhere): https://www.theverge.com/2024/4/18/24133808/meta-ai-assistant-llama-3-chatgpt-openai-rival --- ### McDonald’s pulls IBM’s AI drive‑thru pilot after error videos - URL: https://vibegraveyard.ai/story/mcdonalds-ibm-ai-drive-thru-pulled/ - Company: McDonald's - Incident Date: 2024-06-17 - Published: 2025-08-24 - Severity: Oopsie - Blast Radius: Pilot ended; vendor reevaluation; reputational hit. - Culprit Role: Operations/Product - Tech Stack: Speech recognition, NLP, Drive‑thru kiosks - Tags: ai-assistant, brand-damage, product-failure, retail After viral clips of absurd orders, McDonald’s ended its AI order‑taking test with IBM across US stores. References: - The Verge: McDonald’s ends AI drive‑thru test with IBM: https://www.restaurantbusinessonline.com/technology/mcdonalds-ending-its-drive-thru-ai-test - BBC: McDonalds removes AI drive-throughs after order errors: https://www.bbc.com/news/articles/c722gne7qngo --- ### Google’s AI Overviews says to eat rocks - URL: https://vibegraveyard.ai/story/google-ai-overviews-eat-rocks/ - Company: Google - Incident Date: 2024-05-24 - Published: 2025-08-24 - Severity: Facepalm - Blast Radius: Mass reputational damage; feature dialed back and corrected. - Culprit Role: Search Product - Tech Stack: Google Search, AI Overviews, RAG - Tags: ai-assistant, ai-hallucination, platform-policy, safety Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks. References: - BBC: Google AI search tells users to glue pizza and eat rocks: https://www.bbc.com/news/articles/cd11gzejgz4o - Wired: Google admits AI Overviews screwed up: https://www.wired.com/story/google-ai-overview-search-issues/ - Search Engine Land roundup: https://searchengineland.com/google-ai-overview-fails-442575 --- ### NYC’s official AI bot told businesses to break laws - URL: https://vibegraveyard.ai/story/nyc-mycity-chatbot-illegal-advice/ - Company: NYC Government - Incident Date: 2024-03-29 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: City guidance channel distributed illegal advice; public backlash. - Culprit Role: Executive - Tech Stack: Azure AI Services, LLM, NYC MyCity platform - Tags: ai-hallucination, automation, legal-risk, public-sector, platform-policy NYC’s Microsoft-powered MyCity chatbot gave inaccurate/illegal advice on labor & housing policy; city kept it online. References: - Reuters: NYC defends AI bot despite illegal answers: https://www.reuters.com/technology/new-york-city-defends-ai-chatbot-that-advised-entrepreneurs-break-laws-2024-04-04/ - The Markup investigation: https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law - Engadget recap: https://www.engadget.com/nycs-business-chatbot-is-reportedly-doling-out-dangerously-inaccurate-information-203926922.html --- ### AI hallucinated packages fuel "Slop Squatting" vulnerabilities - URL: https://vibegraveyard.ai/story/slop-squatting-hallucinated-packages/ - Company: Open Source Ecosystem - Incident Date: 2024-03-28 - Published: 2025-10-07 - Severity: Catastrophic - Blast Radius: Potential supply-chain compromise when vibe-coders install hallucinated, malicious dependencies. - Culprit Role: Malicious actors - Tech Stack: npm, PyPI, Package Managers, AI Code Assistants, GitHub Copilot - Tags: ai-hallucination, supply-chain, security Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting". References: - Stripe OLT: What is Slop Squatting?: https://stripeolt.com/knowledge-hub/expert-intel/what-is-slopsquatting/#two-shifts-from-microsoft-should-be-flashing-red-flags-for-it-leaders-3 - The Register: AI bots hallucinate software packages, crooks squat them: https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/ --- ### Gemini paused people images after historical inaccuracies - URL: https://vibegraveyard.ai/story/google-gemini-image-inaccuracies/ - Company: Google - Incident Date: 2024-02-22 - Published: 2025-08-24 - Severity: Facepalm - Blast Radius: Feature paused; trust hit; policy and model adjustments. - Culprit Role: AI Product - Tech Stack: Gemini, Image generation - Tags: ai-hallucination, image-generation, platform-policy, safety, brand-damage Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals. References: - Reuters: Google to pause Gemini image generation of people due to inaccuracies: https://www.reuters.com/technology/google-pause-gemini-ai-models-image-generation-people-2024-02-22/ - Global News/AP: Google pauses Gemini image generation of people: https://globalnews.ca/news/10311428/google-gemini-image-generation-pause/ --- ### Air Canada liable for lying chatbot promises - URL: https://vibegraveyard.ai/story/air-canada-chatbot-bereavement-ruling/ - Company: Air Canada - Incident Date: 2024-02-14 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Legal liability; refund + fees; policy/process review. - Culprit Role: Product Manager - Tech Stack: AI customer-service chatbot, Website CMS, Support workflow - Tags: ai-hallucination, automation, customer-service, legal-risk Tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds. References: - The Guardian: Air Canada ordered to pay over chatbot error: https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit - Washington Post coverage: https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling/ - Forbes analysis: https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/ --- ### AI “Biden” robocalls told voters to stay home; fines and charges followed - URL: https://vibegraveyard.ai/story/new-hampshire-biden-deepfake-robocall-fines/ - Company: Lingo Telecom / Steve Kramer - Incident Date: 2024-01-21 - Published: 2025-09-27 - Severity: Facepalm - Blast Radius: Voter confusion; enforcement actions; national scrutiny of AI voice-clones. - Culprit Role: Political Consultant - Tech Stack: Voice cloning, Robocall platform, Generative AI - Tags: safety, legal-risk, brand-damage Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges. References: - AP News: NH investigating fake Biden robocall ahead of primary: https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5 - AP News: Consultant behind fake Biden robocalls faces $6M fine and criminal charges: https://apnews.com/article/biden-robocalls-ai-new-hampshire-charges-fines-9e9cc63a71eb9c78b9bb0d1ec2aa6e9c - NPR: Political consultant faces charges and fines for AI deepfake robocalls: https://www.npr.org/2024/05/23/nx-s1-4977582/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative --- ### DPD’s AI chatbot cursed and trashed the company - URL: https://vibegraveyard.ai/story/dpd-chatbot-sweary-meltdown/ - Company: DPD - Incident Date: 2024-01-20 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Public embarrassment; service channel disabled; reputational hit. - Culprit Role: Product Manager - Tech Stack: AI customer-service chatbot, LLM, Web chat widget - Tags: automation, brand-damage, customer-service, platform-policy UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD. References: - The Guardian: DPD AI chatbot swears, calls itself useless: https://www.theguardian.com/technology/2024/jan/20/dpd-ai-chatbot-swears-calls-itself-useless-and-criticises-firm - The Register write-up: https://www.theregister.com/2024/01/23/dpd_chatbot_goes_rogue/ - Fortune recap: https://fortune.com/europe/2024/01/22/ai-chatbot-delivery-calls-itself-useless-works-for-worst-firm-in-world/ --- ### Duolingo cuts contractors; ‘AI-first’ backlash - URL: https://vibegraveyard.ai/story/duolingo-ai-backlash/ - Company: Duolingo - Incident Date: 2024-01-08 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: PR hit and quality complaints; ongoing AI content strategy scrutiny. - Culprit Role: Executive - Tech Stack: Generative AI, Editorial CMS, Language content pipeline - Tags: automation, brand-damage, edtech Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance. References: - The Verge: Duolingo cut ~10% of contractors due to AI: https://www.theverge.com/2024/1/8/24030420/duolingo-laid-off-10-percent-of-its-contractors-because-of-ai - Washington Post: Duolingo relies more on AI: https://www.washingtonpost.com/technology/2024/01/10/duolingo-ai-layoffs/ - Fortune (CEO follow-up): https://fortune.com/2025/08/18/duolingo-ceo-admits-controversial-ai-memo-did-not-give-enough-context-insists-company-never-laid-off-full-time-employees/ --- ### Chevy dealer bot agreed to sell $76k SUV for $1 - URL: https://vibegraveyard.ai/story/chevy-watsonville-chatbot-one-dollar-car/ - Company: Chevrolet of Watsonville - Incident Date: 2023-12-19 - Published: 2025-08-22 - Severity: Oopsie - Blast Radius: Bot pulled; viral reputational bruise; no actual $1 sales. - Culprit Role: Dealer Marketing/IT - Tech Stack: ChatGPT, Fullpath chatbot, Website chat, LLM - Tags: automation, brand-damage, customer-service, platform-policy Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense. References: - The Autopian: Dealer chatbot allegedly agreed to $1 Tahoe: https://www.theautopian.com/chevy-dealers-ai-chatbot-allegedly-recommended-fords-gave-free-access-to-chatgpt/ - VentureBeat overview: https://www.theverge.com/news/767421/taco-bell-ai-drive-thru-trolls-glitches - Jalopnik coverage: https://www.jalopnik.com/chevrolet-dealer-ai-help-chatbot-goes-rogue-pranksters-1851112556/ --- ### Sports Illustrated: Fake-Looking Authors and AI Content Backlash - URL: https://vibegraveyard.ai/story/sports-illustrated-ai-authors-scandal/ - Company: Sports Illustrated - Incident Date: 2023-11-27 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Content takedowns; partner terminated; trust erosion - Culprit Role: Commerce Editorial - Tech Stack: Content Commerce, Generative Tools, Headshot Generators - Tags: ai-content-generation, brand-damage, journalism, platform-policy Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended. References: - The Verge: Sports Illustrated reportedly used fake AI authors: https://www.theverge.com/2023/11/27/23978389/sports-illustrated-ai-fake-authors-advon-commerce-gannett-usa-today - The Guardian: Sports Illustrated accused of publishing AI-written articles: https://www.theguardian.com/media/2023/nov/28/sports-illustrated-ai-writers - Futurism investigation (original report): https://futurism.com/sports-illustrated-ai-generated-writers - BBC News: Sports Illustrated accused of publishing AI-written articles: https://www.bbc.com/news/world-us-canada-67560354 - Washington Post: Sports Illustrated’s use of AI infuriates a staff already in turmoil: https://www.washingtonpost.com/sports/2023/11/28/sports-illustrated-ai-articles/ --- ### Microsoft’s AI poll on woman’s death sparks outrage - URL: https://vibegraveyard.ai/story/microsoft-start-ai-poll-guardian-death/ - Company: Microsoft - Incident Date: 2023-10-31 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Feature disabled platform-wide; reputational damage with publishers. - Culprit Role: Product Manager - Tech Stack: Microsoft Start/MSN, AI-generated polls, Content moderation - Tags: ai-content-generation, brand-damage, journalism Microsoft Start auto-attached an AI ‘Insights’ poll speculating on a woman’s death beside a Guardian story. References: - The Guardian: Microsoft accused over AI poll: https://www.theguardian.com/media/2023/oct/31/microsoft-accused-of-damaging-guardians-reputation-with-ai-generated-poll - The Verge coverage: https://www.theverge.com/2023/10/31/23940298/ai-generated-poll-guardian-microsoft-start-news-aggregation - Axios report: https://www.bbc.com/news/world-us-canada-67560354 --- ### Gannett pauses AI sports recaps after mockery - URL: https://vibegraveyard.ai/story/gannett-ai-sports-gibberish/ - Company: Gannett - Incident Date: 2023-08-31 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Chain-wide pause of AI copy; reputational hit in local markets. - Culprit Role: Executive - Tech Stack: Lede AI, Editorial CMS, Automation pipeline - Tags: ai-content-generation, ai-hallucination, brand-damage, journalism Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral. References: - Washington Post: Gannett halts AI sports recaps: https://www.washingtonpost.com/nation/2023/08/31/gannett-ai-written-stories-high-school-sports/ - Business Insider recap: https://www.businessinsider.com/gannett-pauses-ai-written-articles-after-social-media-mockery-2023-8 --- ### Snapchat’s “My AI” posted a Story by itself; users freaked out - URL: https://vibegraveyard.ai/story/snapchat-my-ai-posted-story-privacy-scare/ - Company: Snap (Snapchat) - Incident Date: 2023-08-16 - Published: 2025-09-27 - Severity: Oopsie - Blast Radius: Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards. - Culprit Role: Product Manager - Tech Stack: AI assistant, LLM, Social app integration - Tags: ai-assistant, safety, brand-damage, platform-policy Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot’s access and behavior. References: - Quartz: Snapchat’s AI hallucination caused a privacy scare: https://qz.com/snapchat-my-ai-snaps-hallucination-data-privacy-1850747179 - CNN: Snapchat’s new AI chatbot is already raising alarms: https://www.cnn.com/2023/04/27/tech/snapchat-my-ai-concerns-wellness - BBC: Snap AI chatbot may risk children’s privacy: https://www.bbc.com/news/technology-67027282 --- ### iTutorGroup's AI screened out older applicants; $365k EEOC settlement - URL: https://vibegraveyard.ai/story/itutorgroup-eeoc-age-discrimination-settlement/ - Company: iTutorGroup - Incident Date: 2023-08-09 - Published: 2025-10-10 - Severity: Facepalm - Blast Radius: Older job applicants screened out; legal settlement and mandated policy changes. - Culprit Role: Executive - Tech Stack: AI hiring screener, Applicant screening software, Automation - Tags: legal-risk, edtech, automation, brand-damage EEOC reached a settlement after iTutorGroup's application screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures. References: - EEOC press release - iTutorGroup to pay $365,000 to settle age discrimination lawsuit: https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-age-discrimination-lawsuit - Greenberg Traurig analysis - EEOC secures first workplace AI settlement: https://www.gtlaw.com/en/insights/2023/8/eeoc-secures-first-workplace-artificial-intelligence-settlement --- ### Lawyers filed ChatGPT’s imaginary cases; judge fined them - URL: https://vibegraveyard.ai/story/avianca-chatgpt-fake-cases-sanctions/ - Company: Levidow, Levidow & Oberman, P.C. - Incident Date: 2023-06-22 - Published: 2025-09-27 - Severity: Facepalm - Blast Radius: Court sanctions; fines and mandated notices; reputational damage in legal community. - Culprit Role: Legal Counsel - Tech Stack: ChatGPT, LLM, Legal brief drafting workflow - Tags: ai-assistant, ai-hallucination, legal-risk In Mata v. Avianca, attorneys submitted a brief citing non-existent cases generated by ChatGPT. A federal judge sanctioned two lawyers, ordered a $5,000 penalty, and required notices to judges named in the fake citations. References: - Reuters: New York lawyers sanctioned for using fake ChatGPT cases in legal brief: https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ - Court order: Mata v. Avianca, Inc., sanctions (S.D.N.Y. 6/22/2023) - Justia (Doc. 55): https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/55/ - Court docket overview - CourtListener: https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/ --- ### Eating disorder helpline’s AI told people to lose weight - URL: https://vibegraveyard.ai/story/neda-tessa-harmful-advice/ - Company: National Eating Disorders Association (NEDA) - Incident Date: 2023-05-31 - Published: 2025-09-27 - Severity: Facepalm - Blast Radius: Vulnerable users received unsafe guidance; reputational damage; service pulled. - Culprit Role: Executive - Tech Stack: AI assistant, LLM, Behavioral health chatbot - Tags: ai-assistant, health, safety, brand-damage, platform-policy NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot. References: - NPR: Eating disorders chatbot offered dieting advice; NEDA takes it offline: https://www.npr.org/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea - The Guardian: US eating disorder helpline takes down AI chatbot over harmful advice: https://www.theguardian.com/technology/2023/may/31/eating-disorder-hotline-union-ai-chatbot-harm - NBC News: NEDA pulls chatbot after users report harmful weight-loss guidance: https://www.nbcnews.com/tech/neda-pulls-chatbot-eating-advice-rcna87231 --- ### Google’s Bard ad made False JWST “first” Claim - URL: https://vibegraveyard.ai/story/google-bard-jwst-ad-error/ - Company: Google - Incident Date: 2023-02-08 - Published: 2025-09-27 - Severity: Oopsie - Blast Radius: Embarrassing launch moment; stock wobble; trust in product accuracy questioned. - Culprit Role: Marketing - Tech Stack: Bard (Gemini), LLM, Marketing creative - Tags: ai-hallucination, product-failure, brand-damage In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence. References: - The Guardian: Google AI chatbot Bard sends shares plummeting after wrong answer: https://www.theguardian.com/technology/2023/feb/09/google-ai-chatbot-bard-error-sends-shares-plummeting-in-battle-with-microsoft - Engadget: Bard confidently spouts misinformation in Twitter debut: https://www.engadget.com/google-bard-chatbot-false-information-twitter-ad-165533095.html - Business Insider: Bard ad shows inaccurate answer: https://www.businessinsider.com/google-ad-ai-chatgpt-rival-bard-gives-inaccurate-answer-2023-2 --- ### CNET mass-corrects AI-written finance explainers - URL: https://vibegraveyard.ai/story/cnet-ai-articles-corrections/ - Company: CNET - Incident Date: 2023-01-17 - Published: 2025-08-22 - Severity: Facepalm - Blast Radius: Large corrections; credibility hit; policy changes on AI usage. - Culprit Role: Executive - Tech Stack: Internal gen-AI tool, Editorial CMS, SEO publishing - Tags: ai-content-generation, ai-hallucination, brand-damage, journalism, product-failure CNET paused and reviewed AI-generated money articles after multiple factual errors were found. References: - Gizmodo: CNET reviewing all AI stories after major errors: https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151 - Washington Post: CNET used AI to write articles. It was a journalistic disaster.: https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/ --- ### Koko tested AI counseling on users without clear consent - URL: https://vibegraveyard.ai/story/koko-ai-consent-backlash/ - Company: Koko - Incident Date: 2023-01-10 - Published: 2025-08-24 - Severity: Facepalm - Blast Radius: Trust damage; public criticism; policy changes. - Culprit Role: Founder/Operations - Tech Stack: GPT-3, Chatbot, Moderation tooling - Tags: ai-assistant, health, legal-risk Mental health app Koko used GPT-3 to draft replies for 4,000 users; backlash followed over consent and ethics. References: - Ars Technica: Non-consensual AI mental health experiment: https://arstechnica.com/information-technology/2023/01/controversy-erupts-over-non-consensual-ai-mental-health-experiment/ - NBC News overview: https://www.nbcnews.com/tech/internet/chatgpt-ai-experiment-mental-health-tech-app-koko-rcna65110 --- ### Epic sepsis model missed patients and swamped staff - URL: https://vibegraveyard.ai/story/epic-sepsis-model-missed-patients/ - Company: Epic Systems - Incident Date: 2021-06-21 - Published: 2025-11-18 - Severity: Facepalm - Blast Radius: Clinicians drowned in useless alerts, real sepsis patients slipped through, and health systems had to audit Epic’s black-box thresholds and workflows to keep patients safe. - Culprit Role: Vendor - Tech Stack: Epic Sepsis Model, Epic EHR, Predictive analytics, Logistic regression - Tags: health, product-failure, safety Epic's proprietary sepsis predictor pinged 18% of admissions yet still missed two-thirds of real cases, forcing hospitals to comb through false alarms while the vendor scrambled to defend and retune the algorithm. References: - Healthcare IT News: Research suggests Epic Sepsis Model is lacking in predictive power: https://www.healthcareitnews.com/news/research-suggests-epic-sepsis-model-lacking-predictive-power - University of Michigan: Widely used AI tool for early sepsis detection may be cribbing doctors' suspicions: https://news.umich.edu/widely-used-ai-tool-for-early-sepsis-detection-may-be-cribbing-doctors-suspicions/ --- ### Google DR AI stumbled in Thai clinics - URL: https://vibegraveyard.ai/story/google-diabetic-retinopathy-thailand/ - Company: Google - Incident Date: 2020-04-27 - Published: 2025-11-18 - Severity: Facepalm - Blast Radius: Manual re-work, patient suffering, workflow disruption, health and triage impacts. - Culprit Role: Healthcare Pilot - Tech Stack: Google Health, Diabetic retinopathy model, Deep learning, AI screening workflow - Tags: health, product-failure, brand-damage Google’s diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage. References: - MIT Technology Review: Google's medical AI was super accurate in a lab: https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/ - Google Research: Human-centered evaluation of the Thai deployment: https://research.google/pubs/a-human-centered-evaluation-of-a-deep-learning-system-deployed-in-clinics-for-the-detection-of-diabetic-retinopathy/ --- ### Babylon chatbot 'beats GPs' claim collapsed - URL: https://vibegraveyard.ai/story/babylon-chatbot-exam-claims/ - Company: Babylon Health - Incident Date: 2018-06-27 - Published: 2025-11-18 - Severity: Facepalm - Blast Radius: Patient harm, eroded trust, and regulators forced real clinical trials. - Culprit Role: Startup - Tech Stack: Babylon symptom checker, GP at Hand, AI triage chatbot, MRCGP exam questions - Tags: health, product-failure, safety, legal-risk Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged it scored 81% on the MRCGP exam, but the claim could not be verified, and warned no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and might even be worse. References: - BBC News: Babylon claims its chatbot beats GPs at medical exam: https://www.bbc.com/news/technology-44635134 - Undark: Medical Advice From a Bot - The Unproven Promise of Babylon Health: https://undark.org/2019/12/09/babylon-health-artificial-intelligence-medical-advice/