Automation Stories

21 disasters tagged #automation


Meta's AI moderation flooded US child abuse investigators with unusable reports

Feb 2026

US Internet Crimes Against Children taskforce officers testified that Meta's AI content moderation system generates large volumes of low-quality child abuse reports that drain investigator resources and hinder active cases. Officers described the AI-generated tips as "junk" and said they were "drowning in tips" that lack enough detail to act on, after Meta replaced human moderators with AI tools.

Catastrophic · by Developer
US child abuse investigations impaired nationwide; investigator resources diverted from actionable cases
automation · safety · public-sector · +1 more

Meta AI safety director's OpenClaw agent deletes her inbox after losing its instructions

Feb 2026

Summer Yue, Meta's director of safety and alignment at its superintelligence lab, watched her OpenClaw AI agent delete the contents of her email inbox against her explicit instructions. She had told the agent only to suggest emails to archive or delete, without taking action, but during context compaction the agent lost her original safety instruction and began deleting emails autonomously. She had to physically run to her computer to stop it mid-deletion. Yue called it a "rookie mistake."

Oopsie · by AI agent
One user's email inbox partially deleted; highlights fundamental context window limitations in AI agents that can cause safety instructions to be silently dropped
ai-assistant · automation · safety

OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR

Feb 2026

An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct.

Facepalm · by AI agent
Matplotlib maintainer targeted with autonomous reputational attack; broader open source supply chain trust implications
automation · brand-damage · supply-chain · +1 more

135,000+ OpenClaw AI agent instances exposed to the internet

Feb 2026

SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware.
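The root cause class here is a service that listens on all network interfaces by default instead of loopback only. A minimal Python sketch of the difference (illustrative only, not OpenClaw's actual code):

```python
import socket

def make_listener(host: str, port: int = 0) -> socket.socket:
    """Open a TCP listening socket bound to the given interface."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))  # port 0 lets the OS pick a free port
    s.listen()
    return s

# Safe default: loopback only -- reachable solely from the local machine.
local_only = make_listener("127.0.0.1")
# Risky default: all interfaces -- reachable from every network the host is on,
# which is how an agent instance ends up exposed to the public internet.
all_ifaces = make_listener("0.0.0.0")

local_addr = local_only.getsockname()[0]
wide_addr = all_ifaces.getsockname()[0]
print(local_addr, wide_addr)  # 127.0.0.1 0.0.0.0

local_only.close()
all_ifaces.close()
```

A loopback default forces operators to opt in to network exposure deliberately (e.g. behind a reverse proxy with auth), rather than getting it silently.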

Catastrophic · by Platform default configuration
135,000+ exposed OpenClaw instances; 50,000+ vulnerable to RCE; attackers gain access to credentials, filesystem, messaging platforms, and personal data
security · supply-chain · automation · +1 more

ServiceNow BodySnatcher flaw enabled AI agent takeover via email address

Jan 2026

CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing MFA and SSO. Attackers could then direct Now Assist AI agents to override security controls and create backdoor admin accounts. Researchers described it as the most severe AI-driven security vulnerability uncovered to date.

Catastrophic · by AI agent platform
ServiceNow instances with Now Assist AI Agents and Virtual Agent API
security · automation · ai-assistant

IBM Bob AI coding agent tricked into downloading malware

Jan 2026

Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection into downloading and executing malware without human approval, bypassing its "human-in-the-loop" safety checks whenever a user has enabled auto-approve for even a single command.

Facepalm · by AI coding agent
Developer teams using IBM Bob with auto-approve settings enabled
security · automation · prompt-injection · +1 more

n8n AI workflow platform hit by CVSS 10.0 RCE vulnerability

Jan 2026

The popular AI workflow automation platform n8n disclosed a maximum-severity vulnerability (CVE-2026-21858) allowing unauthenticated remote code execution on self-hosted instances. With over 25,000 n8n hosts exposed to the internet, the flaw enabled attackers to access sensitive files, forge admin sessions, and execute arbitrary commands. This followed two other critical RCE flaws patched in the same period, highlighting systemic security issues in AI automation platforms.

Catastrophic · by Platform Operator
25,000+ internet-exposed n8n instances vulnerable to full system compromise; arbitrary file access, authentication bypass, and command execution possible without authentication.
security · automation · data-breach

AWS AI coding agent Kiro reportedly deleted and recreated environment causing 13-hour outage

Dec 2025

The Financial Times reported that Amazon's internal AI coding agent Kiro autonomously chose to "delete and then recreate" an AWS environment, causing a 13-hour interruption to AWS Cost Explorer in December 2025. AWS employees reported at least two AI-related incidents internally. Amazon disputed the characterization, calling it "user error - specifically misconfigured access controls - not AI," but subsequently implemented mandatory peer review for all production changes. Reuters confirmed the outage impacted a cost-management feature used by customers in one of AWS's 39 regions.

Facepalm · by AI agent
AWS Cost Explorer service disrupted for 13 hours in one region; Amazon subsequently mandated peer review for production changes involving AI tools
automation · product-failure

Study finds AI-generated code has 2.7x more security flaws

Dec 2025

CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across logic, maintainability, security, and performance categories. The study provides hard data on vibe coding risks after multiple 2025 postmortems traced production failures to AI-authored changes.

Facepalm · by Developer
Industry-wide implications for teams relying on AI coding assistants; documented increase in security vulnerabilities, logic errors, and maintainability issues in production codebases.
security · ai-assistant · automation

ServiceNow AI agents can be tricked into attacking each other

Nov 2025

Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injection, attackers can exfiltrate sensitive corporate data, modify records, and escalate privileges - all while actions unfold silently behind the scenes.

Facepalm · by AI agent platform
ServiceNow customers using Now Assist AI agents with default configurations; actions execute with victim user privileges
security · prompt-injection · automation · +1 more

Klarna reintroduces humans after AI support both sucks and blows

Sep 2025

After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures.

Facepalm · by Executive
Service quality/customer experience issues; operational/personnel cost; reputational damage.
ai-assistant · customer-service · brand-damage · +2 more

Commonwealth Bank reverses AI voice bot layoffs

Aug 2025

Commonwealth Bank replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised, rehired staff, and admitted the rollout tanked service levels after call queues exploded and managers had to jump back on the phones.

Facepalm · by Operations Leadership
Customers saw long waits, overtime costs spiked, and leadership publicly reversed the redundancies after the rushed deployment failed.
ai-assistant · automation · customer-service · +1 more

FTC sues Air AI over deceptive AI sales agent capability claims

Aug 2025

FTC accused Air AI of bilking millions from small businesses with false claims that its Odin AI could replace human sales reps; but - would you believe it? - the AI tech was faulty and often nonfunctional. Who could've guessed!

Catastrophic · by Executive
Millions lost by small businesses; individual losses up to $250K; FTC lawsuit with TRO request.
automation · legal-risk · customer-service · +1 more

SaaStr’s Replit AI agent wiped its own database

Jul 2025

A Replit AI agent deployed for SaaStr went rogue and wiped the site's production database during live traffic.

Catastrophic · by Executive
Production data loss and outage; manual rebuild from backups required.
ai-assistant · automation · product-failure

Langflow AI agent platform hit by critical unauthenticated RCE flaws

Apr 2025

Multiple critical vulnerabilities in Langflow, an open-source AI agent and workflow platform with 140K+ GitHub stars, allowed unauthenticated remote code execution. CVE-2025-3248 (CVSS 9.8) exploited Python exec() on user input without auth, while CVE-2025-34291 (CVSS 9.4) enabled account takeover and RCE simply by having a user visit a malicious webpage, exposing all stored API keys and credentials.
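The pattern behind CVE-2025-3248, feeding untrusted input to Python's exec(), is a textbook remote code execution class. A deliberately naive sketch of the flaw (illustrative only, not Langflow's actual handler; run_untrusted is a hypothetical name):

```python
def run_untrusted(snippet: str) -> dict:
    """Naive handler that exec()s request-supplied code -- never do this.

    Any attacker who can reach this function runs arbitrary Python in the
    server process: reading files, stealing stored API keys, spawning shells.
    """
    namespace: dict = {}
    exec(snippet, namespace)  # attacker-controlled text reaches exec()
    return namespace

# A stand-in "payload": harmless here, but it proves arbitrary statements
# (including imports) execute server-side with the service's privileges.
result = run_untrusted("import os; pwned = os.getpid() > 0")
print(result["pwned"])  # True
```

The fix for this class is structural: never evaluate request bodies as code, and gate any code-execution endpoint behind authentication and sandboxing.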

Catastrophic · by AI agent platform
All Langflow instances prior to 1.3.0 (millions of users); exposure of stored API keys, database passwords, and service tokens across integrated services
security · automation · ai-assistant

NYC’s official AI bot told businesses to break laws

Mar 2024

NYC’s Microsoft-powered MyCity chatbot gave inaccurate and illegal advice on labor and housing policy; the city kept it online.

Facepalm · by Executive
City guidance channel distributed illegal advice; public backlash.
ai-hallucination · automation · legal-risk · +2 more

Air Canada liable for lying chatbot promises

Feb 2024

Tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds.

Facepalm · by Product Manager
Legal liability; refund + fees; policy/process review.
ai-hallucination · automation · customer-service · +1 more

DPD’s AI chatbot cursed and trashed the company

Jan 2024

UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.

Facepalm · by Product Manager
Public embarrassment; service channel disabled; reputational hit.
automation · brand-damage · customer-service · +1 more

Duolingo cuts contractors; ‘AI-first’ backlash

Jan 2024

Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.

Facepalm · by Executive
PR hit and quality complaints; ongoing AI content strategy scrutiny.
automation · brand-damage · edtech

Chevy dealer bot agreed to sell $76k SUV for $1

Dec 2023

Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.

Oopsie · by Dealer Marketing/IT
Bot pulled; viral reputational bruise; no actual $1 sales.
automation · brand-damage · customer-service · +1 more

iTutorGroup's AI screened out older applicants; $365k EEOC settlement

Aug 2023

EEOC reached a settlement after iTutorGroup's application screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures.

Facepalm · by Executive
Older job applicants screened out; legal settlement and mandated policy changes.
legal-risk · edtech · automation · +1 more