IBM Bob AI coding agent tricked into downloading malware

Tombstone icon
Jan 2026

Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection to download and execute malware without human approval, bypassing its "human-in-the-loop" safety checks when users have set auto-approve on any single command.

Incident Details

Perpetrator:AI coding agent
Severity:Facepalm
Blast Radius:Developer teams using IBM Bob with auto-approve settings enabled