ServiceNow AI agents can be tricked into attacking each other
Nov 2025
Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injection, attackers can exfiltrate sensitive corporate data, modify records, and escalate privileges - all while actions unfold silently behind the scenes.
Incident Details
Perpetrator:AI agent platform
Severity:Facepalm
Blast Radius:ServiceNow customers using Now Assist AI agents with default configurations; actions execute with victim user privileges
Tech Stack
ServiceNow Now AssistNow LLMAzure OpenAI