ServiceNow BodySnatcher flaw enabled AI agent takeover via email address

Tombstone icon

CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing MFA and SSO. Attackers could then execute Now Assist AI agents to override security controls and create backdoor admin accounts, described as the most severe AI-driven security vulnerability uncovered to date.

Incident Details

Severity:Catastrophic
Company:ServiceNow
Perpetrator:AI agent platform
Incident Date:
Blast Radius:ServiceNow instances with Now Assist AI Agents and Virtual Agent API

The Vulnerability Chain

CVE-2025-12420 combined three distinct security failures into a single attack path that turned ServiceNow's AI agent platform into a tool for unauthenticated remote privilege escalation. Aaron Costello at AppOmni discovered the flaw and named it BodySnatcher. He called it "the most severe AI-driven security vulnerability uncovered to date."

The chain worked like this: authenticate using a hardcoded secret that was identical across every ServiceNow deployment, impersonate any user by supplying their email address, then execute privileged AI agent workflows as that user - including creating backdoor administrator accounts. No password, no MFA token, no SSO credentials. Just an email address and a string that every ServiceNow instance shared.

AppOmni reported the vulnerability to ServiceNow on October 23, 2025. ServiceNow patched it within a week, on October 30, and notified affected customers. The public disclosure came on January 13, 2026.

Step One: The Hardcoded Secret

ServiceNow's Virtual Agent API uses a method called Message Auth to authenticate external platforms that want to communicate with the chatbot infrastructure. The authentication mechanism relies on a static credential - a shared secret that the external platform includes in its API requests to prove it's authorized.

The problem was that this secret was hardcoded to the string servicenowexternalagent and shipped identically across every customer environment worldwide. Every ServiceNow instance running the affected versions used the exact same authentication credential. An attacker who knew this single string could authenticate as a legitimate external provider on any ServiceNow deployment.

Finding this string required looking at the configuration of the Virtual Agent API. Once discovered, it was a skeleton key for the first gate of every vulnerable instance on the planet.

Step Two: Email-Only Identity Linking

Once authenticated as an external provider, the API needed to figure out which ServiceNow user was sending the message. The vulnerable configuration used a feature called Auto-Linking, which automatically associated an external user with a ServiceNow account based on a field match. The matching field was the user's email address.

An attacker could supply any email address in the email_id parameter of the API request. If that email matched a ServiceNow user - say, admin@example.com - the system linked the session to that user's account. The attacker was now operating as that user within the ServiceNow instance.

No password check. No MFA challenge. No SSO redirect. The Virtual Agent API trusted that if an external provider was authenticated (via the hardcoded secret that everyone already knew), the email address it supplied must be legitimate. Two security controls - provider authentication and user identity verification - both failed for the same reason: they relied on information the attacker could trivially obtain or guess.

Step Three: AI Agent Execution

The identity hijacking alone would have been a serious vulnerability. What made BodySnatcher catastrophic was what the attacker could do with the hijacked identity.

ServiceNow's Now Assist AI Agents are designed to perform privileged operations on behalf of users: creating records, modifying configurations, managing user accounts. When a user is authenticated - or in this case, when the system believes a user is authenticated - these AI agents execute requests with that user's permissions.

An attacker impersonating a system administrator could instruct the Record Management AI Agent to create a new user account in the sys_user table, set it to active, and then grant it full administrative privileges by creating a record in the sys_user_has_role table. The AI agent would carry out these instructions because, as far as the system could tell, a legitimate administrator was making the request.

The PoC exploit that AppOmni published demonstrated exactly this: a single HTTP POST request to the Virtual Agent API endpoint containing the hardcoded token, the target admin's email address, and an agent objective instructing the AI to create a backdoor admin account. The request was straightforward enough to fit in a few dozen lines.

What Could Be Accessed

ServiceNow instances typically house some of an organization's most sensitive operational data. Customer records, employee information, IT infrastructure details, security incident reports, and business process configurations all live in the platform. An attacker with administrative access through a backdoor account could read, modify, or exfiltrate any of it.

AppOmni's disclosure noted that a successful exploit could grant "nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property." This was not hypothetical - the privilege escalation was complete, giving the attacker every permission the impersonated user held.

The AI Amplification Factor

NeuralTrust's analysis of CVE-2025-12420 described the core lesson: "AI agents can amplify traditional security flaws into full platform takeovers." The broken authentication and weak identity linking were conventional bug classes - hardcoded credentials and insufficient identity verification have been in security textbooks for decades. What made them devastating was the AI execution layer on top.

Without Now Assist AI Agents, an attacker who impersonated a user through the Virtual Agent API would have been limited to whatever actions the chatbot's original NLU (natural language understanding) functionality supported. With AI agents, the attacker could issue natural language commands to create users, modify records, and alter configurations - all through a conversational interface that the platform trusted to execute privileged operations.

The AI agents didn't create the vulnerability. They amplified its impact from "unauthorized chatbot access" to "complete platform takeover." The combination of traditional authentication failures with AI agent execution capabilities produced a vulnerability class that didn't exist before organizations started deploying agentic AI systems.

The Supervised Mode Detail

AppOmni's PoC revealed one interesting detail: the Record Management AI Agent's Create Record tool was configured with an execution mode of "Supervised" rather than "Autonomous." This meant the AI agent asked for confirmation before creating the user account, sending a message back through the chat asking the attacker to confirm the action.

This was the kind of guardrail that sounds good on paper: the AI asks before doing something destructive. In practice, the confirmation request went to the attacker, who was happy to confirm. A supervised execution mode only helps when the person being asked for confirmation is someone who should be making the decision. When the entire authentication chain has been compromised, asking the attacker "are you sure?" provides no security value at all.

Patch and Response

ServiceNow's response was prompt. From AppOmni's October 23, 2025 report to the October 30 fix, the turnaround was seven days. The company released updated versions of both affected applications: Now Assist AI Agents (sn_aia) versions 5.1.18 and 5.2.19, and Virtual Agent API (sn_va_as_service) versions 3.15.2 and 4.0.4.

Cloud-hosted ServiceNow customers were patched automatically. On-premise customers needed to upgrade manually - and given that the vulnerability allowed unauthenticated remote privilege escalation, the urgency of that upgrade was significant.

AppOmni's recommendations went beyond the immediate patch. They called for enforced MFA for account linking, an agent approval process to prevent unauthorized AI agent deployment, and lifecycle management policies to de-provision unused or stagnant agents. The message was that fixing the specific CVE wasn't enough - organizations needed to reconsider how they configured the relationship between external APIs and internal AI agent execution paths.

The hardcoded credential was the kind of mistake that seems obvious in retrospect but reflects a common assumption in enterprise software: that the integration layer between internal systems is trusted by default. When AI agents sit on the other side of that trusted integration, the cost of that assumption goes up considerably.

Discussion