Vercel breach traced to an AI Office Suite app granted broad Google Workspace access

Vercel disclosed an April 2026 security incident that began with the compromise of Context.ai, a third-party AI tool used by a Vercel employee. Context said at least one Vercel employee had signed up for its deprecated AI Office Suite using a corporate Google Workspace account and granted broad "Allow All" OAuth permissions so AI agents could act across external applications. Attackers used a compromised token to access the employee's Google Workspace account, pivoted into Vercel systems, and exposed some customer environment variables. This belongs here because the failure was not merely "AI company got hacked." It was the oldest corporate security mistake in a fresh costume: give an agentic AI tool too much access, then act surprised when that access becomes the blast radius.

Incident Details

Severity: Catastrophic
Company: Vercel
Perpetrator: Employee
Incident Date: April 2026
Blast Radius: Unauthorized access to internal Vercel systems; a limited subset of customer non-sensitive environment variables compromised; affected customers told to rotate credentials; broader Context AI Office Suite users potentially impacted by stolen OAuth tokens.

Vercel's April 2026 security incident is exactly the kind of borderline story that needs a clean rule. A normal breach at an AI company does not automatically belong in the Vibe Graveyard. If an AI startup's billing system gets popped through a boring SQL injection, that is a cybersecurity story, not an AI failure story. The word "AI" on the company homepage is not a magic graveyard pass.

This one is different.

Vercel said the incident originated with a compromised third-party AI tool, Context.ai, used by a Vercel employee. Reports quoting Context's security update explain the relevant product: its deprecated AI Office Suite was a self-serve workspace where users could work with AI agents to build presentations, documents, and spreadsheets. It also included a feature that let users enable agents to perform actions across external applications through another third-party service.

That is the key. The product's whole value proposition involved delegation. The AI tool was not just a chat box producing text. It was a workspace where agentic software could act across connected apps.

According to Context, at least one Vercel employee signed up for the AI Office Suite with a Vercel Google Workspace account and granted "Allow All" OAuth permissions. Context said those permissions were intended to let AI agents perform Google Workspace actions such as writing emails or creating documents on the user's behalf. Vercel said an attacker used that access to take over the employee's Google Workspace account, gain access to the employee's Vercel account, pivot into a Vercel environment, and enumerate and decrypt environment variables that were not marked sensitive.

That is not just "third-party SaaS got hacked." That is "agentic AI tool got broad corporate access, then the access became the attack path."

OAuth is not a vibe

OAuth permission screens are one of the most quietly dangerous pieces of modern work software. They look like administrative friction. Click approve, connect the app, get back to work. But that button is often deciding whether a third-party application can read mail, create documents, access files, act on the user's behalf, or keep refreshing its access long after anyone remembers granting it.

With an ordinary app, that is already risky. With an AI agent product, the risk becomes weirder because the point is not just passive access. The point is action. The tool wants permission because it is supposed to do things for you.

That is convenient. It is also why broad permissions deserve adult supervision. If an AI office suite can act across your workspace, then compromising that suite or its OAuth tokens may give attackers a ready-made route into the same workspace. The agent's convenience layer becomes an attacker convenience layer. One click meant to help an AI write documents can become one click that lets an intruder move through corporate identity systems.
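
To make that concrete, here is a minimal TypeScript sketch of the difference between a broad grant and a task-scoped one, expressed as Google OAuth consent URLs. The endpoint and scope strings are Google's documented ones; the client ID and redirect URI are placeholders, and the scope sets are illustrative, not a reconstruction of what Context's app actually requested.

```typescript
// Sketch: an "Allow All"-style grant versus a task-scoped grant,
// expressed as Google OAuth 2.0 consent URLs. Endpoint and parameter
// names follow Google's documented flow; client_id and redirect_uri
// are placeholders.

const AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth";

// Broad grant: full Gmail and full Drive, plus a refresh token that
// keeps working long after the user forgets the app exists.
const broadScopes = [
  "https://mail.google.com/",              // read, send, delete all mail
  "https://www.googleapis.com/auth/drive", // every file in Drive
];

// Narrow grant: only what "draft a document for me" actually needs.
const narrowScopes = [
  "https://www.googleapis.com/auth/documents",  // create/edit Docs
  "https://www.googleapis.com/auth/drive.file", // only files this app created
];

function consentUrl(scopes: string[]): string {
  const params = new URLSearchParams({
    client_id: "YOUR_CLIENT_ID.apps.googleusercontent.com", // placeholder
    redirect_uri: "https://example.com/oauth/callback",     // placeholder
    response_type: "code",
    access_type: "offline", // asks for a refresh token: a live credential
    scope: scopes.join(" "),
  });
  return `${AUTH_ENDPOINT}?${params}`;
}

console.log(consentUrl(broadScopes));  // the screen an "Allow All" user clicks through
console.log(consentUrl(narrowScopes)); // the screen they should have seen
```

Both screens look nearly identical to a user in a hurry. Only one of them hands an attacker the whole mailbox if the token leaks.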

This is the same pattern as over-permissioned coding agents, just wearing a Google Workspace badge instead of a terminal prompt.

What Vercel says happened

Vercel's bulletin says the company identified unauthorized access to certain internal systems and initially found a limited subset of customers whose environment variables, stored in plaintext form, were compromised. The company contacted affected customers and recommended immediate credential rotation.

Vercel also said its investigation identified a small number of additional compromised accounts, plus a separate set of customer accounts showing signs of compromise that did not appear to originate from Vercel systems. That distinction matters because breach reporting gets messy fast. The April incident had a Context.ai path. Other suspicious accounts may have involved unrelated malware, social engineering, or other customer-side compromise.

For the April incident itself, the chain Vercel described was clear enough: Context.ai compromise, employee Google Workspace takeover, employee Vercel account access, Vercel environment access, environment variable enumeration and decryption.

Vercel engaged Mandiant, notified law enforcement, worked with industry partners, and said it found no evidence that npm packages published by Vercel had been compromised. That last point is important. A breach involving Vercel naturally makes people worry about Next.js, Turbopack, npm publishing, and the broader JavaScript supply chain. Vercel said that supply chain remained safe.

Still, customer secrets that were not marked sensitive had to be treated as potentially exposed. The recommended response was not philosophical. Rotate credentials. Audit logs. Review deployments. Strengthen account security. Mark secrets as sensitive.

Very glamorous work. Exactly the kind of work everyone gets to do after a trust boundary gets drawn in crayon.
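
For anyone staring at that checklist, the audit step can at least be scripted. The sketch below lists a project's environment variables via Vercel's REST API and flags anything not marked sensitive as a rotation candidate. It assumes the documented projects/env endpoint and its response shape, both of which should be checked against the current API reference before relying on this; the project name and token are placeholders.

```typescript
// Sketch: audit which environment variables in a Vercel project are
// NOT marked sensitive, as a starting point for rotation. Assumes
// Vercel's REST API projects/env endpoint; the version prefix and
// response fields follow its public docs but may change.

const VERCEL_TOKEN = process.env.VERCEL_TOKEN!; // a personal/team access token
const PROJECT = "my-project";                   // placeholder project ID or name

interface EnvVar {
  key: string;
  type: string; // e.g. "plain" | "encrypted" | "sensitive"
  target?: string[];
}

async function listNonSensitiveEnv(): Promise<EnvVar[]> {
  const res = await fetch(
    `https://api.vercel.com/v9/projects/${PROJECT}/env`,
    { headers: { Authorization: `Bearer ${VERCEL_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`Vercel API: ${res.status}`);
  const { envs } = (await res.json()) as { envs: EnvVar[] };
  // Anything not marked sensitive was readable in the breach scenario
  // described above, so it goes on the rotation list.
  return envs.filter((e) => e.type !== "sensitive");
}

listNonSensitiveEnv().then((toRotate) => {
  for (const e of toRotate) {
    console.log(`rotate + re-add as sensitive: ${e.key}`);
  }
});
```

The Vercel CLI covers the same ground interactively with `vercel env ls` and `vercel env add`.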

Context's side of the incident

Context said it had independently identified and stopped unauthorized access to its AWS environment in March, engaged CrowdStrike, closed the relevant AWS environment and hosting service, and fully deprecated the consumer AI Office Suite. Later, based on Vercel's information and additional internal investigation, Context said it learned that OAuth tokens for some AI Office Suite users had been compromised during the incident.

Context also emphasized that its current enterprise product, Bedrock, was architecturally separate and not affected. That is worth noting, because companies often have old consumer products, experimental integrations, and enterprise products under the same brand. The stale side project can become the thing that sets the current business on fire.

In this case, the deprecated consumer product mattered because its OAuth app still existed in the world long enough to become useful to an attacker.
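
Deprecating a product is not the same as killing its grants. Below is a sketch of what closing the door can look like, using Google's documented token revocation endpoint; the token store is a placeholder, and none of this claims to describe what Context did or did not do.

```typescript
// Sketch: revoking stored Google OAuth tokens when an integration is
// shut down, rather than leaving grants alive. The revocation endpoint
// is Google's documented one; loadStoredRefreshTokens is a placeholder
// for however the app persisted user grants.

async function revokeGoogleToken(token: string): Promise<boolean> {
  // Revoking a refresh token also invalidates access tokens minted from it.
  const res = await fetch("https://oauth2.googleapis.com/revoke", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ token }),
  });
  return res.ok; // 200 means the grant is dead
}

async function shutDownIntegration(
  loadStoredRefreshTokens: () => Promise<string[]>,
) {
  for (const token of await loadStoredRefreshTokens()) {
    const ok = await revokeGoogleToken(token);
    if (!ok) {
      console.error("revocation failed; investigate before deleting the record");
    }
  }
}
```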

Why this belongs here

The Vibe Graveyard standard is not "anything adjacent to AI security gets in." That would turn the site into a normal CVE feed with worse lighting.

The standard is whether AI, automation, or vibe-coded tooling caused concrete harm through malfunction, bad output, unsafe autonomy, bad generated code, or over-trusted agentic access. This story fits because the harmful access path came from treating an AI agent workspace as safe to connect broadly to a corporate Google account. The AI tool's delegated authority was not incidental. It was the permission surface that made the compromise valuable.

If Context had merely been a normal project-management app with a calendar integration, this would be a third-party SaaS breach. Still bad, not especially graveyard-shaped. But Context's AI Office Suite asked for broad access so agents could operate across external apps on the user's behalf. That is the agentic part. That is the part companies need to learn from.

The lesson is not "never use AI tools." The lesson is "do not wire experimental AI agents into corporate identity with more permission than you can explain, monitor, revoke, and survive losing."

Least privilege, again

The same boring rule keeps showing up because it keeps being correct: least privilege.

AI tools should get the smallest permission set required for the task. Enterprise admins should restrict which OAuth apps can be authorized with corporate accounts. Broad workspace actions should require explicit admin review. Consumer AI products should not be casually connected to production corporate identity. Tokens should be revocable, monitored, scoped, and treated like live credentials because that is what they are.

If an AI assistant needs to draft a document, it probably does not need organization-wide file access. If it needs to summarize a spreadsheet, it probably does not need to create emails. If it needs to help one employee with one task, it definitely should not receive a permission footprint that lets an attacker pivot through the company's workspace.
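
As a sketch of what that looks like in practice, here is each of those tasks paired with the narrowest documented Google OAuth scope that covers it. The scope strings are real; the pairings are illustrative, not an official least-privilege table.

```typescript
// Sketch: mapping each task in the paragraph above to the narrowest
// documented Google OAuth scope that covers it. The scope strings are
// real; the pairing is an illustration of the principle.

const leastPrivilegeScopes: Record<string, string[]> = {
  "draft a document": [
    "https://www.googleapis.com/auth/documents",  // edit Docs
    "https://www.googleapis.com/auth/drive.file", // only files the app created
  ],
  "summarize a spreadsheet": [
    "https://www.googleapis.com/auth/spreadsheets.readonly", // read, never write
  ],
  "send a draft email": [
    "https://www.googleapis.com/auth/gmail.send", // send only; cannot read the mailbox
  ],
};

// What the agent should never be handed for any of the above:
const allowAllAntipattern = [
  "https://mail.google.com/",              // full mailbox control
  "https://www.googleapis.com/auth/drive", // the entire Drive
];
```

If a vendor's integration cannot function without that bottom list, that is information too.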

Agentic tools make permission mistakes more expensive because their intended behavior is action. They are designed to reach into other systems and do work. That means their compromise can skip straight past "read-only nuisance" and land in "please rotate credentials before lunch."

Vercel's response appears to have been fast and serious. Context published a detailed update and deprecated the affected consumer product. Those are good remediation steps. But the headstone is for the trust decision that came first: a corporate account granted broad authority to an AI tool, and when that tool's tokens were compromised, the blast radius followed the permissions.

That is not a future risk. That is a thing that happened.
