Study: one in five organizations breached because of their own AI-generated code


Aikido Security's "State of AI in Security & Development 2026" report - a survey of 450 developers, AppSec engineers, and CISOs across Europe and the US - found that 20% of organizations have suffered a serious security breach directly caused by vulnerabilities in AI-generated code that those organizations deployed into production. Nearly seven in ten respondents reported finding vulnerabilities introduced by AI-written code in their own systems. With roughly a quarter of all production code now written by AI tools, the report documents an industry-wide accountability vacuum: 53% blame security teams, 45% blame the developer who wrote the code, and 42% blame whoever merged it.

Incident Details

Severity: Facepalm
Company: Industry-wide (450 organizations surveyed)
Perpetrator: Developer
Incident Date:
Blast Radius: Industry-wide; 20% of surveyed organizations report serious breaches from their own AI-generated code, rising to 43% in the US

What the Report Measured

Aikido Security, a Belgian application security company, surveyed 450 developers, application security engineers, and CISOs across Europe and the United States. The report, titled "State of AI in Security & Development 2026," set out to quantify something the industry has been debating anecdotally for two years: when organizations use AI coding tools to generate production code, how often does that code introduce security vulnerabilities, and how often do those vulnerabilities actually lead to breaches?

The distinction matters - and it's worth being precise about it. This report is not about hackers using AI as an offensive tool. It's not about AI being weaponized by external attackers. It's about organizations using AI to write their own code, deploying that code into production, and then getting breached because the AI-generated code contained security vulnerabilities that nobody caught before it shipped.

The headline finding: one in five organizations (20%) reported a serious security incident directly caused by vulnerabilities in their own AI-generated code. These are breaches with material business impact - not theoretical risks, not near-misses, not vulnerabilities found and patched before anything happened. Actual breaches. In production. Caused by code that an AI tool wrote and a human approved.

The Numbers

The survey found that approximately 24% of all production code across surveyed organizations is now written by AI coding tools. That figure climbs to 29% in the United States and sits at 21% in Europe. Nearly a quarter of the code running in production environments was generated by a machine that does not understand security requirements, threat models, or the specific context of the application it's building.

Of the organizations surveyed, 69% - almost seven in ten - reported discovering vulnerabilities that had been introduced by AI-generated code within their own systems. These are vulnerabilities that made it past whatever review processes the organizations had in place, into the codebase, and were only identified later. Not all of these led to breaches, but they all represented security flaws that the AI created and the humans missed.

The breach rate showed a sharp geographic divide. In the US, 43% of organizations reported a serious security incident caused by their AI-generated code. In Europe, the figure was 20%. Aikido attributes this gap to Europe's stricter regulatory environment - GDPR, the EU AI Act, and generally more conservative compliance cultures - leading to more rigorous testing and review practices that catch AI-introduced vulnerabilities before they reach production.

US developers were also more likely to bypass security controls entirely: 72% of US respondents admitted to doing so at least some of the time, compared to 61% in Europe. When you combine higher rates of AI code generation with higher rates of bypassing security checks, a higher breach rate is not surprising. It's math.

The Accountability Vacuum

Perhaps the most revealing finding in the report isn't about vulnerabilities at all. It's about blame.

When AI-generated code causes a breach, who is responsible? The report asked this question and got answers that illustrate a profession-wide accountability problem.

  • 53% of respondents said the security team bears responsibility
  • 45% said the developer who wrote (or prompted) the code is responsible
  • 42% said the person who approved and merged the code into production is at fault

These percentages add up to well over 100%, which means the respondents are pointing at multiple parties simultaneously. Everyone is somewhat responsible, which in organizational practice means nobody is clearly responsible. This is a predictable consequence of inserting a non-human code author into a pipeline that was designed around human accountability.

In traditional software development, the chain of responsibility is clear. A developer writes code. A reviewer approves it. If that code has a security flaw, the developer and the reviewer share accountability, with the security team responsible for the broader process that should have caught the problem. Everyone knows their role.

When AI generates the code, the accountability chain fractures. The developer didn't write the code; they prompted it. The AI has no professional accountability; it's a tool. The reviewer may lack the context to evaluate AI-generated code patterns that differ from what a human would write. The security team may never have approved the use of AI tools for this particular function. Nobody's job description says "verify the security of code written by a machine that doesn't understand security."

The result is that AI-generated vulnerabilities can pass through review processes designed for human-authored code and reach production without anyone feeling individually responsible for the security of the output.

Why AI Code Has Security Problems

The "why" has been documented by multiple reports before Aikido's, including CodeRabbit's December 2025 study (which found AI code has 2.74 times more security vulnerabilities than human code) and Tenzai's January 2026 study (which found 69 vulnerabilities across 15 AI-generated applications). But Aikido's report adds something those studies didn't: the breach data that connects "AI code has more vulnerabilities" (which everyone already suspected) to "those vulnerabilities are causing real breaches" (which is harder to quantify).

The underlying causes are well-understood at this point. AI coding tools generate code based on patterns learned from training data that includes decades of insecure practices. They lack awareness of the specific security requirements and threat models of the application they're building. They optimize for plausibility and syntactic correctness, not for defense-in-depth. They produce code that looks professional, passes linting and basic tests, and often contains the exact same classes of vulnerabilities - SQL injection, XSS, insecure direct object references, missing access controls - that the security industry has been fighting since the early 2000s.
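To make the pattern concrete, here is a minimal sketch (not taken from the report) of the kind of SQL injection flaw AI assistants frequently emit, alongside the parameterized version a review should insist on. The function names and in-memory database are illustrative assumptions:

```python
import sqlite3

# Illustrative in-memory database; not from the Aikido report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_vulnerable(name):
    # The pattern AI tools often generate: SQL built by string
    # interpolation. Input like "' OR '1'='1" matches every row.
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver binds the value, so the
    # injection payload is treated as a literal string.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))        # returns []
```

Both versions look plausible, pass linting, and return correct results for benign input, which is exactly why this class of flaw survives a hurried review.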

The difference is speed. A human developer writing insecure code produces it relatively slowly, giving review processes time to intervene. An AI coding assistant can generate thousands of lines of vulnerable code in minutes, overwhelming review capacity. If review processes aren't scaled to match the velocity of AI code generation - and the Aikido report suggests they broadly haven't been - vulnerabilities will reach production at rates proportional to the increase in code generation speed.

The Tool Sprawl Problem

A counterintuitive finding in the report: organizations using more security tools experienced more incidents, not fewer. Organizations running six to nine different security tools reported security incidents 90% of the time, compared to 64% for organizations using only one or two tools.

This isn't because security tools cause breaches. It's because tool sprawl creates complexity that degrades the effectiveness of each individual tool. When security teams are managing half a dozen dashboards, correlating alerts across multiple platforms, and maintaining integrations between tools that weren't designed to work together, the operational overhead reduces the time available for actual vulnerability analysis. The tools generate noise faster than the teams can process signal.

For AI-generated code specifically, this finding has a sharp edge. AI coding tools are generating code at unprecedented velocity. The security tooling meant to catch vulnerabilities in that code is fragmenting into more and more specialized products. The net result is that code volume is increasing while security review effectiveness is decreasing - exactly the wrong combination for catching the additional vulnerabilities that AI-generated code introduces.

What the Numbers Mean

The Aikido report arrives at a moment when the industry has largely moved past the question of whether to use AI coding tools. That decision is made. Twenty-four percent of production code is already AI-generated, and that percentage is increasing quarter over quarter.

The question now is how to use AI coding tools without creating a security debt that compounds faster than it can be addressed. The Aikido data suggests that, for a meaningful fraction of organizations, the answer is currently "you can't" - or at least "you aren't." One in five organizations has already been breached by code that an AI wrote and a human shipped without adequate review.

The report's respondents are optimistic about the long-term future: 96% believe AI will eventually produce secure and reliable code, and they estimate that point is, on average, 5.1 to 5.5 years away. Only 21% think it will happen without human oversight. The gap between current reality and that optimistic future is currently being filled with breaches.
