Researcher hacked BBC reporter's computer via zero-click flaw in Orchids vibe coding platform
Security researcher Etizaz Mohsin demonstrated a zero-click vulnerability in Orchids, a vibe coding platform with around one million users, that allowed him to gain full access to a BBC reporter's computer by targeting the reporter's project on the platform. Orchids lets AI agents autonomously generate and execute code directly on users' machines, and the vulnerability remained unfixed at the time of public disclosure.
The Promise of Building Apps Without Knowing How
Orchids is one of a growing wave of "vibe coding" platforms - tools that let people without programming experience build software applications by describing what they want in plain language. You type a prompt, and an AI agent generates the code, compiles it, and runs it on your machine. The premise is democratization: anyone can be a developer now, no training required.
Orchids claims approximately one million users and says its platform is used by employees at companies including Google, Uber, and Amazon. It has earned top ratings on certain vibe-coding benchmarks. The AI agent it deploys has deep access to the user's computer - reading and writing files, executing code, and interacting with the local system - because that's what it needs to do its job.
The problem is that "deep access to the user's computer" and "code generated and executed autonomously" are also an ideal description of how malware works.
A Demo That Changed a Wallpaper
Security researcher Etizaz Mohsin, a UK-based cybersecurity specialist with a track record that includes presentations at Black Hat, discovered a vulnerability in Orchids in December 2025. The specific technical details of the flaw were not publicly disclosed, but the demonstrated attack was vivid enough to make the point without them.
Mohsin coordinated with BBC journalist Joe Tidy for a controlled demonstration. Tidy downloaded the Orchids desktop app to a spare laptop and started a project, asking the platform to help him build a computer game based on the BBC News website. As the AI agent churned out thousands of lines of code that Tidy - having no programming experience - could not read or understand, Mohsin exploited the vulnerability to gain access to Tidy's project remotely.
Without any interaction from Tidy, Mohsin was able to view and edit the project's code. He then inserted a small line of malicious code somewhere in the thousands of lines already generated. Shortly afterward, a notepad file titled "Joe is hacked" appeared on Tidy's desktop, and the laptop's wallpaper changed to an image of an AI hacker.
The demonstration was playful. The implications were not. A real attacker using the same vulnerability could have installed a virus, stolen private or financial data, accessed internet browsing history, or activated the computer's camera and microphone - all without the victim clicking anything, downloading anything, or doing anything at all beyond using the platform as intended.
Zero-Click, Maximum Access
The term "zero-click" in security research has a specific and alarming meaning. Most cyberattacks require some cooperation from the victim - clicking a malicious link, opening a sketchy attachment, entering credentials on a fake login page. A zero-click attack requires none of that. The victim's normal use of a legitimate platform is the entire attack surface.
In Orchids' case, the vulnerability was particularly potent because the platform's core functionality requires the AI agent to have broad access to the user's local system. The agent needs to create files, execute code, and manage projects. These are the same capabilities an attacker needs to take over the machine. The platform's design effectively grants the agent the same permissions a human developer would have, but without a human developer's ability to recognize when something looks wrong in the code.
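The equivalence is worth spelling out. A minimal sketch (not Orchids' actual mechanism, whose details were not disclosed - the function and file names here are invented for illustration) shows that the two capabilities any vibe-coding agent needs, writing project files and executing them, already add up to arbitrary code execution. Whoever controls what the agent writes controls the machine:

```python
import os
import subprocess
import sys
import tempfile

# Capability 1: the agent writes project files on the user's machine.
def agent_write(path: str, code: str) -> None:
    with open(path, "w") as f:
        f.write(code)

# Capability 2: the agent executes what it just wrote.
def agent_run(path: str) -> str:
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True
    )
    return result.stdout

# A benign payload here; a hostile one injected into the project could
# just as easily read files, exfiltrate data, or open a reverse shell.
workdir = tempfile.mkdtemp()
script = os.path.join(workdir, "generated.py")
agent_write(script, "print('agent ran code on this machine')\n")
print(agent_run(script))
```

There is no meaningful permission boundary between "the agent builds my app" and "the agent runs whatever ends up in my project" - which is exactly what the zero-click demonstration exploited.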
For the million-odd users who have downloaded Orchids, many of whom are explicitly not technical and cannot review the code being generated on their behalf, the trust model is essentially binary: you trust the platform completely, or you don't use it. There's no middle ground where a non-technical user can meaningfully audit thousands of lines of AI-generated code for hidden malicious instructions.
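Even automated review doesn't close that gap. As a rough illustration (a deliberately naive keyword scanner and an invented payload, not any real tool), a single malicious line can sail past a pattern-based audit because the dangerous part hides in encoded data rather than in recognizable keywords:

```python
import re

# A naive allow/deny audit: flag lines matching "known bad" patterns.
SUSPICIOUS = [
    r"\bsubprocess\b",
    r"\beval\(",
    r"\bos\.system\b",
    r"urllib|requests|socket",
]

def naive_audit(source: str):
    """Return (line number, line) pairs that match a suspicious pattern."""
    hits = []
    for i, line in enumerate(source.splitlines(), 1):
        for pattern in SUSPICIOUS:
            if re.search(pattern, line):
                hits.append((i, line.strip()))
                break
    return hits

# An invented malicious snippet: the payload is opaque base64, and the
# scanner's pattern list happens not to cover exec() - keyword scans
# give false confidence, and a human non-programmer has even less to go on.
generated = """\
import base64
cmd = base64.b64decode('...').decode()
exec(compile(cmd, '<gen>', 'exec'))
"""
print(naive_audit(generated))
```

The scanner reports nothing suspicious in the hostile snippet. Scaled up to thousands of generated lines, that is the auditing problem a non-technical Orchids user faces.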
The Disclosure That Went to Voicemail
Mohsin's handling of the disclosure process was by-the-book responsible research. After discovering the vulnerability in December 2025, he spent weeks attempting to contact Orchids to report it privately. According to both the BBC and InformationWeek's reporting, he eventually received a response from the company indicating his messages may have been overlooked among many others.
Orchids did not respond to InformationWeek's request for comment on the vulnerability.
At the time the BBC published its article, the vulnerability remained unfixed. This means that between Mohsin's discovery in December 2025 and the public disclosure in February 2026, users of the platform remained exposed to a zero-click attack with no patch, no workaround, and no warning.
The disclosure timeline highlights a recurring problem with fast-growing AI startups: the same lean, move-fast culture that lets them ship features quickly also means they may not have the security infrastructure - or even the staffing - to respond to vulnerability reports in a timely manner. When your reported security flaw gets lost in the same inbox as general support requests, that's a process failure with concrete consequences.
Vibe Coding's Security Problem
Mohsin told the BBC he tested other major vibe-coding platforms - Claude Code, Cursor, Windsurf, and Lovable - and did not find the same vulnerability in any of them. That matters because it means the flaw is specific to Orchids' implementation rather than inherent to vibe-coding as a practice.
The broader security concerns raised by experts go beyond Orchids. Tim Erlin, security strategist at API security company Wallarm, told InformationWeek that organizations using vibe-coding tools need to ask themselves how they're protecting against exactly this kind of scenario. Kevin Curran, professor of cybersecurity at Ulster University, noted that "without discipline, documentation, and review, such code often fails under attack."
NordPass head of product Karolis Arbaciauskas captured the fundamental tension: "While it's exciting and curious to see what an AI agent can do without any security guardrails, this level of access is also extremely insecure." His recommended mitigation - running these tools on separate, dedicated machines with disposable accounts - is sound advice that approximately zero percent of the target audience (non-technical users who chose vibe coding specifically to avoid complexity) will follow.
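For readers who do want to follow that advice without a second laptop, container isolation is one rough approximation of a "separate, disposable machine." A sketch (assuming Docker is installed; "vibe-agent" is a hypothetical image name standing in for whatever packaging the tool actually ships):

```shell
# Run the agent in a throwaway container: no network, read-only root
# filesystem, an unprivileged user, and only a disposable scratch mount.
# "vibe-agent" is a placeholder image name, not a real published image.
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /work:rw,size=512m \
  --user 65534:65534 \
  vibe-agent
```

This still isn't a full substitute for a dedicated machine - container escapes exist, and a desktop app like Orchids may not run headless at all - but it limits what a compromised agent can reach.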
The Autonomy Trap
The Orchids incident illustrates a tension that keeps surfacing across AI agent tooling. The features that make an AI agent useful - autonomous execution, deep system access, the ability to generate and run code without human oversight - are precisely the features that make it dangerous when the system has a security flaw.
As Mohsin himself put it: "The vibe-coding revolution has introduced a fundamental shift in how developers interact with their tools, and this shift has created an entirely new class of security vulnerability that didn't exist before."
Traditional software development tools are, by design, just tools - they do what the developer tells them to do, and the developer is responsible for understanding what they're building. Vibe-coding platforms promise something different: the AI handles the technical details so you don't have to. But "handling the technical details" includes handling security, and when the platform itself has security flaws, the non-technical users it was designed to serve are the least equipped to notice something has gone wrong.
The uncomfortable point for an industry heavily invested in making AI-powered development accessible to everyone is that the less the user understands about what's happening on their machine, the more they have to trust that the platform has it covered. And Orchids, with its unpatched vulnerability and its overlooked disclosure emails, demonstrated exactly what happens when that trust is misplaced.