Cursor's AI support bot invented a login policy
In April 2025, Cursor users started getting logged out when they switched between machines. Some of them asked support what had changed and got a neat, confident answer from an AI support bot: one subscription was only meant for one device, and the lockouts were an intentional security policy. The problem was that Cursor had no such policy. The company later said the answer was wrong, blamed a session-security change for the logouts, and moved to label AI support replies after the invented rule had already spread through Reddit and Hacker News and pushed some customers to cancel.
The Lockouts
Cursor was one of the breakout AI coding products of 2024 and 2025. It promised a smoother version of the familiar sales pitch: give the model your codebase, let it autocomplete half your day, and treat the editor like a very eager pair programmer. That pitch works a lot better when the software itself stays logged in.
In early April 2025, users on Cursor's forum started reporting that something strange was happening across multiple machines. They could sign into the same paid account on more than one device, but using Cursor on one machine could silently knock them out of another. A forum thread on April 7 described exactly that pattern: the user stayed signed in on two devices until they opened Cursor chat on one of them, at which point the other session got kicked out.
That kind of bug is annoying but ordinary. Session handling breaks. Authentication systems get tightened. Tokens expire when they should not. Normally this becomes a short support exchange, a bug report, and maybe an apologetic changelog entry.
Cursor managed to turn it into something more embarrassing.
The Invented Rule
Users who asked support what was going on got an explanation that sounded official. According to the AI-generated response that circulated publicly, the company had a one-device policy for individual subscriptions, and the forced logouts were expected behavior tied to security controls.
It was a tidy answer. It was also wrong.
That distinction matters because the response was not presented as a guess or a draft. It arrived in the voice of support, on behalf of the company, to customers who were already dealing with a real product problem. A human support agent can be mistaken too, but the value proposition of an AI support layer is supposed to be speed without inventing rules that the company never wrote.
The false policy spread quickly through Reddit and Hacker News because it fit a familiar pattern in subscription software: companies quietly tightening limits after a burst of growth. Users were already irritated by the lockouts, so the made-up explanation landed in the least charitable possible context. Once the support email escaped into public screenshots, the story stopped being "Cursor has a session bug" and became "Cursor changed the rules and had a bot announce it by hallucination."
The Company Response
Cursor's team later stepped in publicly to say there was no such rule. In the Hacker News thread that followed, a Cursor representative said the company had "no such policy," called the support reply incorrect, and said the real issue appeared to be a change meant to improve session security that may have caused accidental session invalidation. In other words, the product bug was real. The policy explanation was fiction layered on top of it.
That clarification fixed the factual record, but it did not undo the damage. By the time the company responded, users had already been told that the breakage was intentional. Some canceled subscriptions. Others used the incident as a case study in why they did not want AI replacing front-line support. The product is sold to developers, which is not an audience known for calm acceptance when a tool starts making things up about billing or access.
The follow-up also exposed a second problem. Hacker News commenters described refund and retention emails that looked unusually scripted and, in some cases, allegedly received no follow-up after customers replied. Even if those individual anecdotes are set aside, the broader impression was ugly: an AI company had delegated customer communication to automation and then looked surprised when customers treated made-up support copy as if it came from the company.
Why This Landed So Badly
Cursor was not running an entertainment chatbot. It was selling a tool meant to help software engineers trust AI with increasingly consequential work. When an AI coding company lets a support bot invent account rules, the failure does not stay in the support queue. It becomes evidence against the larger product story.
The bug itself was survivable. A session invalidation mistake is frustrating, but it happens. The fabricated policy is what turned a boring auth problem into a reputational event. Users can forgive breakage faster than they forgive being told a falsehood in an official tone. Once the system tells customers that an imaginary rule is real, every future answer from the same channel becomes suspect.
This is the basic hazard of AI support in a policy-heavy environment. Support is full of edge cases where the model does not know enough, but is still expected to produce a polished answer. If the bot does not have grounded access to the actual policy, or cannot tell the difference between a bug and a deliberate restriction, it will often choose fluency over accuracy. That is a bad instinct for customer support. A clumsy "I don't know" is cheaper than a smooth lie.
The Familiar Liability Problem
The incident also echoed a pattern already visible in other stories on Vibe Graveyard. Companies like AI support because it cuts queue time and staffing cost. They like it less when customers reasonably assume the bot speaks for the business. At that point, the company has accepted the efficiency benefits of automation and inherited the accountability that comes with it.
Cursor did not end up in court over this the way Air Canada did over its bereavement-fare bot, but the same principle applies. If a company places an automated agent in front of paying customers, the company owns what that agent says. "The bot got confused" is an explanation. It is not a defense.
The Aftermath
Cursor's immediate repair was straightforward: clarify that multi-device use was allowed, investigate the session bug, and label AI-generated support replies more clearly. Those are sensible steps. They also happen to be the steps a company takes after learning that customers assumed the support channel was authoritative.
The deeper problem is harder to patch. Cursor sells confidence in AI-assisted work. The support incident showed the less glamorous side of that same technology stack: a model that can produce a plausible explanation without any obligation to tie that explanation to reality. In a code editor, that gets you a bad patch. In support, it gets you an invented subscription policy and a burst of public anger from the exact audience you most want on your side.
A normal support failure leaves customers annoyed. This one also left them with a more basic question: if the company cannot keep its support bot from making up account rules, how much trust should anyone place in the rest of the AI layer?