Lovable-showcased EdTech app found riddled with 16 security flaws exposing 18,000 users


A security researcher found 16 vulnerabilities - six critical - in an EdTech app featured on Lovable's showcase page, which had over 100,000 views and real users from UC Berkeley, UC Davis, and universities across Europe, Africa, and Asia. The AI-generated authentication logic was backwards, blocking logged-in users while granting anonymous visitors full access. A total of 18,697 user records - names, emails, and roles - were accessible without authentication, as was the ability to modify student grades, delete accounts, and send bulk emails. Lovable initially closed the researcher's support ticket without response.

Incident Details

Severity: Facepalm
Company: Lovable
Perpetrator: AI platform
Incident Date:
Blast Radius: 18,697 user records exposed, including students at major universities; student grades modifiable and accounts deletable without authentication

The Showcase That Showcased Too Much

Lovable, the $6.6 billion vibe-coding platform that generates full-stack web applications from natural language prompts, maintains a Discover page where it highlights apps built on its platform. Think of it as a curated gallery designed to demonstrate what's possible when you let AI write your code. One of those featured apps - an EdTech platform for creating exam questions and viewing student grades - had accumulated more than 100,000 views and around 400 upvotes on the showcase page. Behind the polished storefront, it was leaking everything.

Security researcher Taimur Khan, a tech entrepreneur with a software engineering background, decided to take a closer look at the featured app. What he found in a few hours of testing was not a subtle misconfiguration or an edge case vulnerability. It was a fundamental inversion of how authentication is supposed to work - and it had been running in production, serving real students at real universities, since its deployment.

Authentication, But Backwards

The app was built on Supabase, a popular backend-as-a-service platform that provides PostgreSQL databases, authentication, and file storage. Supabase offers Row Level Security (RLS) - a feature that controls which users can access which rows of data - and role-based access controls. These are powerful tools when used correctly. The key phrase being "when used correctly."

The AI that generated the app's backend implemented access control using Supabase remote procedure calls. The intention was to block non-admin users from accessing sensitive parts of the application. What it actually did was the opposite: the access control guard blocked authenticated (logged-in) users while granting anonymous visitors full access.

As Khan put it, this was "a classic logic inversion that a human security reviewer would catch in seconds, but an AI code generator, optimizing for 'code that works,' produced and deployed to production." The error wasn't isolated to a single function - it was repeated across multiple critical endpoints of the application.

This is the kind of bug that makes security professionals wince. Not because it's sophisticated, but because it's the mirror-universe version of correct. The code looked functional. It handled authentication. It just did it in exactly the wrong direction.
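Khan's writeup doesn't include the vulnerable source, but the "classic logic inversion" he describes boils down to something like the following sketch. The names and types here are hypothetical; the real app implemented its guards as Supabase remote procedure calls.

```typescript
// Hypothetical reconstruction of the inverted guard. In Supabase-style
// auth, "user" is null for anonymous visitors.
type User = { id: string; role: "student" | "teacher" | "admin" } | null;

// What the AI-generated guard effectively did: reject callers who have
// a session, and wave anonymous callers straight through.
function brokenGuard(user: User): boolean {
  if (user) {
    return false; // authenticated users blocked
  }
  return true; // anonymous visitors granted full access
}

// What a correct admin-only guard looks like: deny by default, allow
// only an authenticated user with the right role.
function correctGuard(user: User): boolean {
  return user !== null && user.role === "admin";
}

const anonymous: User = null;
const admin: User = { id: "u1", role: "admin" };

console.log(brokenGuard(anonymous)); // true  - anyone gets in
console.log(brokenGuard(admin));     // false - the admin is locked out
console.log(correctGuard(anonymous)); // false
console.log(correctGuard(admin));     // true
```

The two functions differ by a single negation, which is exactly why a human reviewer catches this "in seconds" while generated code that compiles and runs can sail into production.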

The Scope of Exposure

Because the app was an exam platform used by teachers and students, the exposed user base included people from UC Berkeley, UC Davis, and universities across Europe, Africa, and Asia. Some users were from K-12 schools, meaning the exposed accounts included minors.

The numbers Khan uncovered were stark: 18,697 total user records were accessible without any authentication. Of those, 14,928 were unique email addresses, 4,538 were student accounts, and 870 had their full personally identifiable information exposed - names, emails, and organizational data, all accessible to anyone who bothered to look. Another 10,505 were enterprise accounts.

With the security flaws in place, an unauthenticated attacker could access every user record on the platform, send bulk emails through the platform's infrastructure, delete any user account with a single API call, grade student test submissions as if they were a teacher, and access organizations' admin emails and internal data. No credentials required for any of it.

Sixteen vulnerabilities in total. Six rated critical. All in a single app that Lovable was actively promoting to potential customers on its showcase page.

"You Can't Showcase It and Then Close the Ticket"

Khan reported his findings to Lovable through the company's support channels. The response, or rather the lack of one, became its own story. Lovable closed his support ticket without any response.

"You can't showcase an app to 100,000 people, host it on your own infrastructure, and then close the ticket when someone tells you it's leaking user data," Khan said. His frustration was understandable. The researcher had done the responsible thing - found vulnerabilities, documented them, reported them through proper channels - and the company's response was to mark the ticket as resolved and move on.

It took Khan posting his findings publicly, including a detailed writeup on Reddit's cybersecurity community (which attracted significant attention), before Lovable's security team reached out. In an update to his Reddit post, Khan noted that Lovable had contacted him, received his full report, and said they were investigating. The pattern - ignore the private report, scramble after the public one - is unfortunately familiar in the security world.

Lovable's stated position is that users are responsible for addressing security issues flagged before publishing. The platform does include a free security scan before publishing, and according to reports, the app's developer received security warnings but didn't implement the recommendations. This creates a grey area: if the platform warns you and you ignore it, whose fault is it?

The Vibe-Coding Trust Problem

Khan argued that the responsibility question isn't as simple as Lovable suggests. "If Lovable is going to market itself as a platform that generates production-ready apps with authentication 'included,' it bears some responsibility for the security posture of the apps it generates and promotes," he said.

The argument has force. Supabase's Row Level Security is a feature that must be explicitly enabled and configured on each table. When a human developer builds on Supabase, they (ideally) know to enable RLS everywhere. When an AI generates the code, the AI should do the same - but as this case demonstrates, it may generate code that looks correct without actually being correct. The gap between "functional code" and "secure code" is precisely the gap that human reviewers exist to catch, and vibe-coding platforms are marketed partly on the premise that such reviewers aren't necessary.
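Concretely, enabling RLS on a Postgres table is a one-line SQL statement (`ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;`) followed by a `CREATE POLICY` defining who may see which rows. The toy model below - illustrative names only, not the actual app's schema - shows what the presence or absence of an owner-only policy means for an anonymous caller holding nothing but a public API key:

```typescript
// Toy model of a "profiles" table queried through a public anon key.
type Row = { ownerId: string; email: string };

const table: Row[] = [
  { ownerId: "alice", email: "alice@example.edu" },
  { ownerId: "bob", email: "bob@example.edu" },
];

// RLS disabled: every row is returned to every caller, including
// anonymous ones. This is the failure mode behind exposed-record leaks.
function selectWithoutRls(rows: Row[]): Row[] {
  return rows;
}

// RLS enabled with an owner-only policy: each row is checked against
// the caller's identity; anonymous callers (null) match nothing.
function selectWithRls(rows: Row[], callerId: string | null): Row[] {
  return rows.filter((r) => callerId !== null && r.ownerId === callerId);
}

console.log(selectWithoutRls(table).length);       // 2 - full exposure
console.log(selectWithRls(table, null).length);    // 0 - anonymous sees nothing
console.log(selectWithRls(table, "alice").length); // 1 - own row only
```

The design point is that RLS is deny-by-default once enabled: with no matching policy, a query returns zero rows, so the safe configuration fails closed rather than open.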

The broader pattern extends beyond Lovable. A related incident found that another Supabase-based app, Moltbook, was launched without Row Level Security, exposing 1.5 million records. The security firm SymbioticSec released an open-source tool called Vibe-Scanner specifically to audit Row Level Security configurations in Lovable projects, citing the urgency of this class of vulnerability. Their tool runs 62 detection rules against a project's RLS configuration and flags exactly the problems that Khan found.

The Uncomfortable Arithmetic

The math on vibe-coded security is unflattering. Lovable's AI generated authentication code that was backwards. The platform's security scan flagged problems. The developer ignored the warnings. Lovable showcased the app to over 100,000 people. A researcher found 16 vulnerabilities in a few hours. Lovable closed the security report. The researcher went public. Lovable suddenly cared.

At every step where a human or a system could have intervened, the intervention either didn't happen or was ignored. The AI wrote insecure code. The developer didn't review it. The platform didn't enforce its own security recommendations. The support team didn't read the vulnerability report. The result was 18,697 user records sitting open on the internet, including student data from some of the most prominent universities in the United States.

For the vibe-coding industry, this case is a concrete example of what happens when "anyone can build an app" meets "nobody reviewed the code." The promise of AI-generated applications is speed and accessibility. The cost, apparently, is that the applications may authenticate their users in exactly the wrong direction - and the platforms hosting them may not notice until someone posts about it on Reddit.
