Lovable AI builder shipped apps with public storage buckets
Security researcher Matt Palmer discovered that applications generated by Lovable, a vibe-coding platform, shipped with insufficient Supabase Row-Level Security policies that allowed unauthenticated attackers to read and write arbitrary database tables. The vulnerability, tracked as CVE-2025-48757, affected over 170 apps and exposed sensitive data including personal debt amounts, home addresses, API keys, and PII. A separate researcher found 16 vulnerabilities in a single Lovable-hosted app that leaked more than 18,000 people's data. Lovable's response was widely criticized as inadequate.
Incident Details
Lovable is a vibe-coding platform that lets users describe what they want in natural language and generates full web applications from those descriptions. It markets itself as a tool for building production-ready apps, complete with authentication and database integration, without requiring users to write code. The platform uses Supabase as its default backend, giving generated applications a PostgreSQL database with an API layer that clients can query directly from the browser.
This architecture requires Row-Level Security (RLS) to work safely. RLS is a PostgreSQL feature that controls which rows in a database table a given user can access. Without properly configured RLS policies, any client with the database's public API key - which is embedded in every Lovable app's frontend code by design - can query any table and retrieve any row. The API key is not a secret. It is explicitly meant to be public. RLS is the only thing standing between that key and unrestricted database access.
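To make the mechanism concrete, here is a toy Python model of row-level filtering - not Supabase or PostgreSQL code, and the table contents and policy predicate are invented for illustration. It shows the core property the article describes: with no policy attached, every caller sees every row.

```python
# Toy model of Row-Level Security. Rows, table, and predicate are invented.

def select(rows, policy=None, claims=None):
    """Return the rows a client may see. With no policy attached, RLS is
    effectively off and every row is visible to any caller."""
    if policy is None:
        return list(rows)  # no RLS: the public API key sees everything
    return [r for r in rows if policy(r, claims or {})]

profiles = [
    {"owner": "alice", "debt": 12_000},
    {"owner": "bob",   "debt": 55_000},
]

# A correct policy: a row is visible only to its owner.
own_rows = lambda row, claims: claims.get("sub") == row["owner"]

print(select(profiles))                              # no policy: both rows leak
print(select(profiles, own_rows, {"sub": "alice"}))  # only alice's row
print(select(profiles, own_rows))                    # unauthenticated: nothing
```

The real enforcement happens inside PostgreSQL via `CREATE POLICY`, but the shape of the failure is the same: when no predicate exists, the public API key is sufficient to read everything.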
Lovable's AI did not reliably generate correct RLS policies for the applications it built.
The discovery
Matt Palmer, a security researcher, identified the vulnerability while examining Linkable, a Lovable-built site that generates pages from LinkedIn profile data. Linkable was actively maintained by a Lovable employee, which made it a reasonable indicator of the platform's default security posture. Palmer found that by removing authorization headers from HTTP requests to the Supabase API, he could bypass all access controls. This worked because the apps' RLS policies - where they existed at all - granted unauthenticated requests the same access to the data as authenticated ones.
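The shape of that test can be sketched in Python. The host, table name, and key below are placeholders (not Linkable's real endpoints), and the requests are only constructed, never sent - the point is the difference between the two request shapes.

```python
# Sketch of the header-stripping test. Host, table, and key are placeholders;
# requests are built but not sent.
import urllib.request

SUPABASE_URL = "https://project-ref.supabase.co"  # placeholder
ANON_KEY = "public-anon-key"  # embedded in every frontend by design

# Normal frontend request: the public anon key plus the user's JWT.
authed = urllib.request.Request(
    f"{SUPABASE_URL}/rest/v1/profiles?select=*",
    headers={"apikey": ANON_KEY, "Authorization": "Bearer <user-jwt>"},
)

# The probe: the same query with the Authorization header removed. Against
# a missing or permissive RLS policy, this unauthenticated request still
# returns rows.
probe = urllib.request.Request(
    f"{SUPABASE_URL}/rest/v1/profiles?select=*",
    headers={"apikey": ANON_KEY},
)

print("Authorization" in probe.headers)  # False: the probe carries no identity
```

With correct RLS, the stripped request should return an empty result or an error; returning data is the signal that policies treat anonymous callers as trusted.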
Palmer reported the vulnerability to Lovable on March 21, 2025, one day after confirming the RLS misconfiguration. The vulnerability was assigned CVE-2025-48757 and classified as CWE-863 (Incorrect Authorization). The CVE description states that through April 15, 2025, "an insufficient database Row-Level Security (RLS) policy in Lovable allows remote unauthenticated attackers to read or write to arbitrary database tables of generated sites."
The "write" part is important. Follow-up testing conducted on May 24, 2025 confirmed that the vulnerability extended beyond unauthorized data access to include malicious data injection. An attacker could not only read every row in every table but could insert or modify data in the database, potentially corrupting application state, injecting phishing content, or escalating access further.
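A write probe follows the same pattern as the read test: an unauthenticated POST against the REST API. Again the host, table, and payload are invented placeholders, and the request is constructed rather than sent.

```python
# Sketch of an unauthenticated INSERT probe. Host, table, and payload are
# placeholders; the request is built but not sent.
import json
import urllib.request

SUPABASE_URL = "https://project-ref.supabase.co"  # placeholder
ANON_KEY = "public-anon-key"

payload = json.dumps({"owner": "attacker", "note": "injected row"}).encode()

write_probe = urllib.request.Request(
    f"{SUPABASE_URL}/rest/v1/profiles",
    data=payload,
    method="POST",
    headers={"apikey": ANON_KEY, "Content-Type": "application/json"},
)

# Without an INSERT policy restricting who may write, the REST layer would
# accept this row from an entirely unauthenticated caller.
print(write_probe.get_method(), "Authorization" in write_probe.headers)
```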
The scope
Analysis by Superblocks found that more than 170 Lovable-generated applications were affected by CVE-2025-48757. The exposed data included personal debt amounts, home addresses, API keys, credentials, and other personally identifiable information. These were not test applications. They were live, publicly deployed apps built by users who trusted the platform to handle security fundamentals.
Separately, Taimur Khan, a tech entrepreneur with a software engineering background, audited a single Lovable-hosted application and found 16 vulnerabilities. Six were critical. That one app had leaked more than 18,000 people's data. Khan reported his findings through Lovable's support channel. His ticket was reportedly closed without a response.
Khan's assessment was blunt: "If Lovable is going to market itself as a platform that generates production-ready apps with authentication 'included,' it bears some responsibility for the security posture of the apps it generates and promotes."
Why the AI got it wrong
The root cause was not that Lovable's AI was incapable of generating RLS policies. It was that the policies it generated did not match the business logic of the applications it built. RLS policies are highly context-dependent - they need to describe, in SQL, exactly which users should access which rows under which conditions. A social media app, a financial dashboard, and an e-commerce site all require different access patterns. The AI was generating policies that looked functional during normal use but failed under adversarial conditions, where requests arrived without authentication or with modified parameters.
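That failure mode - a policy that passes happy-path testing but not adversarial testing - can be modeled in a few lines. Both predicates below are invented illustrations, not policies Lovable actually emitted.

```python
# Toy model of a policy that works in normal use but fails adversarially.
# Both predicates are invented for illustration.

rows = [{"owner": "alice"}, {"owner": "bob"}]

def visible(rows, policy, claims):
    return [r for r in rows if policy(r, claims)]

# Looks right in the happy path: logged-in users see their own rows. The bug
# is the over-broad second branch, which an unauthenticated request (no
# "role" claim) satisfies trivially.
def generated_policy(row, claims):
    return claims.get("sub") == row["owner"] or claims.get("role") != "authenticated"

# Correct: require an authenticated role first, then match ownership.
def correct_policy(row, claims):
    return claims.get("role") == "authenticated" and claims.get("sub") == row["owner"]

anon = {}  # request carrying only the public API key, no user JWT
print(visible(rows, generated_policy, anon))  # anon sees every row
print(visible(rows, correct_policy, anon))    # anon sees nothing
```

During normal use both policies behave identically for logged-in users, which is why the flaw survives casual testing; only the anonymous request exposes the difference.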
Lovable's architecture made this worse. The platform uses a client-driven design where the browser communicates directly with Supabase's REST API. There is no intermediary server to enforce access controls. This is a legitimate architectural pattern - Supabase is designed for it - but it places the entire burden of data security on RLS policies being correct. If the AI generates incomplete or incorrect policies, nothing else stops unauthorized access.
Supabase's own documentation explicitly warns about this. Public buckets (for file storage) and unrestricted database access are both addressed in Supabase's security guides, which explain the differences between public and private configurations and provide guidance on hardening the data API. The information needed to build secure applications on Supabase is well-documented. The problem is that Lovable's AI did not consistently apply it.
The platform responsibility question
Lovable's response to the vulnerability drew criticism from the security community. The company's position was that users are responsible for addressing security issues in their applications before publishing them. This response misses the point of the platform's value proposition. Lovable's users are, by definition, people who chose not to write code themselves. Telling them to review and fix AI-generated database security policies is like selling a car and telling the buyer to check the brakes before driving.
The security researchers involved in disclosing the vulnerability described Lovable's response as "inadequate and non-transparent." Palmer's report was acknowledged, but the broader pattern of insecure defaults was not addressed with the speed or visibility that the security community expected.
This problem is not unique to Lovable. Every vibe-coding platform that generates backend code faces the same tension between ease of use and security. The platforms promise that users can build applications without understanding the underlying technology. But database security is not an optional feature that can be added later - it needs to be correct from the moment the application first accepts user data. If the AI generates code that looks like it works but does not properly restrict data access, users have no way to know until someone exploits it.
Public buckets and storage
Beyond the database RLS issue, Lovable-generated applications also had problems with Supabase storage bucket configurations. Supabase allows developers to create public or private storage buckets for file uploads. Public buckets are accessible to anyone with the URL. Private buckets require authentication. Lovable's AI was generating applications that stored user-uploaded files in public buckets, making uploaded content accessible to any visitor who could construct or guess the file URL.
For applications handling user profile pictures, this might be a minor concern. For applications handling sensitive documents, financial records, or medical information, public storage buckets represent a significant data exposure. The fix - configuring buckets as private by default and generating signed URLs for authorized access - is straightforward, but it requires the AI to make that choice during code generation rather than defaulting to the simpler public configuration.
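The signed-URL pattern mentioned above can be sketched with an HMAC: the server signs a file path plus an expiry, and the storage layer verifies the signature before serving the file. The key, paths, and URL shape here are invented; Supabase's real signed-URL implementation differs in its details.

```python
# Sketch of the signed-URL pattern for private buckets. Key, paths, and URL
# shape are invented placeholders.
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # held server-side, never shipped to the browser

def sign_url(path: str, expires_at: int) -> str:
    """Produce a time-limited URL for a private object."""
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"/storage/{path}?expires={expires_at}&sig={sig}"

def verify(path: str, expires_at: int, sig: str, now: int) -> bool:
    """Serve the file only if the signature matches and hasn't expired."""
    expected = hmac.new(SECRET, f"{path}:{expires_at}".encode(),
                        hashlib.sha256).hexdigest()
    return now < expires_at and hmac.compare_digest(sig, expected)

exp = int(time.time()) + 3600
url = sign_url("medical/report.pdf", exp)
sig = url.split("sig=")[1]

print(verify("medical/report.pdf", exp, sig, int(time.time())))  # authorized
print(verify("medical/other.pdf", exp, sig, int(time.time())))   # tampered path
```

Because the signature binds the path and expiry to a server-held secret, guessing or enumerating file URLs no longer grants access - which is exactly the property a public bucket lacks.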
Aftermath
CVE-2025-48757 was formally documented through April 15, 2025, suggesting that some remediation occurred at the platform level around that date. However, the ongoing discoveries by Palmer (testing through May 24) and Khan (finding 16 vulnerabilities in a single app, with The Register reporting his findings in February 2026) indicate that the underlying pattern - AI generating insecure code that users cannot evaluate - persisted beyond the initial fix.
The incident became one of the more concrete examples of the security risks inherent in vibe-coding platforms. Building a web application that handles user data requires getting security right at multiple layers simultaneously: authentication, authorization, database access control, file storage permissions, API rate limiting, input validation. A human developer might get some of these wrong. Lovable's AI was getting them wrong systematically, at scale, across every application it generated, because its training and prompting did not prioritize these concerns at the level required for production use.