Vibe-coded dating safety app leaked 72,000 private images and 1.1 million messages to 4chan


Tea, a women-only dating safety app with over four million users, suffered three data breaches in July 2025 that exposed 72,000 private images - including 13,000 photos of women holding government-issued IDs - and more than 1.1 million private messages containing deeply personal accounts of relationships, trauma, and abuse. The exposed data circulated on 4chan and hacking forums. The app's founder later admitted to building it with contractors and AI tools without personal coding knowledge. Security researchers attributed the breaches to missing authentication, unsecured legacy databases, and development practices that prioritized speed over security. Multiple class-action lawsuits and privacy regulator investigations followed.

Incident Details

Perpetrator: Executive
Severity: Catastrophic
Blast Radius: 72,000 private images including 13,000 government IDs exposed; 1.1 million private messages leaked to hacking forums; 4+ million users affected; class-action lawsuits filed; regulatory investigations opened

A Safety App With No Safety

Tea launched in 2023 with a premise that was genuinely compelling: a women-only platform where users could share dating experiences, warn others about problematic behavior, and vet potential partners. Think of it as a collective knowledge base for dating safety, where the information came from the people who needed it most.

By July 2025, the app had climbed to the top of Apple's free-app charts and accumulated over four million users. Women were uploading highly sensitive information: personal accounts of relationships gone wrong, descriptions of abusive behavior, names and identifying details of people they were warning others about, and - for identity verification - photographs of themselves holding government-issued IDs. The app asked for this level of trust because it was specifically designed to be a safe space.

Three data breaches in a single month proved it was not.

The First Breach

The first breach targeted a legacy Google Firebase storage bucket. Firebase is a popular backend-as-a-service platform, and storage buckets are where apps keep files - in this case, user-uploaded images. The bucket in question was a remnant from an earlier version of the app's infrastructure, left behind during a migration to a newer system.

The legacy bucket had no authentication protecting it. An attacker who found it could access its contents without credentials, without bypassing security controls, without doing anything particularly clever. The bucket was simply open.
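Firebase Storage access is governed by declarative security rules, so "open" here means the rules (or their absence) permitted unauthenticated reads. The sketch below illustrates the general shape of that misconfiguration and a baseline fix; it is a hypothetical example in Firebase's rules syntax, not Tea's actual configuration:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // A misconfiguration of the kind described: any request,
    // from anyone, with no credentials, can read every file.
    match /{allPaths=**} {
      allow read: if true;
    }

    // A minimal baseline instead: require a signed-in user.
    // (Real apps storing ID photos need far stricter per-user rules.)
    // match /{allPaths=**} {
    //   allow read: if request.auth != null;
    // }
  }
}
```

The point is how small the gap is: a single `if true` versus `if request.auth != null` is the difference between a locked bucket and one readable by anyone who finds its URL.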

Inside were approximately 72,000 images. Of those, 13,000 were photographs of women holding their government-issued identification cards - driver's licenses, passports, national ID cards - submitted as part of the app's identity verification process. The remaining 59,000 images came from posts, private messages, and comments within the app.

Thirteen thousand photographs of women holding their government IDs, stored in an unsecured bucket, accessible to anyone who stumbled across it. The verification images were supposed to prove the users were real. Instead, they gave attackers government-grade identity documents tied to the users' faces.

The Second Breach

Days later, a second breach exposed something worse. Over 1.1 million private messages exchanged between Tea users from February 2023 to July 2025 - essentially the app's entire message history from launch to the time of the breach - became accessible.

The content of those messages was not casual. Women used Tea to share experiences with dating and relationships that they did not share anywhere else. The messages included detailed accounts of abusive partners, descriptions of sexual assault, discussions of trauma, and deeply personal narratives about vulnerability and fear. Many messages contained names, phone numbers, and specific locations that could identify both the women who wrote them and the men they were describing.

This data was not just private in the conventional data-breach sense of "someone's email address got exposed." It was private in the sense that the women who shared it had made a deliberate decision to share it only within what they believed was a secure, women-only environment. They shared things they would not have put on social media, would not have told friends, might not have told anyone except the anonymous community they trusted to keep it between them.

That trust was misplaced. The messages ended up on 4chan and hacking forums.

The Scale of the Harm

The exposed data created layers of vulnerability. Women whose ID photos were leaked had their government identification documents in the hands of strangers on forums not known for their respectful treatment of women's personal information. Women whose messages were exposed had their most private narratives - descriptions of abuse, trauma, and intimate relationships - available for anyone to read, search, and potentially use to identify and target them.

The messages were particularly dangerous because they often contained identifying information about third parties - the people the women were warning others about. If an abusive ex-partner found messages describing his behavior, linked to identifying details that traced back to the woman who wrote them, the safety app would have accomplished the exact opposite of its stated purpose.

Multiple class-action lawsuits were filed. Privacy regulators in several jurisdictions opened investigations. The company engaged third-party cybersecurity experts, took affected systems offline, and implemented what it described as additional security measures. Whether those measures would have prevented the breaches if they had been in place from the start is a question the company has not directly addressed.

The Vibe Coding Connection

After the breaches became public, reporting revealed the circumstances under which Tea had been built. The app's founder acknowledged building it with contractors and AI coding tools without personal coding or software engineering knowledge. The development approach prioritized getting a product to market quickly. It succeeded at that goal: the app launched, grew rapidly, and reached millions of users.

What it did not prioritize - or, more precisely, what the development approach was not equipped to prioritize - was the kind of security architecture that storing government IDs and abuse narratives requires. An unsecured legacy storage bucket is not a sophisticated attack vector. It is the kind of oversight that a security review would catch in the first hour. Missing authentication on a database containing sensitive user data is not a zero-day exploit. It is a configuration step that was never completed.

Security researchers who analyzed the breaches identified the root causes as missing authentication and authorization policies, an unsecured legacy database that was never decommissioned after a platform migration, and development practices consistent with vibe coding - building fast with AI tools, shipping to production, and moving on without the security review that traditional development processes include (or at least aspire to include).
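The "missing authentication" finding is the kind of thing a trivial external probe detects: if a credential-free HTTP request to a storage or database endpoint returns data rather than a 401/403, the resource is open. A minimal sketch of such a check, with a hypothetical URL (the classification logic, not the endpoint, is the point):

```python
# Hedged sketch: probe an endpoint with no credentials and classify
# the response. A 200 to an unauthenticated GET means anyone who
# finds the URL can read the data - the failure mode researchers
# described for Tea's legacy Firebase bucket.
import urllib.error
import urllib.request


def classify_exposure(status_code: int) -> str:
    """Map the HTTP status of a credential-free request to a finding."""
    if status_code == 200:
        return "open"          # readable without any authentication
    if status_code in (401, 403):
        return "locked"        # server demanded credentials
    return "inconclusive"      # redirects, not-found, server errors


def probe(url: str) -> str:
    """Issue a GET with no auth headers and classify the result."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_exposure(resp.status)
    except urllib.error.HTTPError as err:
        return classify_exposure(err.code)


# Hypothetical usage (bucket name is illustrative, not Tea's):
# probe("https://firebasestorage.googleapis.com/v0/b/example-app.appspot.com/o")
```

This is roughly the entire sophistication level of the attack: no exploit development, just a request that should have been rejected and was not.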

The Trust Geometry

The particular cruelty of this breach lies in the relationship between what the app asked of its users and what it did with what they gave it.

Tea asked women to upload photos of themselves holding government identification. It asked them to share their most sensitive personal experiences. It asked them to provide identifying details about people who had harmed them. It asked for all of this because the app's value proposition depended on that level of trust and disclosure. A dating safety app where no one shares anything sensitive is a dating safety app that serves no purpose.

The app's security posture was wildly mismatched with the sensitivity of the data it collected. An unsecured Firebase bucket and a database with no access controls might be acceptable (though not recommended) for an app that stores recipe bookmarks or workout logs. For an app that stores government IDs and abuse narratives, those gaps are a betrayal of the implicit promise the app made to every user who decided to share something difficult.

The founder's admission about using AI tools and contractors without coding expertise adds a specific dimension to the failure. Building software quickly is not inherently dangerous. Building software quickly that handles sensitive data, without the security knowledge to understand what "sensitive data" requires in terms of infrastructure, access controls, encryption, and retention policies - that is where the development approach becomes the story.

What Remained After

The images appeared on 4chan. The messages circulated on hacking forums. The lawsuits were filed. The regulators began their investigations. The company issued statements about engaging security experts and taking systems offline.

None of that retrieved the 13,000 photos of women holding their IDs from the places they had already been shared. None of it unsaid the 1.1 million private messages that were written in confidence and published without consent. The data, once exposed, does not come back. The women who trusted a safety app with their most vulnerable disclosures learned that the app could not keep a Firebase bucket locked.

Four million users signed up for a platform built to protect them. The platform was built with AI tools, by people without the security expertise to protect the data those users entrusted to it. When the data leaked, it leaked in the worst possible direction: the intimate, personal, sometimes desperate words of women trying to keep themselves safe, broadcast to the parts of the internet least interested in their safety.