AI chatbot app leaked 300 million private conversations
Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini - including discussions of self-harm, drug production, and hacking. A broader scan found 103 of 200 iOS apps had similar Firebase misconfigurations.
The Wrapper App With an Open Door
Chat & Ask AI, developed by Istanbul-based company Codeway, had built itself into one of the most downloaded AI chat applications on both the Google Play Store and the Apple App Store, claiming more than 50 million users. The app's business model was straightforward: it resold access to large language models from OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini) through a single mobile interface, offering limited free access to users who didn't want to manage separate subscriptions to each service.
The app stored every conversation. Every prompt, every response, every timestamp, every configuration detail - the complete history of how millions of people talked to their AI assistants - sat in a Google Firebase backend. Firebase is a Backend-as-a-Service platform from Google that provides cloud databases, authentication, and storage. It is also, when misconfigured, one of the most reliable ways to accidentally publish your entire user database to the open internet.
In January 2026, an independent security researcher known as Harry discovered that Codeway had done exactly that. The Firebase Security Rules for Chat & Ask AI - the configuration that determines who can read and write data - had been set to allow public reads. No authentication required. No special tools needed. Anyone with the right URL and basic knowledge of Firebase's REST API could query the database and pull back whatever they wanted.
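To illustrate what "the right URL and basic knowledge of Firebase's REST API" amounts to, here is a minimal sketch of an unauthenticated read against a Firebase Realtime Database with public read rules. The database name and path are hypothetical; the actual endpoint used in this incident was not published.

```python
import json
import urllib.request


def probe_url(db_name: str, path: str = "") -> str:
    """Build the public REST endpoint for a Firebase Realtime Database.

    Any node becomes readable at <db>.firebaseio.com/<path>.json when
    the Security Rules grant public read access.
    """
    return f"https://{db_name}.firebaseio.com/{path}.json"


def read_unauthenticated(url: str):
    """Issue a plain GET with no auth token; returns parsed JSON on success."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read())


# Hypothetical usage - with permissive rules, this dumps the entire node:
# data = read_unauthenticated(probe_url("example-app", "users"))
```

No exploit code, no credentials, no special tooling: a plain HTTP GET is the entire "attack."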
What Was Exposed
Harry accessed approximately 300 million messages from more than 25 million users. He extracted and analyzed a sample of around 60,000 users and over one million messages to confirm the scope before reporting the issue.
The database contained user files with their complete chat histories: every question they had asked the AI, every response they received, which model they used, custom names they had given their chatbot persona, timestamps, and internal configuration details. For a service that acted as a gateway to multiple AI providers, this meant the exposure captured the full breadth of how people used AI assistants in what they believed was a private setting.
The content of those conversations was exactly what you would expect when millions of people talk to an AI assistant they believe no one else can see. Harry reported finding discussions of self-harm and suicide, requests related to drug production, hacking queries, mental health struggles, workplace secrets, and intimate personal details. Many users treated their AI chatbot as a confidential sounding board - part therapist, part advisor, part diary - sharing things they would not willingly make public. All of that had been sitting in a database anyone could read.
The breach also extended beyond Chat & Ask AI itself. The Firebase misconfiguration affected Codeway's entire app ecosystem, meaning data from users of the company's other applications was similarly exposed.
The Firebase Problem
The specific vulnerability - misconfigured Firebase Security Rules - is among the best-documented and most preventable security failures in mobile app development. Firebase databases are not insecure out of the box, but developers must actively write Security Rules that match their application's authorization model; projects started in Firebase's "test mode" even ship with a deliberately permissive ruleset that is meant to be replaced before launch. When those rules are left permissive (or never properly written), the entire database becomes readable through Firebase's standard REST API.
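As an illustration of how small the difference is (these are generic Realtime Database rulesets, not Codeway's actual configuration, which was never published), a wide-open ruleset looks like this:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

while one that scopes each user's data to the authenticated owner looks like this:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

The fix is a handful of lines in a rules file, which is why remediation can take hours rather than weeks.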
The pattern is familiar enough that security researchers have given it its own category. Harry, after discovering the Chat & Ask AI vulnerability, built a scanning tool and tested 200 iOS apps for the same Firebase misconfiguration. He found that 103 of them - more than half - had the same flaw, collectively exposing tens of millions of stored files. He subsequently created a public registry called Firehound at firehound.covertlabs.io where users could check whether their apps were affected.
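The scanning approach described above can be sketched as a simple probe loop: request each app's database root without credentials and see whether the server returns data or a permission error. Everything below is illustrative - the database names, and the assumption that they can be extracted from app binaries, are hypotheticals; Harry's actual tool was not published.

```python
import urllib.error
import urllib.request


def fetch_status(url: str) -> int:
    """Return the HTTP status of an unauthenticated GET (stdlib only)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code


def is_publicly_readable(db_name: str, fetch=fetch_status) -> bool:
    """True if the database root answers an unauthenticated shallow read.

    A locked-down Realtime Database rejects the request (permission
    denied); a misconfigured one returns 200 with data. `shallow=true`
    asks for top-level keys only, so the probe transfers little data.
    """
    url = f"https://{db_name}.firebaseio.com/.json?shallow=true"
    return fetch(url) == 200


# Hypothetical scan over database names pulled from app binaries:
# open_dbs = [db for db in candidate_dbs if is_publicly_readable(db)]
```

A few dozen lines like these, pointed at 200 apps, are all it took to find 103 open databases.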
The 103-out-of-200 statistic is staggering on its own. It means that a researcher with a scanning script could walk through the App Store and find, in effectively every other app that uses Firebase, the same wide-open database configuration. Chat & Ask AI was the biggest name in the batch and the one that stored the most sensitive data, but the underlying problem was endemic to how mobile developers (and the AI coding tools that help them) handle backend configuration.
The Disclosure and Response
Harry disclosed the vulnerability to Codeway on January 20, 2026, following responsible disclosure practices. According to reporting from 404 Media and Malwarebytes, the company fixed the Firebase configuration across all of its applications within hours of notification. The speed of the fix was, in a grim way, proportional to the simplicity of the problem: changing Firebase Security Rules is a configuration update, not a code rewrite.
The harder question was how long the database had been exposed before Harry found it. The Malwarebytes report noted that more detailed analysis found 18 million users and 380 million messages exposed as of January 18, 2026 - two days before Harry notified Codeway. The database appeared to have been misconfigured from the time the app was deployed, raising the possibility that user data had been accessible for the entire operational life of the application.
There is no public evidence that anyone other than Harry accessed the exposed data before the fix. There is also no way to prove that no one did. Data exposed on the open internet can be copied, scraped, and redistributed without leaving a trace on the source system. For the 25 million affected users, the realistic assessment is that their private AI conversations may have been available to anyone who looked for them, for an unknown period of time.
What Users Actually Lost
The standard data breach notification template talks about "email addresses, names, and passwords." This breach was different in kind. What leaked was not biographical data or login credentials. It was the substance of private conversations in which people believed they were speaking to an AI in confidence.
A leaked email address can lead to spam. A leaked password can be rotated. A leaked conversation in which someone discussed suicidal thoughts, asked for help with substance abuse, confided workplace frustrations about their boss, or explored questions they were too embarrassed to ask another human - that is a categorically different kind of exposure. There is no "rotate your password" equivalent for a conversation you had with what you thought was a private AI assistant.
This distinction matters because AI chatbot wrapper apps like Chat & Ask AI are often marketed with language that implies privacy and confidentiality. Codeway's own materials referenced GDPR compliance and "enterprise-grade security" - claims that sit uncomfortably next to a Firebase database with public read access enabled.
The Wrapper App Risk
Chat & Ask AI was a "wrapper" app - it didn't run its own AI models but provided a mobile interface to models operated by OpenAI, Anthropic, and Google. This architecture introduces a specific risk that direct-to-provider usage avoids: a third party sits between the user and the AI, and that third party stores a copy of every conversation.
When a user chats through the official ChatGPT or Claude apps, their data is subject to OpenAI's or Anthropic's privacy policies and security infrastructure - companies that have invested heavily in data protection because their business depends on user trust. When a user routes those same conversations through a wrapper app, the data is also subject to whatever security practices the wrapper developer implemented. In this case, those practices included a Firebase database with no access controls.
The wrapper app market grew rapidly alongside the AI chatbot boom, as developers saw an opportunity to package multiple AI services into a single interface and monetize access through subscriptions and ads. Many of these apps were built quickly, some with AI coding tools, and shipped to app stores where download counts and ratings mattered more than security audits. Chat & Ask AI, with its 50 million users, was the largest known example of what happens when that approach meets a misconfigured backend.
The Scale Problem
The 300-million-message figure made this one of the larger data exposures of 2026, measured by record count. But the more telling metric was the 103-out-of-200 scan result from Harry's broader investigation. If more than half of iOS apps using Firebase have the same misconfiguration, the total volume of user data sitting in publicly accessible databases across the mobile app ecosystem dwarfs what was found in Chat & Ask AI alone.
The Firebase misconfiguration problem has been documented in security research papers, conference talks, and blog posts for years. Google has added warnings and default security improvements to Firebase's developer console. Security scanning tools flag the issue automatically. And still, in January 2026, an independent researcher could scan 200 apps and find 103 of them wide open. The tools to prevent the problem exist. The knowledge to prevent the problem is freely available. The problem persists because the incentive structure of mobile app development rewards shipping fast over shipping secure, and because neither Apple nor Google's app review processes check whether a submitted app's backend database has functioning access controls.