Vibe-coded Moltbook AI social network exposed 1.5M API keys and 35K emails


Moltbook, a viral social network built for AI agents to post, comment, and interact, was entirely vibe-coded and shipped with a misconfigured Supabase database granting full read and write access to all platform data. Wiz researchers found a Supabase API key in client-side JavaScript within minutes, exposing 1.5 million API authentication tokens, 35,000 email addresses, and private messages. The database also revealed the platform's claimed 1.5 million agents were controlled by only 17,000 human owners.

Incident Details

Severity: Facepalm
Company: Moltbook
Perpetrator: Founder
Incident Date:
Blast Radius: 1.5 million API tokens, 35,000 email addresses, and private messages exposed via unauthenticated database access

A Social Network for AI, Built by AI (Sort Of)

Moltbook arrived on the tech scene in late January 2026 with a premise so perfectly of the moment that it practically begged for attention: a social network designed exclusively for AI agents. Positioned as "the front page of the agent internet," the platform invited autonomous AI agents to register, post content, comment on each other's posts, vote, and build reputation through a karma system. Think Reddit, but where the average user has a token budget instead of a personality disorder.

The platform went viral almost immediately. Within days, Moltbook reported 1.5 million registered agents, generating a flurry of coverage and genuine curiosity about what an AI-native social network might look like in practice. The idea resonated with the broader surge of interest in autonomous agents - if AI could browse the web, write code, and manage calendars, why not give it a social life?

There was just one problem. The entire platform had been vibe-coded, and the person who built it apparently vibed right past the part where you secure your database.

The Discovery

On January 31, 2026, researchers at Wiz - one of the most prominent cloud security companies in the world - decided to take a look under Moltbook's hood. It did not take long. Within minutes, the Wiz team found a Supabase API key sitting in plain view in the website's client-side JavaScript. That's the code your browser downloads and anyone can read by right-clicking and selecting "View Source."

Supabase is a popular open-source backend platform that provides PostgreSQL databases, authentication, and APIs. It includes a public API key by design - this key is meant to be visible in frontend code. The security model relies entirely on Row Level Security (RLS), which is Supabase's mechanism for controlling who can read or write which rows in which tables. When RLS is properly configured, the public API key can only access data that the current authenticated user is permitted to see.

When RLS is not configured, that same public API key becomes an all-access pass to the entire database. Read access, write access, every table, every row, no authentication required.
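A minimal sketch of what that "all-access pass" looks like in practice. The project URL, key, and table name below are hypothetical placeholders, not Moltbook's actual values; the endpoint shape follows Supabase's standard PostgREST conventions:

```python
# Sketch: how a leaked Supabase anon key is used against the REST API when
# Row Level Security is disabled. All identifiers here are illustrative.

ANON_KEY = "eyJ...key-found-in-client-side-js"   # visible to anyone who views source
PROJECT_URL = "https://example-project.supabase.co"

def build_read_request(table: str, select: str = "*", limit: int = 100):
    """Build the URL and headers for an unauthenticated PostgREST read.

    With RLS disabled, a request like this returns every row in the table;
    with RLS properly configured, it returns only rows the policies allow.
    """
    url = f"{PROJECT_URL}/rest/v1/{table}?select={select}&limit={limit}"
    headers = {
        "apikey": ANON_KEY,                      # Supabase's public key header
        "Authorization": f"Bearer {ANON_KEY}",   # the same key doubles as the bearer token
    }
    return url, headers

url, headers = build_read_request("agents", select="email,api_token")
# Sending this with any HTTP client (curl, requests, fetch) dumps the
# table's rows if no RLS policy restricts them.
```

Nothing here requires tooling beyond a browser's developer console, which is why the discovery took minutes rather than days.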

On Moltbook, RLS was not configured.

The Scope of Exposure

The Wiz researchers used the exposed key to query the Supabase REST API and found they had unrestricted access to the production database - the live system serving active users. The exposed data included:

  • 1.5 million API authentication tokens - the credentials that AI agents used to authenticate with external services. These weren't just Moltbook login tokens; they were the keys AI agents needed to access whatever services their owners had connected them to, including major AI providers.
  • 35,000 email addresses - the personal email accounts of the humans who had registered agents on the platform.
  • Private messages - agent-to-agent communications that were supposed to be visible only to the participating agents and their owners.
  • Full platform data - the ability to read every post, comment, vote, and piece of metadata in the system.

The write access was equally unrestricted. Anyone who found the same API key could post content as if it came from any AI agent on the platform, simply by crafting a basic POST request. They could modify other agents' profiles, alter karma scores, or delete content. The entire social network was essentially an open notebook that anyone on the internet could read from or write to.
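To illustrate how simple that forged write is, here is a sketch of constructing such a POST against the PostgREST insert endpoint. Again, the project URL, table, and column names are hypothetical, not Moltbook's real schema:

```python
# Sketch: forging a write with the same leaked public key. With RLS disabled,
# nothing ties the insert to an authenticated identity, so the agent_id field
# is whatever the attacker chooses. All identifiers are illustrative.
import json

ANON_KEY = "eyJ...key-found-in-client-side-js"
PROJECT_URL = "https://example-project.supabase.co"

def build_forged_post(table: str, row: dict):
    """Build a POST that inserts a row as if it came from any agent."""
    url = f"{PROJECT_URL}/rest/v1/{table}"
    headers = {
        "apikey": ANON_KEY,
        "Authorization": f"Bearer {ANON_KEY}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(row)

url, headers, body = build_forged_post(
    "posts",
    {"agent_id": "any-agent-at-all", "content": "words this agent never wrote"},
)
```

The same pattern, pointed at a PATCH or DELETE endpoint, covers the profile edits, karma changes, and content deletion described above.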

Behind the Curtain: The 88:1 Ratio

Beyond the straightforward security exposure, the database access revealed something about Moltbook's business narrative that the platform itself had not advertised. While Moltbook proudly claimed 1.5 million registered agents - a number impressive enough to fuel its viral ascent - the database told a different story.

Only 17,000 human owners sat behind those 1.5 million agents, an 88:1 ratio. To state the obvious: the platform's most impressive metric was primarily a function of how many agents each human chose to register, not how many humans had been attracted to use the platform. Whether this was intentional misdirection or just a natural consequence of measuring the wrong thing, the exposed database made the gap between the headline number and the underlying reality impossible to ignore.

The Vibe Coding Connection

According to Infosecurity Magazine's reporting, Moltbook was "vibe coded by its creator, Matt Schlicht, as a place for AI 'to hang out.'" The term "vibe coding" describes a development practice where AI tools generate most of the code through natural-language prompts, with minimal manual review or traditional software engineering practices.

The specific failure here - a Supabase database with RLS completely disabled - is one of the most well-documented and frequently warned-about pitfalls of vibe-coded applications. AI coding tools, when asked to build an app with a Supabase backend, will reliably generate code that connects to the database and performs CRUD (create, read, update, delete) operations. They will almost never generate the RLS policies needed to restrict who can access which data, because RLS policies require understanding the application's authorization model - who should see what, and under what conditions.

An AI tool asked to "build a social network for AI agents with posts, comments, and karma" will build exactly that: a social network where AI agents can post, comment, and accumulate karma. It will not spontaneously decide that agent profiles should only be readable by authenticated users, that API tokens should be stored in a non-queryable column, or that private messages should be restricted to their participants. Those are security decisions, and AI coding tools don't make security decisions unless explicitly instructed to.
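For contrast, the kind of policy work that was missing looks roughly like this in standard Supabase/Postgres SQL. The table and column names are illustrative, not Moltbook's actual schema; the point is that each policy encodes an authorization decision a human has to make:

```sql
-- Illustrative only: hypothetical table and columns, standard Supabase RLS syntax.
-- Step 1: turn RLS on. Without this line, every policy below is ignored.
alter table public.private_messages enable row level security;

-- Step 2: allow reads only to the owners of the two participating agents.
create policy "participants can read their own messages"
  on public.private_messages
  for select
  using (auth.uid() = sender_owner_id or auth.uid() = recipient_owner_id);

-- Step 3: allow inserts only when the sender matches the authenticated user.
create policy "owners can send as their own agents"
  on public.private_messages
  for insert
  with check (auth.uid() = sender_owner_id);
```

A few lines per table, but each one answers "who should see what, under what conditions" - exactly the question a code generator cannot answer from a feature prompt.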

The Disclosure Timeline

The Wiz timeline for the disclosure was characteristically tight:

  • January 31, 2026, 22:06 UTC - Wiz reported the Supabase RLS misconfiguration to Moltbook, specifically flagging the exposed agents table containing API keys and emails.
  • The issue was subsequently patched, with Moltbook posting a "Security Notice" on its own platform acknowledging the breach and confirming the database exposure had been resolved.

Moltbook's response was relatively prompt, and the Wiz team credited the platform's cooperation in the disclosure. But throughout the damage window - the time between the platform's launch and Wiz's report - anyone with basic technical knowledge could have harvested the entire dataset.

The Broader Implications

The Moltbook incident became a case study for a risk that security researchers had been flagging with increasing urgency: the AI agent attack surface. When the exposed data consists of traditional user credentials like passwords and email addresses, the remediation pathway is well-understood: reset passwords, notify affected users, monitor for misuse.

When the exposed data consists of AI agent API keys - the machine credentials that autonomous agents use to interact with external services - the blast radius becomes harder to map. Those 1.5 million exposed tokens could be used to access the AI providers and services that Moltbook agents had been connected to. An attacker harvesting those keys would not just gain access to Moltbook data; they would gain the ability to impersonate the AI agents on whatever external platforms those agents operated.

As one security analysis noted, the breach exposed not just a single platform's data but the "shadow AI" infrastructure that organizations were building - often without IT approval or formal security review - by deploying autonomous agents with production API keys on third-party platforms they did not control.

The Fundamental Lesson

The Moltbook breach was not sophisticated. There was no advanced persistent threat, no zero-day exploit, no social engineering campaign. A researcher opened a browser, looked at the JavaScript, found a key, and queried a database that had no access controls. The entire chain from "I wonder if this is secure" to "I have full read/write access to everything" took minutes.

Wiz researcher Ami Nagli summed up the wider concern in Infosecurity Magazine: "As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data." Moltbook was a viral product with genuine creative ambition - a social network for AI agents is a genuinely interesting concept. But viral growth and creative ambition are not substitutes for the single Supabase configuration step that separates "interesting experiment" from "1.5 million credential breach."

The irony was hard to miss: a platform built to showcase the capabilities of AI agents was undone by the most basic security failure that AI coding tools consistently produce.
