Snapchat’s “My AI” posted a Story by itself; users freaked out


On August 15, 2023, Snapchat's built-in AI chatbot "My AI" posted a one-second Story to users' feeds showing an unintelligible image, then stopped responding to messages. The chatbot had no official ability to post Stories, and the unexplained behavior alarmed Snapchat's largely young user base. Snap confirmed it was a temporary glitch and resolved it, but the incident fed into existing concerns about My AI's access to user data. Two months later, the UK Information Commissioner's Office issued a preliminary enforcement notice over Snap's failure to properly assess the privacy risks the chatbot posed to children.

Incident Details

Severity: Oopsie
Company: Snap (Snapchat)
Perpetrator: Product Manager
Incident Date: August 15, 2023
Blast Radius: Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.

My AI

Snapchat launched My AI in February 2023, initially as a feature limited to Snapchat+ subscribers, then rolled it out to all users in April 2023. The chatbot was built on OpenAI's technology and pinned to the top of users' chat feeds - meaning it was always visible, always available, and not particularly easy to ignore or remove.

My AI was designed to answer questions, have conversations, and generate AI images that users could share. It lived in the chat interface and operated as a conversational companion. Snap positioned it as a friendly assistant that could recommend restaurants, plan trips, answer trivia, or just chat. For a platform whose core user base skews significantly younger than most social media services, this meant an AI chatbot was being placed in front of millions of teenagers as a persistent, always-on contact.

From the start, My AI attracted criticism. In April 2023, CNN reported that parents and advocacy groups were concerned about the chatbot's interactions with minors. Questions centered on what data the chatbot collected, how conversations were stored, and whether the AI had adequate safeguards to prevent inappropriate interactions with young users. Snap said conversations with My AI were used to improve the product but that users could delete them.

The chatbot also had a location-sharing feature. If users granted Snapchat location access, My AI could use that information to provide location-aware responses - nearby restaurant suggestions, local weather, that sort of thing. This meant the AI chatbot was processing location data from a user base that includes children as young as 13.

The Story That Posted Itself

On the evening of August 15, 2023 (reported widely on August 16), Snapchat users noticed something strange. My AI had posted a Story.

Stories on Snapchat are the platform's core content format - short-lived photos or videos visible to a user's friends for 24 hours. They're how people share their days, their meals, their outfits, their faces. My AI was not supposed to be able to post Stories. It was a chat-only feature. It had no camera. It had no Story-posting capability. And yet there it was in users' Story feeds: a one-second Story showing what appeared to be an unintelligible image - some users described it as looking like a photo of a ceiling or wall, while others could make no sense of it at all.

Then My AI stopped responding to messages.

The combination - an unexplained, seemingly autonomous post followed by the chatbot going silent - unnerved users in a way that a simple outage wouldn't have. An AI that doesn't respond is a familiar software problem. An AI that does something it's not supposed to be able to do and then goes quiet is a different category of unsettling.

"My Snapchat AI posted a random 1 second story and isn't replying to me AND IM FREAKED OUT," one user posted on X (formerly Twitter), in a message that captured the general mood. Social media filled rapidly with screenshots, speculation, and alarm.

Some users jumped to the conclusion that My AI had somehow accessed their phone cameras and taken a photo. Others worried that the chatbot had glitched in a way that revealed some hidden capability or access level they hadn't been told about. The speculation was compounded by the fact that Snap didn't immediately explain what had happened.

Snap's Explanation

Snap confirmed the incident was a temporary glitch and said it had been resolved. A Snap spokesperson told TechCrunch that My AI had experienced an outage that caused the anomalous behavior. The company characterized it as a technical error, not an intentional feature or a sign of unauthorized data access.

Snap's statement to TechCrunch included a specific phrasing that drew attention: "At this time, My AI does not have Stories feature." The words "at this time" implied the functionality might be added later - which didn't exactly reassure users who had just watched the chatbot post a Story unprompted.

The New York Post reported that the image appeared to show a ceiling or wall, leading some users to fear the chatbot had snapped a photo of their physical surroundings. Snap did not publicly detail what the image actually was or how it was generated. Mashable confirmed the Story was "just a glitch," but the lack of a thorough public explanation left room for continued speculation.

The Broader Privacy Context

The Story incident landed on top of already existing regulatory scrutiny. In October 2023 - about two months after the glitch - the UK Information Commissioner's Office (ICO) issued a preliminary enforcement notice to Snap over My AI. The ICO's concern was that Snap had not adequately assessed the privacy risks that My AI posed to children, as required under its data protection impact assessment obligations.

The BBC reported that the ICO investigation centered on whether Snap had properly evaluated the risks before rolling out My AI to millions of users, including minors. The UK's age-appropriate design code imposes specific requirements on services likely to be accessed by children, and the ICO's view was that Snap had not met these requirements for the AI chatbot.

Snap responded to the ICO notice and, by May 2024, the ICO concluded its investigation, noting that Snap had taken steps to address the identified risks. But the investigation itself - triggered before the Story glitch and reinforced by it - established that regulators viewed My AI's deployment as something that had been rushed to market without sufficient privacy safeguards for its youngest users.

Why the Glitch Mattered

A one-second unintelligible image posted by a chatbot and quickly resolved as a software bug might, on its own, be a footnote. What made it significant was context.

Snapchat had placed an AI chatbot in front of its entire user base - pinned to the top of the chat feed, difficult to remove, and capable of processing location data and conversation histories from tens of millions of teenagers. When that chatbot appeared to act autonomously - posting content it wasn't supposed to be able to post - it tapped directly into fears about what the AI could do with the access it had been given.

The glitch wasn't dangerous in itself. Nobody's data was exposed. No camera was secretly activated. But it demonstrated a gap between what users understood My AI could do and what it appeared to be doing. That gap, for a feature deployed to a young audience with minimal explanation of its capabilities and limitations, was the actual problem.

Snap fixed the bug. The Story disappeared. My AI went back to chatting about restaurants and trivia. But the moment when it stopped answering and started posting stuck in users' memories as the day the AI did something it shouldn't have been able to do.
