Sears Home Services left AI chatbot calls and chats exposed online
Security researcher Jeremiah Fowler discovered three publicly exposed databases tied to Sears Home Services' AI support system, exposing 3.7 million chat logs, 1.4 million audio recordings, and text transcripts from 2024 to 2026. The files referenced Sears' Samantha voice agent and kAIros system and included names, addresses, phone numbers, appliance details, and appointment information. Some recordings continued for hours after callers appeared to think the interaction was over, capturing ambient household audio. Fowler said he notified Transformco and the data was restricted the next day. Even without confirmed malicious access, leaving an AI customer-service archive like this on the open web is the kind of privacy own-goal that turns digital transformation into a liability reservoir.
Sears the department-store empire is mostly a ghost. Sears Home Services, however, is still very much alive, scheduling appliance repairs and service calls at scale. Like every company trying to look modern on a budget, it also bolted an AI assistant onto the customer-contact layer. That assistant, according to security researcher Jeremiah Fowler, generated and stored a truly bleak quantity of sensitive data in publicly exposed databases.
In March 2026, Fowler reported that he found three separate databases tied to Sears Home Services that were not password-protected or encrypted. The exposed material included 3.7 million chat logs, 1.4 million audio recordings, and text transcriptions of calls spanning 2024 through 2026. The files referenced "Samantha," an AI virtual voice agent for Sears Home Services, and the name of the broader system, "kAIros."
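Fowler's report does not name the database software involved, but "not password-protected" typically means the database's HTTP API answers anonymous requests. The minimal sketch below simulates that misconfiguration with a stub server (the endpoint, record, and URL path are all hypothetical): the handler performs no authorization check, so a single unauthenticated GET retrieves the data.

```python
# Hedged illustration of an exposed database endpoint. The server below is a
# stand-in for any data store whose HTTP API answers anonymous requests; the
# record and URL path are invented for the example.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

RECORD = {"caller": "J. Doe", "phone": "555-0100", "appliance": "washer"}

class OpenEndpoint(BaseHTTPRequestHandler):
    def do_GET(self):
        # No Authorization header check of any kind -- the core misconfiguration.
        body = json.dumps(RECORD).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OpenEndpoint)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Anyone who finds the address gets the data: one GET, no credentials offered.
url = f"http://127.0.0.1:{server.server_address[1]}/chatlogs"
with urllib.request.urlopen(url) as resp:
    leaked = json.load(resp)

print(leaked["phone"])  # prints 555-0100
server.shutdown()
```

This is why researchers like Fowler can find such exposures with nothing more exotic than scanning for endpoints that respond to anonymous requests.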
There are ordinary data leaks, and then there are the kinds of leaks that quietly assemble a complete dossier on how strangers live inside their homes. This was much closer to the second category.
What Was Sitting in the Open
According to Fowler's report, the databases contained names, physical addresses, email addresses, phone numbers, product details, appointment information, repair discussions, and other operational data gathered during customer interactions. In one CSV file alone, he said he saw more than 54,000 complete chat logs from beginning to end.
WIRED's follow-up reporting confirmed the scale: millions of chat transcripts, well over a million audio files, and plain-text transcripts of phone calls, all openly reachable through the exposed databases before they were restricted. Fowler said he sent a responsible disclosure notice to Transformco, the parent company behind Sears Home Services, and the data was locked down the next day.
That is the good part of the story, if one insists on finding one. The bad part is everything that had to be true before that happened. Someone built or operated a customer-service system that captured this much personal information. Someone stored it in a way that let the data sit exposed on the public web. And nobody inside the organization appears to have noticed until an outside researcher did.
The Four-Hour Audio Problem
The number of exposed files is bad enough. The audio behavior is what makes the story feel especially invasive.
Fowler told WIRED that some call recordings continued for hours after customers appeared to believe the interaction had ended. He described hearing television audio, background conversation, and other household noise in recordings that kept running long after the actual service exchange. One screenshot in Fowler's report described transcripts continuing for up to four hours.
That changes the character of the incident. This is not merely "a chatbot stored your support ticket." It is closer to "an AI customer-service system may have captured and exposed slices of private home life that customers never meant to share in the first place." Even if no outsider actually downloaded the audio, the exposure itself is already a serious privacy failure. People calling about broken refrigerators or washing machines are not volunteering for a house-bugging experiment.
The ambient-recording angle also raises a second problem: voice data. Text chat leaks are bad. Audio leaks are worse because they carry far more context and can be reused in ways text cannot. A voice recording can contain health information, family details, background conversations, and the raw material for impersonation or fraud. Fowler explicitly warned about the broader risks of social engineering and voice-based scams.
Why This Is More Than a Generic Database Mistake
There is an easy objection here: exposed databases are normal security incidents, not uniquely AI incidents. Sometimes that objection is correct. In this case it is not the whole story.
The AI layer matters because it shapes what data gets collected, how much of it gets centralized, and what kind of operational archive the company creates. A classic customer-service system might store a ticket summary, a phone number, and a few account notes. An AI chat and voice stack stores transcripts, recordings, metadata, escalation logic, identifiers, timestamps, and often the entire conversational history because that history is useful for automation and analytics.
In other words, the AI deployment changes the blast radius even before anything is exposed. It creates a richer, more searchable, and more revealing dataset than the older support systems it is meant to replace. When that dataset is left open, the privacy loss is correspondingly worse.
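The blast-radius point can be made concrete with a schema comparison. The field lists below are hypothetical, not Sears' actual data models; they simply illustrate how much more an AI chat-and-voice stack persists per interaction than the classic ticketing system it replaces.

```python
# Hedged sketch: hypothetical schemas contrasting a classic support ticket
# with the richer record an AI chat/voice stack typically persists.
from dataclasses import dataclass, fields, field

@dataclass
class ClassicTicket:
    ticket_id: str
    phone: str
    summary: str               # a human agent's one-paragraph note

@dataclass
class AIInteractionRecord:
    ticket_id: str
    phone: str
    summary: str
    full_transcript: str       # every customer utterance, verbatim
    audio_uri: str             # the recording, possibly hours of ambient sound
    system_prompts: str        # internal prompts and escalation logic
    turn_timestamps: list = field(default_factory=list)  # per-turn metadata
    model_metadata: dict = field(default_factory=dict)   # routing, confidence

# Exposing one record now leaks 8 fields instead of 3 -- and the new fields
# (verbatim speech, audio, internal logic) are the most revealing ones.
print(len(fields(ClassicTicket)), len(fields(AIInteractionRecord)))  # 3 8
```

The same misconfiguration that would once have leaked a phone number and a one-line note now leaks verbatim conversation, audio, and the system's internal decision logic.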
Fowler's report also pointed out another AI-specific wrinkle: exposed chatbot logs can reveal internal flows, prompts, escalation decisions, and system behavior that would help a competitor imitate the system or help an attacker learn how to manipulate it. The customer data is the obvious problem. The exposed interaction logic is the quieter one.
The Customer-Service Trade
Sears Home Services used Samantha and kAIros to route scheduling, support, and phone interactions more efficiently. That is the standard business case for deploying AI at the front line: lower staffing cost, higher availability, and the promise that every customer interaction becomes analyzable, machine-readable input.
But that bargain only works if the operator treats the resulting data like a toxic asset. Once you persuade customers to explain who they are, where they live, what appliances they own, when technicians are coming, and what is broken inside the house, you have assembled an unusually useful dataset for scammers. Warranty fraud, fake technician calls, phishing texts, and identity-linking attacks all get easier when the attacker knows the victim really did book a repair, really does own the appliance, and really is waiting for a visit.
WIRED also reported that some of the exposed conversations showed customers getting stuck with glitchy bot behavior before being pushed toward humans anyway. That is the comic side of customer-service AI: the bot that insists it can help, then fails at the task, then hands the customer to a real person after harvesting the whole conversation first. The security side is less funny. If the system is going to collect all of that data before failing, it needs to protect it with more seriousness than a temporary marketing database.
The Accountability Gap
Fowler said he could not determine whether the databases were operated directly by Sears Home Services or by a contractor. That ambiguity is common and telling. Modern AI support stacks often involve a tangle of vendors, cloud infrastructure, integration partners, storage layers, and internal tooling. When something is left open, the public frequently gets a shrug instead of a clean answer about who actually owned the problem.
Transformco did not provide WIRED with a public explanation for how long the data had been exposed or whether anyone besides Fowler accessed it. That leaves customers with the standard post-incident non-answer: the records were eventually secured, but the public still does not know how long they were available or whether anyone else took a look.
The Real Lesson
A lot of AI customer-service deployments are sold as efficiency upgrades. This story is a reminder that they are also data-multiplication machines. They convert routine service interactions into giant archives of text, audio, metadata, and operational logic. If the operator does not lock that archive down, the efficiency gain comes bundled with a very large privacy bomb.
Sears Home Services did not need a sci-fi disaster here. It only needed the oldest security failure in the book: sensitive data stored where it should not have been visible. AI made the consequence worse by ensuring the exposed archive was far richer than a normal ticket log. That is the modern pattern. The bot does not have to say anything deranged to become a liability. Sometimes it just has to keep listening while nobody secures the pile of recordings afterward.