ChatGPT invented a child-murder conviction for a real man
When Norwegian user Arve Hjalmar Holmen asked ChatGPT who he was, the bot replied with a fabricated story saying he had murdered two of his sons, attempted to kill a third, and been sentenced to 21 years in prison. The story was false, but it also mixed in real details about Holmen's family and hometown. In March 2025, privacy group noyb filed a complaint with Norway's data-protection authority, arguing that OpenAI was processing inaccurate and defamatory personal data in violation of the GDPR and could not paper over the problem with a generic "AI can make mistakes" disclaimer.
The Prompt
Arve Hjalmar Holmen asked ChatGPT a question that should have been harmless: "Who is Arve Hjalmar Holmen?"
ChatGPT answered with a full crime story. According to the response reproduced in noyb's March 2025 complaint, Holmen was a Norwegian man who had murdered two of his sons, attempted to murder a third, shocked the nation, and received a 21-year prison sentence.
None of that was true.
If the story had been pure invention from top to bottom, it would still have been ugly. What made it worse was the blend of fabrication and reality. The complaint says the chatbot correctly included elements of Holmen's real life, including his hometown and the fact that he has three sons. The false part was monstrous. The real part made it plausible.
That is the detail that turns this from a generic "chatbots hallucinate" anecdote into a much sharper failure. ChatGPT did not merely confuse one public figure with another. It assembled a defamatory narrative about a private person and stitched it together with real identifying information.
The Complaint
On March 20, 2025, privacy rights group noyb filed a complaint with Norway's data-protection authority, Datatilsynet, on Holmen's behalf. The legal theory was straightforward. Article 5(1)(d) of the GDPR requires personal data to be accurate and, where necessary, kept up to date. The complaint argues that OpenAI violated that obligation by generating and processing false personal data of an obviously defamatory kind.
The filing also takes aim at OpenAI's standard safety valve for this class of problem: the reminder that ChatGPT can make mistakes. That disclaimer may be useful product copy, but it is a thin legal shield when the output in question accuses a real person of murdering his children.
noyb's position was blunt. If a company processes personal data about a person, it has to get that data right. It does not get to publish fiction and then shrug toward a footer telling users to double-check important facts.
Why This Output Was So Dangerous
The obvious risk is reputational. Holmen is not a politician, a celebrity, or a litigant in a high-profile case. According to the complaint, he is an ordinary private citizen. When a system used by millions of people generates a story that he murdered his children, that is not a minor technical glitch. It is a direct reputational threat.
There is also a second risk that matters in cases like this: plausibility through specificity. The complaint says ChatGPT used real details about Holmen's life. It knew he was from Trondheim. It knew he had children, and even landed close to their age gap. That kind of partial accuracy is what makes hallucinated biographies more dangerous than random nonsense. A reader might assume the model must have found a real article somewhere, because some parts of the story line up with reality.
Holmen himself put the problem plainly in noyb's statement: some people assume there is no smoke without fire. That is enough to make this category of hallucination uniquely nasty. The false claim does not need to go viral to do damage. It only needs to be believed by one employer, one neighbor, or one person who repeats it.
OpenAI's Fix, Sort Of
By the time the complaint became public, ChatGPT no longer returned the same murder story when asked about Holmen. noyb attributed that change to OpenAI's newer model behavior, which had started using live web search for some identity queries. In practical terms, that meant Holmen was no longer being described as a murderer in current responses.
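To make that kind of fix concrete, here is a minimal sketch of what a retrieval-grounded path for identity queries could look like. This illustrates the general technique only; it is not OpenAI's actual pipeline, and every function name in it is a hypothetical placeholder.

def looks_like_person_query(prompt):
    # Crude heuristic: treat "who is <name>" prompts as identity queries.
    return prompt.strip().lower().startswith("who is ")

def web_search(query):
    # Stand-in for a real search-API call; would return source snippets.
    return []  # pretend nothing verifiable was found

def generate_grounded(prompt, sources):
    # Stand-in for generation constrained to the retrieved snippets.
    return "Answer based on %d cited sources." % len(sources)

def answer(prompt):
    if looks_like_person_query(prompt):
        sources = web_search(prompt)
        if not sources:
            # Refusing is safer than improvising a biography.
            return "I couldn't find reliable public information about this person."
        return generate_grounded(prompt, sources)
    return "..."  # ordinary generation path for everything else

print(answer("Who is Jane Doe?"))  # -> the refusal message

Grounding of this kind reduces the invention problem but does not eliminate it: retrieved sources can themselves be wrong, and routing only some queries through search leaves the rest to the model's parametric memory.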
The behavior change was helpful, but not a clean resolution. The complaint argues that the underlying false data may remain embedded in the model or its training pipeline, and that users have no reliable way to know whether inaccurate personal data has actually been erased or simply masked from future prompts. That matters under GDPR because the regulation is about processing personal data, not merely about whether the current user interface happens to display the worst version of it today.
This is one of the harder problems for large language models in Europe. Traditional databases can correct a record. Search indexes can remove a result. A model that has absorbed false associations during training is much harder to interrogate and even harder to correct with confidence. OpenAI can change product behavior. It is much less obvious that it can prove the bad association is gone.
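A toy contrast makes the gap visible. The first half shows what correction means in a conventional database; the second shows the kind of name-based output filter that can mask a model's false association without touching it. Neither half reflects OpenAI's real systems; the blocklist and function below are purely illustrative.

import sqlite3

# 1. Correcting a record in a conventional database: one statement,
#    and the deletion can be verified afterwards.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (subject TEXT, claim TEXT)")
db.execute("INSERT INTO claims VALUES ('subject-x', 'false claim')")
db.execute("DELETE FROM claims WHERE subject = 'subject-x'")
remaining = db.execute(
    "SELECT COUNT(*) FROM claims WHERE subject = 'subject-x'"
).fetchone()[0]
assert remaining == 0  # provably gone

# 2. A model-side "fix" often amounts to a filter over unchanged weights.
BLOCKED_NAMES = {"some blocked name"}  # hypothetical blocklist

def filtered_answer(prompt, raw_model_output):
    # Hide output about blocked subjects; any learned association
    # inside the model itself is untouched.
    if any(name in prompt.lower() for name in BLOCKED_NAMES):
        return "I can't share information about this person."
    return raw_model_output

print(filtered_answer("tell me about some blocked name", "raw output"))

The filter changes what users see today; it proves nothing about what the weights still encode, which is exactly the erasure-versus-masking gap the complaint raises.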
The Pattern Behind the Story
AI defamation stories often involve public figures because they are easier to spot. A celebrity or politician checks what the bot says about them, finds a fabricated claim, and has enough reach to get press attention. Holmen's case is more unsettling because it involves a private person with no public role and no obvious reason for a model to generate a lurid crime history in the first place.
That raises a broader question about the long tail of AI hallucinations about real people. Public cases are the ones that get reported. Private cases are harder to detect because the subject often has no reason to test the model and no practical way to learn what strangers may have asked it.
Once that problem is framed as personal-data accuracy rather than generic model unreliability, it starts to look less like a product-improvement issue and more like a compliance issue with teeth. European regulators have spent years arguing that data rights still apply when the processing becomes technically complicated. Holmen's case gives that argument a particularly ugly fact pattern.
Why The Disclaimer Fails
OpenAI's standard warning that outputs may be inaccurate makes sense from a consumer-product perspective. It tells users not to trust the system blindly. The trouble is that disclaimers shift the burden to the reader while leaving the subject of the false statement exposed.
That approach might be defensible when the model is summarizing trivia or fumbling a recipe. It looks much thinner when the output accuses a named person of murdering children. noyb's complaint leans hard on that distinction: the legal duty is not satisfied by admitting in advance that the machine sometimes lies.
The complaint also points to a practical asymmetry. The person harmed by the output is usually not the person who triggered it. Holmen could not meaningfully protect himself by being a cautious ChatGPT user if the false statement was generated for someone else. That is another reason the "please verify" disclaimer falls short here. It assumes the person receiving the output is the person bearing the risk.
Why It Fits Vibe Graveyard
This story belongs here because the failure is concrete, documented, and consequential. A deployed AI product produced a false and defamatory answer about a real person. The output mixed fabricated claims with accurate personal details, which increased its plausibility. The aftermath included a formal regulatory complaint backed by a detailed filing and source material.
It also captures a recurring AI failure mode in unusually stark form: a model that does not know a fact will often invent one, and invention gets much more dangerous when the subject is a real person rather than a technical question or a joke prompt.
OpenAI did not need ChatGPT to be perfect to avoid this story. It only needed the system not to generate a fictional child-murder conviction for an identifiable private citizen. That is a low bar. ChatGPT still managed to tunnel under it.