Mediahuis suspended senior journalist over AI-invented quotes
Mediahuis suspended veteran journalist Peter Vandermeersch after reporting found AI-generated quotes in his work. Euronews reported that 15 of 53 articles included fabricated expert quotes, with multiple quoted individuals saying they had not made the attributed remarks. Vandermeersch acknowledged relying on tools such as ChatGPT, Perplexity, and Google's Notebook tools to summarize source material, then trusting the outputs too much.
The Quotes Were Too Convenient
Peter Vandermeersch was not a random blogger discovering ChatGPT between errands. He was a veteran European journalist, a former NRC editor-in-chief, former chief executive of Mediahuis Ireland, and a Mediahuis fellow focused on journalism and society. He had spent years in the kind of senior newsroom roles where "verify the quote" is not a slogan. It is the job.
In March 2026, that made the revelation harder to explain. Reporting summarized by Euronews, NL Times, and The Journal described an NRC investigation finding that Vandermeersch's Mediahuis work contained fabricated quotes generated by AI tools. Euronews reported that 15 of 53 articles contained invented expert quotes, and that several people cited in the articles confirmed they had never said the words attributed to them.
Mediahuis suspended him from the fellowship role. Vandermeersch acknowledged that he had used tools including ChatGPT, Perplexity, and Google's Notebook tools to summarize lengthy reports and source material. The tools produced quote-shaped text. He trusted the summaries too much, used the material as if it reflected real source language, and put words into other people's mouths.
The Classic AI Journalism Trap
The failure mode is simple because it is ordinary. A journalist has a stack of reports, articles, and background material. An AI tool can summarize the pile quickly. The summary sounds fluent. It produces crisp lines that capture a point better than the underlying report did. The writer is on deadline, the output feels plausible, and the quote gets inserted.
The problem is that journalism does not need quote-shaped insight. It needs what the person actually said. A paraphrase can summarize meaning, but a quote claims exact language. If the model invents a sentence and the journalist wraps it in quotation marks, the error is no longer just an imprecise summary. It is fabrication.
That distinction is basic, and it is why this incident landed so hard. The public has already watched AI-generated stories invent sources, fake authors, and mangle facts. A senior journalism figure using AI summaries in a way that generated fake expert quotes reinforces the fear that newsrooms will adopt tools that increase output while weakening verification.
Expertise Did Not Save the Workflow
One reason this belongs beside the Ars Technica fabricated-quotes story is that both incidents involved people who should have understood the danger. The lesson is not that journalists are ignorant about hallucinations. The lesson is that knowing about hallucinations in general does not prevent a specific hallucination from slipping through a workflow that rewards speed and plausible prose.
AI text tools are especially treacherous for quotes because they are good at producing sentences that sound like the kind of thing an expert would say. The wording may fit the theme. The person may have expressed a similar idea somewhere. The model's version may even be more concise than the real source. That makes it tempting and dangerous.
A newsroom process has to assume the temptation exists. If an AI tool touches source material, every attributed quote needs to be checked against the original transcript, recording, email, report, or article. If that original source does not contain the words, the words are not a quote. No amount of seniority changes that.
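The rule above is mechanical enough to sketch in code. A minimal, illustrative check: normalize whitespace and quote characters, then require the attributed words to appear verbatim in the stored source text. The function names and the normalization choices here are assumptions for the sketch, not any newsroom's actual tooling, and a real pipeline would still route failures to a human rather than auto-reject.

```python
import re
import unicodedata


def normalize(text: str) -> str:
    """Collapse whitespace and unify curly quotes so cosmetic
    differences don't hide (or fake) a match."""
    text = unicodedata.normalize("NFKC", text)
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()


def quote_appears_in_source(quote: str, source_text: str) -> bool:
    """True only if the exact (normalized) quote occurs verbatim in
    the source material. A paraphrase, however faithful, fails."""
    return normalize(quote) in normalize(source_text)


# Hypothetical source document for illustration.
source = 'The report\u2019s author wrote: \u201cWe saw a clear decline in trust.\u201d'

print(quote_appears_in_source("We saw a clear decline in trust.", source))
print(quote_appears_in_source("Trust collapsed entirely.", source))
```

The point of the sketch is the asymmetry: a failed match does not prove the quote is wrong, but only a verbatim match against the stored source can clear it for quotation marks.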
The Brand Damage Was Built In
Mediahuis did not just have a factual correction problem. It had a trust problem. Vandermeersch's role was tied to journalism and society, and the episode involved the exact issue newsrooms have been warning readers about: AI hallucinations presented as real information. The contradiction became part of the story.
The damage also spread beyond one newsletter or column. When a senior media figure fabricates quotes through overreliance on AI, every newsroom experiment with generative tools becomes harder to defend. Editors can say the tool is only for background research, translation, brainstorming, headline suggestions, or summarization. Readers will reasonably ask how they know the boundary held.
That does not mean newsrooms can never use AI. It means the controls need to be visible inside the work. Source attribution, quote verification, correction policies, and disclosure rules matter more when tools can produce confident falsehoods at scale. A newsroom cannot rely on the model to know the difference between a real quote and a tidy synthesis of surrounding ideas.
The Graveyard Lesson
The Mediahuis suspension is not a story about one journalist failing to understand technology. It is a story about a verification habit being outsourced to a tool that cannot bear the responsibility. AI can summarize, but it cannot certify that a quote exists unless the workflow forces it back to the underlying source and a human checks the result.
The fix is not complicated, which makes the failure more frustrating. Treat every AI-assisted quote as unverified. Store the source document. Check the exact words. Prefer paraphrase when the underlying material supports the idea but not the language. Correct quickly when an error is found. Do not let a model upgrade an interpretation into a quotation because the sentence reads well.
Journalism has always had ways to invent authority: anonymous sourcing abuse, quote polishing, sloppy paraphrase, and wishful editing. Generative AI just industrializes the smooth version. It can produce a sentence that feels true enough to publish and false enough to harm trust.
That is the cemetery inscription: the quote sounded perfect because no one actually said it.