The New York Times dropped Alex Preston after an AI-assisted review copied a Guardian review


A January 6, 2026, New York Times review of Jean-Baptiste Andrea's Watching Over Her was updated on March 30 with an editor's note acknowledging that it contained language and details similar to an earlier Guardian review. On March 31, The Guardian reported that the Times had cut ties with freelance reviewer Alex Preston after he admitted using an AI tool that pulled material from the earlier review into his draft. It was not a hallucination story. It was an equally useful reminder that AI-assisted writing can turn plagiarism into something a newsroom does by accident and publishes anyway.

Incident Details

Severity: Facepalm
Company: The New York Times
Perpetrator: Freelance reviewer
Incident Date:
Blast Radius: Published New York Times review carried unattributed language from a Guardian review; editor's note added; freelance relationship terminated; reputational damage for a flagship culture desk

Not Hallucinated, Just Stolen the Modern Way

The AI journalism failure people expect is the hallucination. A model invents facts, fabricates quotes, or makes up a person who does not exist, and the newsroom publishes it because nobody checked. That failure mode is common because the machine is good at sounding authoritative while remaining detached from the underlying material.

The Alex Preston incident at The New York Times was a little different and, in some ways, more revealing. The issue was not that the AI produced pure fiction. The issue was that it inserted language and details from someone else's review, and the review then ran in one of the most prominent newspapers in the world under a real byline.

That is a useful distinction. Generative tools do not just hallucinate. They also recombine. When newsrooms or freelancers use them as drafting assistants, the system can silently drag existing text into the output in ways that feel original during drafting and look suspiciously familiar once published.

The Timeline

On January 6, 2026, The New York Times published Alex Preston's review of Jean-Baptiste Andrea's Watching Over Her. Nothing about the page initially signaled trouble. It was a normal review under a freelance reviewer's byline, the sort of cultural copy a major newspaper publishes every day.

On March 30, the Times appended an editor's note to the review. A reader had alerted the paper that the piece included language and details similar to an earlier Guardian review of the same book. The note said the Times spoke to Preston, who acknowledged using an AI tool that incorporated material from the Guardian review into his draft and that he failed to identify and remove it.

The next day, March 31, The Guardian reported that the Times had cut ties with Preston. EL PAIS followed with its own account on April 1. Preston told The Guardian he was hugely embarrassed and had made a serious mistake. The Times stated plainly that reliance on AI and use of unattributed work by another writer violated its standards.

That is about as clean a factual chain as these stories ever get: public review page, public editor's note, public admission, public termination of the freelancer relationship.

Why This Was Not Just Regular Plagiarism

Newsroom plagiarism cases existed long before generative AI. Writers have always been capable of copying, paraphrasing too closely, or laundering source material through their own prose. What AI changes is the mechanism and the subjective experience of the writer. The tool can insert borrowed wording during drafting without making that borrowing feel as deliberate as copying and pasting from another tab.

That does not make it less serious. If anything, it makes the workflow more dangerous because it lowers the friction of misconduct while preserving deniability. The writer can tell himself he is using a drafting aid, not stealing. The machine can turn source contamination into something that feels like normal revision. The editor sees a polished copy file, not an obvious block of pasted text.

By the time anyone notices, the story has already acquired the institutional authority of publication.

The editor's note on the Times review captured the actual failure better than most corporate cleanups do. The problem was not only the AI tool. The problem was the writer's reliance on it and the failure to identify and remove unattributed material before submission. That is the whole issue. The machine supplied the contamination. The human supplied the trust.

Why the Overlap Was So Damaging

The Guardian's report pointed to repeated similarities between Preston's review and Christobel Kent's earlier review, including character descriptions and the larger assessment of the novel's relationship to Italy. These were not isolated coincidences of plot summary. They were phrasing-level echoes in a genre where voice matters.

Book reviews are not commodity copy. They are one of the places where a publication is supposed to sound most recognizably like itself, or at least like the intelligence of the critic it hired. If a review starts carrying over another critic's language through an AI tool, the resulting failure is not just unattributed borrowing. It is corruption of the part of criticism that is supposed to be distinctly authored.

This is why the incident landed harder than a routine standards note. Readers can tolerate some factual correction in a book review. They are less willing to accept that a reviewer's voice may have been partly assembled by a machine that swallowed another review first.

The Lesson for Editors Is Not Subtle

Newsrooms often frame AI policy around disclosure, as if the central question is whether a writer should announce that a tool helped with drafting. The Preston case suggests the more important question is whether the workflow creates a reliable way to detect source contamination before publication.

If a freelancer uses AI while drafting criticism, what exactly is the editor reviewing? Are they reviewing the writer's judgment, or the writer's judgment plus a synthetic blend of training data, live-source contamination, and autocomplete? If nobody can answer that with confidence, the workflow is not safe enough for publication no matter how tasteful the disclosure language sounds.

The Times did the right thing once the issue was identified. It added the note, acknowledged the standards violation, and cut ties with the contributor. But those are cleanup steps. The harder question is how the copy got to publication before anyone compared it against the relevant source material.
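
That comparison is not hard to approximate mechanically. Below is a minimal sketch of a pre-publication overlap check, assuming plain-text copies of the draft and of any candidate source reviews are available; the function names and the threshold are illustrative, not a description of any tool the Times or The Guardian actually uses.

def ngrams(text, n=6):
    # Lowercase word n-grams, ignoring punctuation.
    import re
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(draft, source, n=6):
    # Share of the draft's n-grams that also appear verbatim in the source.
    d, s = ngrams(draft, n), ngrams(source, n)
    return len(d & s) / len(d) if d else 0.0

def flag_for_review(draft, sources, threshold=0.02):
    # sources: mapping of source name -> source text (hypothetical inputs).
    # Returns the sources whose verbatim overlap exceeds the threshold.
    return [name for name, text in sources.items()
            if overlap_ratio(draft, text) > threshold]

A check this crude catches only verbatim runs, not close paraphrase, so it is a floor for the comparison described above, not a substitute for an editor reading the source review.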

That question matters because culture desks are precisely where AI adoption is sometimes treated as harmless. Nobody is asking a book-review tool to diagnose cancer or route aircraft. It is "just" criticism. The Preston case is a reminder that low-stakes subject matter does not mean low-stakes authorship standards.

Journalism's New Plagiarism Problem

The Vibe Graveyard pattern here is not that a bot wrote a review and nobody noticed. It is that AI assistance blurred authorship enough to produce a published standards breach at The New York Times. The output reached the public. The publication had to attach a corrective note. A contributor relationship ended. Another newsroom's work had been effectively mined in the process.

This is the more professional, more plausible, and therefore more dangerous cousin of obvious AI slop. It is not fake authors with fake headshots. It is a respected outlet, a known writer, a real review page, and a machine-assisted drafting process that carried over someone else's prose.

The old plagiarism model assumed the writer knew exactly what they were taking. The AI era adds a newer and somehow pettier version: let the machine do some of the taking for you, then act surprised when the newspaper has to publish a note explaining where the language came from.
