Syndicated AI book list ran in major papers with made-up titles
A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real. The rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content.
Incident Details
On Sunday, May 18, 2025, the Chicago Sun-Times included a 64-page special section called "Heat Index: Your Guide to the Best of Summer" in its print edition. Inside was a "Summer reading list for 2025" recommending fifteen books. The list looked normal enough - real authors, plausible-sounding titles, short summaries of each book's plot. Readers who tried to buy the books, however, ran into a problem. Ten of the fifteen titles did not exist.
The fake books were attributed to real, well-known authors. Isabel Allende was credited with a novel called Tidewater Dreams, described as her "first climate fiction novel." Delia Owens was listed as the author of another nonexistent book. The descriptions were detailed and specific, which made them more convincing - plot synopses, themes, even genre classifications. One fake title was described as following "a programmer who discovers that an AI system has developed consciousness - and has been secretly influencing global events for years." None of it was real.
The list was not created by the Sun-Times newsroom. It was produced by Marco Buscaglia, a Chicago-based freelance writer working for King Features Syndicate, a content distribution arm of Hearst Newspapers. King Features sold the content to media clients across the country, and at least two major papers ran it: the Chicago Sun-Times and, in at least one print edition, The Philadelphia Inquirer.
How a freelancer's shortcut reached millions
Buscaglia told 404 Media that he had used AI to assist his research for the book recommendations and other stories in the section. He said he normally fact-checked AI output but failed to do so this time. His explanation was straightforward: he used AI as a research tool, the AI hallucinated, and he published the hallucinations without verification.
The production chain that put fabricated books in front of readers of two major newspapers involved multiple steps and zero quality checks at any of them. Buscaglia wrote the content using AI. King Features Syndicate packaged and distributed it. The Sun-Times and Inquirer received it through their syndication agreements and published it in print. At no point between AI generating fake book titles and ink hitting newsprint did anyone verify that the books existed.
This is worth understanding because it is not a story about a rogue AI experiment or a newsroom deliberately replacing journalists with chatbots. It is a story about a freelancer using AI as a shortcut in a workflow where nobody was checking the output. A simple search for any of the ten fake titles on a bookstore website or library catalog would have revealed the problem instantly. That search did not happen.
The syndication problem
King Features Syndicate is not a small operation. It is one of the oldest and largest content syndicators in the United States, distributing comics, columns, puzzles, and feature content to newspapers and media outlets nationwide. It is owned by Hearst, one of the largest media conglomerates in the country. Content that King Features distributes reaches a significant audience through the combined readership of its client newspapers.
The syndication model is designed for efficiency. A single piece of content is produced once and distributed to many outlets, saving each individual newspaper from having to produce that content independently. This works well when the content is accurate. When the content is fabricated, the same efficiency amplifies the error. One freelancer's failure to fact-check became misinformation printed in multiple major newspapers simultaneously.
The incident exposed a gap in how syndicated content is handled by receiving newsrooms. The Sun-Times stated on Bluesky that the reading list "is not editorial content and was not created by, or approved by, the Sun-Times newsroom." The Sun-Times Guild, the union representing the newspaper's staff, said in a statement on X that the syndicated content was "produced externally without the knowledge of the members of our newsroom." Chicago Public Media's marketing director Victor Lim said the organization was investigating how the list made it into print. Chris Bell, the Sun-Times' news director, stated that the reading list "recommended books that do not exist" and that Chicago Public Media was "actively investigating" other content in the section.
These statements all point in the same direction: nobody at the Sun-Times reviewed the syndicated content before publishing it. The firewall between the newsroom and syndicated content meant that professional editors and fact-checkers never looked at the material. The content arrived from King Features and went to print.
Timing and context
The fake reading list appeared two months after the Chicago Sun-Times announced that 20% of its staff had accepted buyouts as Chicago Public Media, the Sun-Times' nonprofit parent organization, dealt with fiscal hardship. NPR noted this timing in its coverage. Fewer staff means less capacity for oversight, review, and the kind of basic quality control that would have caught ten fake books before they reached print.
This context does not excuse the failure, but it explains the conditions that made it possible. Newsrooms running lean have less bandwidth for checking syndicated content that is supposed to arrive ready to publish. The entire point of syndicating content is that the receiving newsroom does not have to produce or extensively review it. When the syndication source fails quality control, the receiving newsroom has no safety net.
Gabino Iglesias, an author and NPR Books contributor, framed the problem bluntly: "How many full-time book reviewers are there in the U.S.?" The question was rhetorical. Dedicated book reviewers have been cut steadily across American newspapers for years. A reading list that would once have been produced by someone with deep knowledge of the publishing landscape was instead outsourced to a freelancer who outsourced the research to an AI.
The hallucination pattern
The specific failure mode here is one of the most well-documented problems with large language models: hallucination in the form of plausible-sounding but entirely fabricated information. AI text generators do not look up books in a database. They predict what tokens are likely to follow other tokens based on patterns in their training data. Asked for a reading list, they will generate text that looks like a reading list - real author names paired with titles that sound like books those authors would write.
The AI correctly matched real authors to plausible genres and writing styles. Isabel Allende writes literary fiction set in Latin America, so the AI generated a literary-sounding title attributed to her. The descriptions included specific plot details, themes, and genre markers that were internally consistent and convincing. This is what makes AI hallucination particularly dangerous in contexts where the output is presented as factual: it looks right. It reads right. It just is not real.
Book recommendations are a category of content where hallucination is both likely and easily verifiable. LLMs hallucinate frequently about specific bibliographic details - titles, publication dates, ISBNs, attribution. At the same time, verifying whether a book exists takes seconds on any major bookseller's website. The gap between the ease of verification and the failure to verify is what made this incident so widely mocked.
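To make the "seconds to verify" point concrete, here is a minimal sketch of the check that never happened, using Open Library's public search API (a free catalog lookup; the endpoint and `numFound` field are part of its documented interface, but the helper names here are illustrative, not anything the people involved actually used):

```python
# Sketch: verify that a title/author pair corresponds to a real book
# by querying Open Library's search endpoint. Assumes network access.
import json
import urllib.parse
import urllib.request

OPEN_LIBRARY_SEARCH = "https://openlibrary.org/search.json"

def has_matches(response: dict) -> bool:
    """Return True if a search response contains at least one catalog record."""
    return response.get("numFound", 0) > 0

def book_exists(title: str, author: str) -> bool:
    """Query Open Library for the given title and author."""
    query = urllib.parse.urlencode({"title": title, "author": author, "limit": 1})
    with urllib.request.urlopen(f"{OPEN_LIBRARY_SEARCH}?{query}", timeout=10) as resp:
        return has_matches(json.load(resp))

# Running the fifteen recommendations through a loop of book_exists()
# calls, e.g. book_exists("Tidewater Dreams", "Isabel Allende"),
# would have flagged the ten fabricated titles before publication.
```

Any bookseller or library-catalog search would serve equally well; the point is that the check is a one-line lookup, not an investigation.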
Aftermath
Both newspapers issued corrections and apologies. The Philadelphia Inquirer acknowledged that the list had run in at least one of its print editions. The Sun-Times published its own statement and investigation. The story was covered by the Washington Post, the New York Times, NPR, 404 Media, Snopes, Axios, and the New York Post, among others.
Buscaglia said he wanted to take full responsibility for the error. King Features, as the distributor, faced questions about its own quality control processes for content it distributes under its brand to major newspapers.
The incident became one more entry in the growing list of cases where AI-generated content passed through editorial processes unchecked and was published as fact. The common thread in these cases is not that AI generated bad content - it does that reliably. The common thread is that humans in the production chain did not check the output. The tools for verification existed and were trivial to use. Nobody used them.
For the Sun-Times, the embarrassment arrived at precisely the wrong moment. A newspaper already weathering staff cuts and financial pressure did not need its credibility damaged by ten fake books it did not create and did not review. For readers, the incident was a reminder that the name of a trusted newspaper on a piece of content no longer guarantees that a human journalist produced it, or even read it, before it was printed.