Gannett pauses AI sports recaps after mockery
In August 2023, Gannett - the largest newspaper chain in the United States - deployed an AI service called LedeAI to auto-generate high school sports recaps for the Columbus Dispatch and other papers. The articles went viral on social media for their robotic phrasing, missing player names, and bizarre constructions like "close encounter of the athletic kind." Several articles required corrections appended with notes about "errors in coding, programming or style." Gannett paused the experiment and said it would add "hundreds of reporting jobs" alongside AI tools, though the connection between the two claims was unclear.
The Pitch
Gannett, which owns roughly 200 daily newspapers across the United States including USA Today and dozens of local outlets, had a perennial coverage problem: high school sports. These games matter intensely to the communities that play and watch them, but the economics of local journalism meant there often weren't enough reporters to cover them. Games went unrecorded. Scores didn't make the paper. Parents, coaches, and students noticed.
LedeAI offered a fix. The service, provided by Source Media Properties and its CEO Jay Allred, was pitched as an automated tool that could take box scores and game data and turn them into readable articles - not generative AI in the ChatGPT sense, but a template-driven system that would fill in the blanks from structured data. The articles would carry the byline "LedeAI" and be published in Gannett papers, covering games that would otherwise get no coverage at all.
The logic was defensible on paper. If a high school football game between two small-town teams wasn't going to get a reporter anyway, an automated recap from box score data was better than nothing. The Washington Post had attempted something similar in 2017 with its Heliograf system, generating automated articles about local high school football in the D.C. area. That experiment had gone more or less smoothly.
LedeAI's rollout at Gannett did not.
"Close Encounter of the Athletic Kind"
In August 2023, readers began sharing LedeAI articles on social media. What had been intended as functional, invisible coverage turned into public entertainment and then embarrassment.
The articles were conspicuously robotic. They lacked player names, favoring generic references to "the team" and "the squad." They used bizarre, almost alien phrasing - the line that circulated widest was a description of a game as a "close encounter of the athletic kind," apparently the system's attempt to make a close score sound colorful. Other articles contained factual errors in scores and team names.
A Columbus Dispatch article published on August 18 got the most attention, going viral on what was then still called Twitter. Readers didn't just notice the bad writing; they recognized it instantly as machine output. The articles read like a system processing data without any understanding of what it was writing about - which is precisely what they were.
The Dispatch and other Gannett papers appended correction notices to several of the articles. The standardized note read: "This AI-generated story has been updated to correct errors in coding, programming or style." The phrase "errors in coding, programming or style" was doing a lot of work in that sentence. The errors were in the writing. Calling them "coding" errors reframed a journalism problem as a software problem.
The Pause
By late August, Gannett announced it was pausing the LedeAI experiment. Jay Allred confirmed to WNYC's On the Media that "Gannett put an indefinite pause on the project of reporting high school sports results using AI with us."
A Gannett spokesperson issued a statement that tried to contextualize the AI experiment within a broader investment narrative: "In addition to adding hundreds of reporting jobs across the country, we are experimenting with automation and AI to build tools for our journalists and add content for our readers." The spokesperson also noted that the AI tool was not replacing human reporters but covering events that wouldn't have been covered otherwise.
That argument was technically true. But it sidestepped the main issue, which was quality. The articles that LedeAI produced weren't just below the standard of a human reporter - they were below the standard of a readable article. A high school athlete whose game recap described their performance in phrases that a reader would mistake for a joke was not being served by the coverage; they were being mocked by it.
Gannett said it never framed the experiment as replacing journalists. But Gannett had spent years laying off journalists. The company had cut roughly a third of its newsroom staff between 2019 and 2023. Whatever its stated intentions, deploying an AI tool to produce content that used to be written by reporters - at a company that had been firing reporters - invited a particular interpretation.
LedeAI vs. Generative AI
An important technical distinction: LedeAI was not a large language model in the GPT sense. It was described as a non-generative AI system, closer to a template engine that filled structured data into pre-written sentence patterns. It took box scores - final scores, team names, dates, locations - and slotted them into article frameworks.
The results made clear that the templates themselves were the problem. The phrasing was repetitive, generic, and occasionally surreal because the system had a limited library of sentence structures and no judgment about when a given phrase was appropriate. "Close encounter of the athletic kind" wasn't a hallucination; it was a template that triggered on close games. The system picked it because it matched a condition, not because it made sense.
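The mechanics can be sketched in a few lines. The following is a hypothetical illustration of a condition-triggered template engine of the kind described above, not LedeAI's actual code; the team names, conditions, and sentence patterns are invented. The point it demonstrates is that each template fires on a data condition, so a "close game" rule emits its canned phrase regardless of whether the phrase reads naturally.

```python
# Hypothetical sketch of a condition-triggered template engine,
# invented for illustration. Not LedeAI's actual implementation.

TEMPLATES = [
    # (condition on score margin, sentence pattern)
    (lambda m: m <= 3,  "{winner} survived a close encounter of the athletic kind against {loser}"),
    (lambda m: m >= 21, "{winner} rolled past {loser}"),
    (lambda m: True,    "{winner} defeated {loser}"),
]

def recap(game):
    """Fill the first matching template from box-score data alone."""
    home, away = game["home"], game["away"]
    hs, aws = game["home_score"], game["away_score"]
    winner, loser = (home, away) if hs >= aws else (away, home)
    margin = abs(hs - aws)
    # The engine picks whichever template's condition fires first.
    # It has no judgment about whether the phrase suits the game:
    # every one-point thriller gets the same canned line.
    for condition, pattern in TEMPLATES:
        if condition(margin):
            sentence = pattern.format(winner=winner, loser=loser)
            return f"{sentence}, {max(hs, aws)}-{min(hs, aws)}."

print(recap({"home": "Westerville North", "away": "Westerville Central",
             "home_score": 21, "away_score": 20}))
```

A one-point game always triggers the first rule, which is exactly the failure mode readers noticed: the phrase is not wrong about the data, it is simply a fixed string that no editorial judgment ever reviews in context.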
This made the failure different from later AI journalism incidents in which generative AI produced outright fabrications. LedeAI's errors were mostly matters of style and readability, with some factual errors mixed in from data-processing bugs. The articles were accurate enough about who played whom, but they described the games in language no human reporter would ever use.
Context: A Year of AI Journalism Experiments
The Gannett incident landed in a year when media companies were testing AI continuously and publicly face-planting. In January 2023, CNET had been caught using AI to write financial explainer articles without clear disclosure. Those articles contained factual and conceptual errors, and CNET paused the experiment after corrections piled up. In November 2023, a few months after Gannett's pause, Sports Illustrated would be exposed for publishing product reviews under fake author names with AI-generated headshots, supplied by a third-party company called AdVon Commerce.
Each company framed its AI experiment differently - as efficiency, as expanded coverage, as innovation. Each hit the same wall: the output wasn't good enough, the disclosure wasn't clear enough, and the public reaction was brutal.
For Gannett, the particular irony was that the experiment targeted exactly the kind of coverage that communities most associate with their local newspaper. High school sports are personal. The players are somebody's kid, somebody's classmate, somebody's neighbor. When those games get covered, people read the stories. And when they read a story about their kid's football game that describes it as a "close encounter of the athletic kind" under a byline that says "LedeAI," the gap between what their local paper used to be and what it has become is hard to miss.
The experiment was paused. LedeAI went quiet. Gannett continued to add and cut jobs in roughly equal measure. The high school games continued, and most of them still don't make the paper.