LA Times had to pull AI "Insights" after it softened the Klan


The Los Angeles Times launched an AI feature called "Insights" in March 2025 to label opinion pieces, summarize them, and generate an opposing viewpoint. It immediately attached itself to a Gustavo Arellano column about Anaheim's history with the Ku Klux Klan and produced language suggesting the 1920s Klan could be framed as a response to social change rather than as an explicitly hate-driven movement. The feature was removed from that article within a day. The newspaper had bolted an automated both-sides machine onto a hate-group history piece and then acted surprised when that went badly.

Incident Details

Severity: Facepalm
Company: Los Angeles Times
Perpetrator: Executive
Incident Date: March 2025
Blast Radius: Public backlash; reputational damage to the paper; newsroom distrust of the feature; the Klan article's framing overshadowed by the AI add-on

A Newspaper Added a Debate Button to Itself

In early March 2025, the Los Angeles Times rolled out an AI feature called "Insights" on opinion pieces. The feature did three things. It tried to place a piece on the political spectrum, produced a machine-generated summary, and then generated an "opposing viewpoint" so readers could sample another side without leaving the page.

That may have sounded clever in a product meeting. In practice it meant the paper built a bot whose job was to algorithmically locate a counterargument, even when the subject was not a debatable tax rate or a zoning dispute but the Ku Klux Klan.

The failure happened almost immediately. Gustavo Arellano had written a column about the centennial of Anaheim voters removing Klan members from the city council, a story about organized racism in Southern California and the failure of public memory around it. The new AI feature attached itself to that column and produced language saying that "local historical accounts occasionally frame the 1920s Klan as a product of white Protestant culture responding to societal changes rather than an explicitly hate-driven movement." It also suggested that some critics saw discussion of the Klan's past as distracting from Anaheim's present-day diversity.

The Machine Found a "Perspective" Nobody Needed

The problem was not that readers misunderstood a neutral feature. The feature produced a familiar kind of AI mistake: it smoothed an ugly historical reality into a bloodless balancing exercise. Instead of recognizing that the Klan is a terrorist hate movement and that the column was documenting that history, the system treated the article as input for a posture generator. The result was language that sounded like a half-remembered academic caveat jammed into a moral vacuum.

That is exactly the kind of failure a newsroom should have seen coming. Journalism already has a long record of drawing criticism for forcing false balance onto subjects that are not symmetrically arguable. The Times managed to automate the instinct. It took a column condemning the Klan and added a little machine-produced note implying there might be another way to look at things, which is one method of ensuring the technology department becomes part of the story.

The backlash was immediate. Reporters, critics, and readers circulated screenshots. The feature was removed from Arellano's article within roughly a day of launch. The Guardian reported that the output had downplayed the Klan. AP described the blowback over the wording and noted that the perspective bullets were taken off the piece. Even inside the Times, by AP's account, there was skepticism about the feature.

The Product Theory Was Flawed Before the Output Arrived

The deeper issue was not one bad sentence. It was the product logic behind "Insights." Owner Patrick Soon-Shiong had described the tool as a way to provide varied viewpoints and help readers navigate contentious issues. That premise assumes every opinion article benefits from a machine-generated summary and a machine-generated dissent.

Many do not. Sometimes the article itself is the argument. Sometimes the context is the point. Sometimes the "other side" is already in the story as the thing being criticized. And sometimes, as with the Klan, the effort to synthesize a counter-perspective does not broaden understanding. It launders extremism into the language of polite disagreement.

Large language models are good at producing plausible connective tissue. They are not good at moral salience. Give one a column about racist political organizing and tell it to add nuance, and it may decide that the missing ingredient is soft-focus historical relativism. That is not a bug in the sense of a random glitch. It is the expected output of a system optimized to sound balanced rather than to exercise editorial judgment.

The Newsroom Got a Product Lesson in Public

The Times did not publish this feature in a vacuum. News organizations across the industry have been under pressure to show they are not falling behind on AI, especially in products tied to engagement, personalization, and audience retention. Automated summaries are easy to pitch. Opposing viewpoints are easy to market as reader service. The harder question is whether either one improves the journalism.

In this case, the answer was no. The AI add-on became more memorable than the reporting and commentary it was supposed to complement. Arellano's own response in the Times made that plain. He argued that some of the subsequent headlines overstated what the feature had said, but he also acknowledged that the output was clumsy and out of context. That distinction matters for precision, but it does not rescue the feature. A newspaper should not need a follow-up column explaining that its new AI tool was merely fuzzy and historically tone-deaf rather than fully pro-Klan.

Product teams often talk about AI as a layer that can sit harmlessly on top of existing work. Newsrooms know better, or should. Presentation changes editorial meaning. An automatically generated note placed beneath a column inherits the authority of the publication that placed it there. Readers do not experience it as a separate experiment run by the same corporate family. They experience it as part of the article.

Why This Fits the Site

This story belongs on Vibe Graveyard because the AI output reached readers in a production setting and altered the meaning of published journalism in a concrete, embarrassing way. It was not merely a policy controversy about whether newspapers should use AI. The feature went live, produced a bad output on a sensitive topic, triggered backlash, and had to be pulled back from the article.

The blast radius was reputational rather than physical, but it was real. The Times invited readers to question the judgment of the paper, annoyed its own journalists, and shifted attention away from an article about local Klan history toward a debate about why the publication thought a machine-generated counterpoint was needed there at all.

There is also a narrower lesson for media companies. A generic "different viewpoints" engine is risky enough on contemporary politics. On historical subjects involving racism, extremism, or organized violence, it becomes a machine for manufacturing context collapse. The model does not understand which topics can tolerate flattening and which ones cannot. That is supposed to be the editor's job.

The Los Angeles Times tried to add synthetic balance to opinion journalism. The first memorable result was an AI note that softened a Klan history column. That is one way to test a new feature, though usually not the preferred one.
