Microsoft’s AI poll on woman’s death sparks outrage
In late October 2023, Microsoft Start republished a Guardian article about the death of Sydney water polo instructor Lilie James and auto-attached an AI-generated "Insights" poll asking readers, "What do you think is the reason behind the woman's death?" - with options of murder, accident, or suicide. Readers, unaware the poll was Microsoft's AI creation, blamed the Guardian's journalist directly, with some demanding the writer be fired. Guardian Media Group CEO Anna Bateson wrote to Microsoft President Brad Smith calling the poll an inappropriate use of generative AI. Microsoft deactivated all AI-generated polls on news articles and launched an investigation.
The Poll
Lilie James was a 21-year-old water polo instructor at a school in Sydney. In late October 2023, she was found dead, and the circumstances of her death were the subject of active police investigation and extensive Australian media coverage. The Guardian published a news article about the case.
Microsoft Start, the company's news aggregation platform (formerly MSN News), had a licensing agreement to republish Guardian content. When the article appeared on Microsoft Start, the platform's AI auto-generated an "Insights" poll and attached it directly alongside the story. The poll asked: "What do you think is the reason behind the woman's death?"
It offered three options: murder, accident, or suicide.
The poll appeared as if it were part of the Guardian's content. There was no visible disclaimer marking it as a Microsoft AI creation. To readers, it looked like the Guardian had chosen to gamify speculation about a young woman's death.
Readers Blamed the Wrong People
The blowback was immediate - and misdirected. Readers, unable to distinguish the AI-generated poll from the Guardian's editorial content, assumed the newspaper was responsible. Comments piled up accusing the Guardian journalist who wrote the article of being tasteless and exploitative. Some demanded the writer be fired.
The journalist had nothing to do with the poll. The Guardian's newsroom had no knowledge it was being attached to their work. The poll was generated automatically by Microsoft's AI systems, applied to a licensed story that the Guardian's staff had written, reported, and published under their own editorial standards.
This attribution confusion was a direct consequence of how Microsoft Start presented the AI-generated content - integrated into the article layout without clear separation or labeling. The Guardian's brand took the hit for something Microsoft's system had done.
The Guardian's Response
Guardian Media Group CEO Anna Bateson wrote directly to Microsoft President Brad Smith. In the letter, provided in full to The Verge, Bateson called the poll "clearly an inappropriate use of [generative AI] by Microsoft on a potentially distressing public interest story, originally written and published by Guardian journalists."
Bateson pointed out that the Guardian had previously warned Microsoft about the risks of using experimental AI features alongside its licensed content. The publisher had specifically asked Microsoft not to deploy AI tools on its work without prior approval. Microsoft had done it anyway.
The letter reflected a fundamental tension in the content licensing arrangements between tech platforms and publishers. The Guardian licensed its articles to Microsoft for republication, but that license didn't extend to having AI systems modify the presentation of the content or bolt on new interactive elements that could be mistaken for the publisher's own editorial choices.
Microsoft's Response
Microsoft moved quickly once the story gained public attention. A spokesperson said the company had "deactivated Microsoft-generated polls for all news articles" and was "investigating the cause of the inappropriate content."
The decision to disable polls platform-wide, rather than just for this one article, suggested Microsoft recognized the problem wasn't limited to a single unlucky content pairing. If the AI could generate a tasteless death-speculation poll next to a story about a dead woman in Sydney, it could presumably generate comparably inappropriate polls next to any sensitive news article. The feature was broken by design, not by accident.
An Axios report confirmed that Microsoft shut off all AI-generated polls and launched an internal investigation. The speed of the takedown - polls disabled across the entire platform - indicated the company understood the reputational exposure.
Not an Isolated AI Problem at Microsoft Start
The poll incident was not the first time Microsoft Start's AI had created problems. The platform had replaced much of its human editorial staff with AI systems for selecting, curating, and augmenting news content. Microsoft had been laying off journalists and editors from its MSN news operation for years, replacing human curation with algorithmic systems.
Those systems had already produced embarrassing results. In 2020, Microsoft laid off the human editors who selected stories for MSN.com and replaced them with AI, which promptly confused mixed-race members of the pop group Little Mix with each other, illustrating a story about one singer's experience of racism with a photo of her bandmate. In other instances, the AI had selected and promoted stories with false or misleading content.
The poll feature was an extension of the same approach: automating editorial functions that require judgment, taste, and contextual awareness - qualities that an AI generating engagement polls from article text does not possess. An AI reading the words "woman found dead" and generating a death-cause guessing game was doing exactly what it was trained to do: create engagement widgets from content. It had no model for what's appropriate to ask readers to vote on and what isn't.
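Microsoft has not published details of how the poll pipeline worked, so any reconstruction is speculative. Purely as an illustration of the kind of guardrail the paragraph above describes as missing, the following minimal Python sketch shows a content-sensitivity gate sitting upstream of widget generation; every name in it (SENSITIVE_PATTERNS, should_attach_poll, the opt-in flag) is a hypothetical stand-in, not a description of Microsoft Start's actual system.

```python
import re

# Hypothetical illustration only: Microsoft Start's real pipeline is not public.
# The idea is that an engagement-widget generator should be gated by a
# content-sensitivity check before any poll is generated or attached.

# Crude keyword screen for topics where reader polls are never appropriate.
# A production system would likely use a trained classifier, but the
# principle - check the article before generating the widget - is the same.
SENSITIVE_PATTERNS = [
    r"\bfound dead\b",
    r"\bdeath\b",
    r"\bmurder(ed)?\b",
    r"\bhomicide\b",
    r"\bsuicide\b",
    r"\bkilled\b",
    r"\bpolice investigation\b",
]


def is_sensitive(article_text: str) -> bool:
    """Return True if the article touches a topic unsuitable for engagement polls."""
    text = article_text.lower()
    return any(re.search(pattern, text) for pattern in SENSITIVE_PATTERNS)


def should_attach_poll(article_text: str, publisher_opted_in: bool) -> bool:
    """Gate widget generation on both publisher consent and content sensitivity."""
    if not publisher_opted_in:
        # The Guardian had asked that AI features not be applied to its content.
        return False
    return not is_sensitive(article_text)


if __name__ == "__main__":
    article = (
        "A 21-year-old water polo coach was found dead in Sydney. "
        "Police said the circumstances of her death are under investigation."
    )
    print(should_attach_poll(article, publisher_opted_in=False))  # False: no consent
    print(should_attach_poll(article, publisher_opted_in=True))   # False: sensitive topic
```

The point of the sketch is only that such a check is cheap and sits before generation, not after publication; the incident suggests no equivalent gate, keyword-based or model-based, existed in the pipeline that produced the poll.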
The Licensing Question
The incident exposed a structural problem in how publishers relate to the platforms that redistribute their work. The Guardian licensed its content to Microsoft under terms that presumably specified how that content could be used and presented. Microsoft's AI poll wasn't covered by the license, and the Guardian had explicitly asked that AI tools not be applied to its work.
Yet it happened anyway, and when it did, the reputational damage landed on the Guardian first - before anyone understood the poll was Microsoft's doing. The publisher bore the consequences of a tech company's automated system acting on its content without permission.
Bateson's letter to Brad Smith was polite in the way corporate communications between companies with existing business relationships tend to be. But the subtext was plain: Microsoft had taken the Guardian's journalism, attached AI-generated content that was tasteless and inappropriate, presented it as if it were part of the Guardian's article, and then let the Guardian take the public beating for it.
The poll was removed. The feature was disabled. Microsoft said it would investigate. The Guardian got its letter answered. But the episode demonstrated how quickly automated AI features can damage the very publishers whose content they're supposed to complement - and how little recourse those publishers have when the damage is already done.