Metacritic briefly carried an AI-written Resident Evil Requiem review

In February 2026, Metacritic briefly listed a positive Resident Evil Requiem review from VideoGamer under the byline Brian Merrygold, a critic whose profile image and online footprint quickly drew suspicion. Readers and games writers flagged the review as AI-generated slop, Metacritic removed it, and the aggregator said outlets caught using AI-written reviews would no longer be accepted. The incident was smaller than a full newsroom collapse, but it landed on a platform whose entire value proposition is that the reviews it aggregates come from real critics rather than synthetic enthusiasm engines.

Incident Details

Severity: Facepalm
Company: Metacritic
Perpetrator: Review aggregation / editorial
Incident Date: February 2026
Blast Radius: Fake review reached Metacritic; outlet credibility damaged; aggregator tightened source policy for review partners

The Whole Point of Metacritic Is Supposed to Be Filtering

People can argue forever about whether Metacritic is healthy for criticism, healthy for game development, or healthy for anyone's blood pressure. What nobody is supposed to argue about is whether the reviews feeding the score are written by actual critics. That is the minimum viable promise of the entire setup.

In late February 2026, that minimum promise failed in a way almost too on-brand for the moment. Metacritic briefly carried a positive review of Resident Evil Requiem from VideoGamer under the byline Brian Merrygold. Readers quickly concluded that the author likely did not exist in any meaningful journalistic sense and that the review itself looked machine-generated. Metacritic later removed it and said publications using AI-generated reviews would be barred from participation.

That response was correct. It was also the sort of response that would have been more useful before the review was already live on the largest scoreboard in games media.

Why People Clocked It So Fast

The review did not collapse because Metacritic performed an elegant internal audit. It collapsed because readers and other writers noticed that something was off. The byline was suspicious. The profile image had the smooth, synthetic quality people now associate with AI-generated headshots. The author's online footprint did not look like the footprint of a working critic. And the prose itself had the flavor of generated enthusiasm: loud, vague, and overconfident in a way that sounded less like criticism than like a model trying to synthesize what game praise is supposed to sound like.

This is one of the funnier parts of AI slop in public media. The systems are often deployed to save time, scale output, or fake editorial breadth. Yet the resulting copy frequently contains its own telltale texture. It looks polished from a distance and strangely dead up close. Human readers, especially readers who spend too much time online, have become fairly good at spotting this texture even when publishers still pretend not to notice it.

In the Resident Evil Requiem case, the detection work happened fast enough that the review became a platform embarrassment rather than a durable rewrite of the game's score. That does not make the underlying failure small. It only means the audience caught the contamination before it could set in.

Why This Matters Beyond One Review

The easiest dismissive take is that a single fake review does not matter much on a site that aggregates dozens of them. That misses how these systems work. Aggregators are trust concentrators. They gather dispersed criticism, flatten it into a score, and invite users to treat that score as a reliable shorthand for consensus. If the input stream includes machine-generated filler from suspicious outlets, then the number stops representing criticism and starts representing whoever is best at gaming the intake process.
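To make the flattening concrete, here is a toy sketch of the arithmetic. Everything in it is assumed for illustration, the scores included; Metacritic's real formula applies undisclosed per-outlet weights, which this plain average ignores.

```python
# Toy illustration of how a single synthetic review moves an aggregate.
# The scores and the unweighted mean are assumptions for illustration;
# Metacritic's actual formula uses undisclosed per-outlet weights.

def metascore(scores: list[int]) -> float:
    """Average the critic scores the way a simple scoreboard might."""
    return round(sum(scores) / len(scores), 1)

legitimate = [88, 84, 90, 79, 86, 82]    # hypothetical real reviews
contaminated = legitimate + [95]         # one synthetic rave slips in

print(metascore(legitimate))    # 84.8
print(metascore(contaminated))  # 86.3
```

A point and a half is not dramatic on its own, but it is a point and a half the critics never awarded, sitting inside a number that marketing and launch coverage treat as consensus.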

That is already a bad dynamic when the offending outlet is real but careless. It gets worse when the byline itself appears synthetic. At that point the review is not merely low quality. It is counterfeit criticism.

Games media is especially vulnerable here because review embargoes and launch-day traffic create strong incentives to publish quickly and at scale. A publication that no longer has enough real critics, or no longer wants to pay them, can try to fill the gap with generated copy. An aggregator built to ingest review links and scores can wind up laundering that fake output into legitimacy unless it performs actual source governance rather than assuming partner outlets are policing themselves.

The Brian Merrygold Problem

A fake-sounding review by itself is one thing. A fake-sounding review attached to a fake-sounding critic is worse because it tells you the whole pipeline may be synthetic. A reader cannot evaluate a critic's judgment, taste, or track record if the critic is an invented shell with a generated headshot and no meaningful history.

That matters because criticism is not just information. It is authorship. Even people who only look at review scores benefit from the assumption that someone with recognizable taste and some experience actually played the game and made a case for the number. Strip away the real critic and you are left with a performance of criticism rather than criticism.

That performance is enough for platforms built around frictionless scoring. It is not enough for anyone who still thinks reviews should be tethered to a person who can be wrong on purpose rather than a machine that is wrong by construction.

Metacritic's Policy Response

After the incident surfaced, Metacritic removed the review and said it would no longer accept outlets found to be using AI-generated reviews. That is the obvious policy response, and it is better than shrugging. It also quietly concedes that the aggregator's prior assumptions were no longer valid.

For years, review aggregation worked on a soft trust model. If an outlet looked like a real publication and had the expected review format, the platform could take the output as criticism. AI content breaks that shortcut. A site can still have a logo, a CMS, a publication history, and a review template while quietly replacing the expensive part, the human critic, with synthetic text and synthetic bylines.

Once that becomes technically and economically easy, an aggregator cannot rely on appearance alone. It needs active standards and some willingness to police inputs. That does not mean manual line-by-line vetting of every review. It does mean some basic skepticism about suspicious bylines, suspicious authorship patterns, and copy that reads like a generic praise engine.
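What that baseline skepticism might look like, reduced to a sketch: the fields, names, and thresholds below are all hypothetical, and nothing suggests Metacritic works this way. The point is only that even crude authorship checks catch the obvious shells before a review reaches the score.

```python
# A minimal sketch of the intake skepticism described above. Every field,
# name, and threshold here is hypothetical; real source governance is
# editorial judgment supported by tooling, not a few-line filter.

from dataclasses import dataclass

@dataclass
class Submission:
    byline: str
    prior_reviews: int     # reviews already on record for this byline
    profile_age_days: int  # how long the author profile has existed

def needs_human_vetting(sub: Submission) -> bool:
    """Flag bylines whose footprint looks too thin to trust automatically."""
    return sub.prior_reviews == 0 or sub.profile_age_days < 30

# A byline with no track record and a days-old profile gets routed to
# an editor instead of straight into the aggregate.
print(needs_human_vetting(Submission("Brian Merrygold", 0, 4)))  # True
```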

The Broader Media Pattern

This story also fits a larger pattern the site has already had plenty of reason to document. Media failures around AI do not always arrive as giant newsroom scandals. Sometimes they show up in the corners where everyone assumes less is at stake: affiliate copy, side verticals, review sites, recommendation lists, freelancer workflows, or automated summaries. Those spaces are treated as lower-status production zones, which means they often get less scrutiny right when generative tools make fakery cheapest.

The Resident Evil Requiem review looks minor compared with fabricated reporting in national publications or financial explainers with incorrect advice. It is still useful because it shows how quickly AI slop can slip into a system built to convert articles into reputational signals. One fake review does not stay one fake review once Metacritic ingests it. It becomes part of a score, part of a launch narrative, and part of the market-facing reputation of the game and the outlet.

Why Readers Detected It Faster Than the System

One recurring feature of these incidents is that the public often spots the fake faster than the institution hosting it. Readers do not need a formal standards policy to know when a byline looks invented or when the prose has the lifeless swagger of machine output. Institutions, by contrast, often need the full embarrassment cycle to complete before they admit that the obvious problem was obvious.

That gap is not accidental. Readers are evaluating authenticity because they are suspicious. Platforms are evaluating throughput because they are operational. AI slop thrives in the distance between those incentives.

Metacritic eventually did the right thing, and the fake review came down. The bigger takeaway is that a platform built to aggregate criticism had to be reminded, in public, that criticism is supposed to come from critics. In 2026 that apparently counts as a policy update.
