AI-generated images and claims muddied Air India crash coverage
After Air India Flight 171 crashed in Ahmedabad on June 12, 2025, killing 275 people, AI-generated images of the crash spread across social media platforms. One widely shared synthetic image depicted the Boeing 787 broken in half across a building, but contained physically impossible details that experts identified as AI-generated. Fake victim photos, fabricated reports, and fraudulent fundraising campaigns followed. Google's AI Overview compounded the problem by incorrectly identifying the crashed aircraft as an Airbus rather than Boeing. Mashable reported the AI-generated content was convincing enough to confuse even aviation professionals.
Air India Flight 171 departed Ahmedabad for London Gatwick on June 12, 2025. The Boeing 787-8 Dreamliner crashed shortly after takeoff, hitting a building used as accommodation for doctors at a nearby hospital. Of the 242 passengers and crew, one person survived. At least 34 people on the ground also died, bringing the total to 275. It was the deadliest aviation disaster in India in decades.
Within hours, the internet had its own version of the crash - one generated by AI.
The fake images
An image that appeared to show the Air India plane hitting a building began circulating on Facebook, X, and YouTube almost immediately after news of the crash broke. The image showed an aircraft in Air India livery, broken in two across the corner of a building complex, engulfed in flames. It was shared with captions like "No words" and attracted tens of thousands of engagements before fact-checkers flagged it.
The image was not a photograph. Emmanuelle Saliba, Chief Investigations Officer at GetReal Labs, a company that uses forensic techniques to identify synthetic content, examined it and said: "This image is obviously generated using an AI. Structurally the plane does not make sense, the way that is supposedly broken is not logical. Looks like it was broken in half and then stacked. And the building it crashed into is completely nondescript, another sign of generative AI."
Gina Neff, Professor of Responsible AI at Queen Mary University of London, identified further tells: "The front section of the fuselage appears to be floating, defying gravity. The portside wing and engine somehow is folded underneath the body, again, defying physics." She noted the building in the image had four levels on one side and six on the other around a corner, with windows and balconies that "look completely identical and yet somehow off."
Full Fact compared the viral image to actual photographs from the crash site, which looked entirely different. The fake image's aircraft also lacked specific details present on Air India's real Boeing 787-8 fleet, such as red outlines on cabin windows and a symbol on the engine.
None of those details mattered at the speed social media moves. By the time experts had analyzed the image and published their findings, it had already been shared widely enough to shape how many people visualized the disaster.
Beyond one image
The synthetic crash image was the most visible piece of AI-generated content, but it was not the only one. Following the crash, a range of AI-generated material appeared across platforms:
- Fake victim photos. AI-generated images of supposed crash victims circulated on social media, compounding the grief of actual families. Nikkei Asia reported on Kuldeep Bhatt, a teacher from Rajasthan who lost his cousin Komi Vyas in the crash, and whose mourning was made worse by the circulation of fabricated images of supposed victims.
- Fraudulent fundraising. Scammers used the disaster to set up fake fundraising campaigns, some of which incorporated AI-generated content to appear legitimate.
- Fabricated reports. AI-generated text posts appeared claiming to contain insider information about the crash cause, spreading conspiracy theories into a vacuum that official investigators had not yet filled.
- Miscaptioned real footage. Reuters fact-checked a viral video that was misleadingly captioned as showing crew boarding the Air India flight before the crash, when it actually depicted an unrelated event.
The Times of India reported that fraudsters were specifically exploiting the vulnerability of grieving families and a concerned public, using AI-generated content to make their scams appear more plausible.
When the AI tools got it wrong too
The misinformation was not limited to bad actors crafting fake content. Google's own AI Overview - the AI-generated summary that appears at the top of search results - incorrectly identified the crashed aircraft as an Airbus rather than a Boeing. The flight was a Boeing 787-8 Dreamliner. Getting the aircraft manufacturer wrong in an AI-generated summary about an aviation disaster is the kind of error that feeds conspiracy theories and undermines trust in the basic facts of the incident.
The AI Incident Database cataloged the Air India 171 misinformation as Incident 1125, flagging it as a related variant of the broader pattern where Google AI Overviews misstated facts about the crash.
The information vacuum
An official information vacuum worsened the problem. Air India's CEO, Campbell Wilson, released a video the evening of the crash saying it was "not a time for speculation" and pledging to share facts. Civil Aviation Minister Ram Mohan Naidu announced a control room and helplines. Regulators ordered inspections of all Boeing 787 aircraft, and Air India temporarily reduced flights.
But after the first week, official updates slowed. Air India reportedly sent a memo instructing employees not to speak with journalists. The ministry did not hold regular press briefings. The preliminary report, released on July 12, stated that both engines shut down after fuel control switches moved to the "CUTOFF" position but raised more questions than it answered. When official channels go quiet, unofficial content fills the space - and in 2025, much of that unofficial content is AI-generated.
Mashable India reported that the AI-generated misinformation was sophisticated enough to confuse "even expert aviation professionals." That claim is hard to verify precisely, but it captures a real problem: the quality of AI-generated images has reached a point where casual inspection by subject matter experts is no longer a reliable filter. The tells that Saliba and Neff identified - impossible physics, inconsistent building architecture, missing aircraft details - require careful examination. At social media scroll speed, careful examination does not happen.
A pattern, not an anomaly
AI-generated misinformation following disasters is not new, but the Air India crash illustrated how the pattern has matured. During earlier crises, AI-generated content was typically easy to spot - obvious artifacts, distorted text, the uncanny-valley quality of early image generators. The Air India crash images were far more polished. The aircraft livery was roughly correct. The scene composition was plausible at a glance. The emotional content - a plane broken apart, flames, destruction - matched what viewers expected to see after learning about the crash.
The combination of grief, urgency, algorithmic amplification, and an official information gap creates ideal conditions for synthetic content to spread. People searching for information about a disaster are primed to engage with visual content. Social media algorithms prioritize engagement. AI-generated content that triggers emotional reactions - shock, grief, anger - gets amplified. By the time fact-checkers identify and flag the content, it has already shaped perceptions for millions of viewers.
For the families of the 275 people killed in the crash, this meant navigating grief alongside a stream of fabricated images, fake victim profiles, and fraudulent campaigns - all generated at machine speed and distributed by platforms that treat engagement as the primary sorting signal, regardless of whether the engaging content is real.
Platform response
Social media platforms did eventually remove or flag much of the AI-generated content. Fact-checking organizations including Full Fact, Reuters, and India Today published analyses identifying the synthetic images. But the response was reactive, not preventive. The fake images spread during the hours and days when public attention was highest, and the corrections arrived after the initial engagement wave had passed.
The Air India crash was the 1,125th incident cataloged in the AI Incident Database. It will not be the last disaster where AI-generated content complicates the information environment before anyone has time to verify what is real. The tools to generate fake crash images are widely available, the motivation to use them (engagement, scams, ideology) is constant, and the conditions that allow them to spread - information vacuums during unfolding crises - are inherent to how disasters work.