Image Generation Stories

4 disasters tagged #image-generation


Getty’s UK suit leaves Stable Diffusion mostly intact

Nov 2025

The UK High Court ruled that Stability AI's Stable Diffusion model is not an "infringing copy" of copyrighted works under English law, dismissing Getty Images' core copyright and database right claims in the first UK judgment on AI training. The court did find limited trademark infringement where the model generated synthetic versions of Getty's watermarks, leaving Stability liable on that narrower ground. The ruling exposed a jurisdictional gap: training happened outside the UK, and UK law had no good mechanism to reach it.

Facepalm by: AI Vendor
Mixed ruling fuels ongoing lawsuits, exposes Stability AI to injunctions over watermarked outputs, and leaves copyright liability unanswered globally.
Tags: image-generation, legal-risk, brand-damage

Warner Bros. says Midjourney ripped its DC art

Sep 2025

Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.

Facepalm by: AI Vendor
Major studio litigation threatens Midjourney with statutory damages and potential model shutdowns across entertainment IP.
Tags: image-generation, legal-risk, brand-damage

AI-generated images and claims muddied Air India crash coverage

Jun 2025

After Air India Flight 171 crashed in Ahmedabad on June 12, 2025, killing 275 people, AI-generated images of the crash spread across social media platforms. One widely shared synthetic image depicted the Boeing 787 broken in half across a building, but contained physically impossible details that experts identified as AI-generated. Fake victim photos, fabricated reports, and fraudulent fundraising campaigns followed. Google's AI Overview compounded the problem by incorrectly identifying the crashed aircraft as an Airbus rather than Boeing. Mashable reported the AI-generated content was convincing enough to confuse even aviation professionals.

Facepalm by: Social platforms
Public misinformation; platform moderation challenges.
Tags: ai-hallucination, image-generation, platform-policy

Gemini paused people images after historical inaccuracies

Feb 2024

Google paused Gemini's image generation of people on February 22, 2024, after users discovered the tool was producing historically inaccurate depictions, including racially diverse World War II German soldiers, Black female popes, and multiethnic U.S. Founding Fathers. The overcorrection stemmed from diversity tuning meant to counter training-data biases, but the model failed to recognize when those adjustments were inappropriate for specific historical prompts. CEO Sundar Pichai called the outputs "completely unacceptable," and Google SVP Prabhakar Raghavan later published a blog post acknowledging the model had "overcompensated" and been "over-conservative."

Facepalm by: AI Product
Feature paused; trust hit; policy and model adjustments.
Tags: ai-hallucination, image-generation, platform-policy, +2 more