Image Generation Stories
4 disasters tagged #image-generation
Getty’s UK suit leaves Stable Diffusion mostly intact
The UK High Court ruled that Stability AI's Stable Diffusion model is not an "infringing copy" of copyrighted works under English law, dismissing Getty Images' core copyright and database right claims in the first UK judgment on AI training. The court did find limited trademark infringement where the model generated synthetic versions of Getty's watermarks, leaving Stability liable on that narrower ground. The ruling exposed a jurisdictional gap: training happened outside the UK, and UK law had no good mechanism to reach it.
Warner Bros. says Midjourney ripped its DC art
Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchise characters it allegedly trained on without a license. The studio is seeking statutory damages of up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its works from the company's models.
AI-generated images and claims muddied Air India crash coverage
After Air India Flight 171 crashed in Ahmedabad on June 12, 2025, killing 275 people, AI-generated images of the crash spread across social media platforms. One widely shared synthetic image depicted the Boeing 787 broken in half across a building, but it contained physically impossible details that allowed experts to identify it as AI-generated. Fake victim photos, fabricated reports, and fraudulent fundraising campaigns followed. Google's AI Overview compounded the problem by incorrectly identifying the crashed aircraft as an Airbus rather than a Boeing. Mashable reported the AI-generated content was convincing enough to confuse even aviation professionals.
Google paused Gemini's people images after historical inaccuracies
Google paused Gemini's image generation of people on February 22, 2024, after users discovered the tool was producing historically inaccurate depictions: racially diverse World War II German soldiers, Black female popes, and multiethnic U.S. Founding Fathers. The overcorrection stemmed from diversity tuning meant to counter training-data biases, but the model failed to recognize when those adjustments were inappropriate for historically specific prompts. CEO Sundar Pichai called the outputs "completely unacceptable." Google SVP Prabhakar Raghavan later published a blog post acknowledging the model had "overcompensated" and been "over-conservative."
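To make the failure mode concrete, here is a minimal sketch of how an over-eager diversity-tuning layer can break: it assumes a hypothetical prompt-rewriting step that appends a diversity modifier before generation. All names (`rewrite_prompt`, `HISTORICAL_MARKERS`) are invented for illustration; this is not Google's actual pipeline, just one plausible shape of the bug.

```python
# Hypothetical illustration of an over-applied diversity-tuning layer.
# Not Google's implementation; names and logic are invented.

HISTORICAL_MARKERS = (
    "1943", "world war", "founding fathers", "medieval", "pope",
)

def rewrite_prompt(prompt: str) -> str:
    """Naive version: unconditionally inject a diversity modifier."""
    return f"{prompt}, depicting a diverse range of ethnicities and genders"

def rewrite_prompt_guarded(prompt: str) -> str:
    """Guarded version: withhold the modifier when the prompt is
    anchored to a specific historical context."""
    if any(marker in prompt.lower() for marker in HISTORICAL_MARKERS):
        return prompt  # historically specific: leave the prompt alone
    return rewrite_prompt(prompt)

# The unconditional rewrite produces the historically inaccurate request:
print(rewrite_prompt("a German soldier in 1943"))
# The guarded rewrite passes the prompt through unchanged:
print(rewrite_prompt_guarded("a German soldier in 1943"))
```

The point of the sketch is that the rewrite itself is not the bug; the bug is applying it without checking whether the prompt pins down a particular time, place, or group.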