Public Sector Stories

13 disasters tagged #public-sector


India's Supreme Court calls AI-hallucinated citations in trial court order "misconduct"

Feb 2026

India's Supreme Court stayed a property-dispute ruling after discovering that the trial court judge had relied on non-existent, AI-generated case citations. An Andhra Pradesh junior civil judge admitted using an AI tool for the first time without verifying its outputs. The Supreme Court called the reliance on fabricated judgments "misconduct" with "a direct bearing on the integrity of the adjudicatory process." Separately, the Bombay High Court fined a litigant 50,000 rupees for filing AI-generated submissions citing the non-existent case "Jyoti vs. Elegant Associates." The Chief Justice flagged an "alarming trend" of AI-fabricated judgments, including one titled "Mercy vs Mankind."

Facepalm · by Judge
Property-dispute ruling stayed by Supreme Court; institutional concern raised over AI-generated judgments across Indian judiciary; litigant fined for separate AI-fabricated filing
ai-hallucination · legal-risk · vibe-lawyering · +1 more

Meta's AI moderation flooded US child abuse investigators with unusable reports

Feb 2026

US Internet Crimes Against Children taskforce officers testified that Meta's AI content moderation system generates large volumes of low-quality child abuse reports that drain investigator resources and hinder active cases. Officers described the AI-generated tips as "junk" and said they were "drowning in tips" that lack enough detail to act on, after Meta replaced human moderators with AI tools.

Catastrophic · by Developer
US child abuse investigations impaired nationwide; investigator resources diverted from actionable cases
automation · safety · public-sector · +1 more

Government contractor sanctioned for AI-fabricated deposition testimony

Feb 2026

The Civilian Board of Contract Appeals sanctioned a party in Louis J. Blazy v. Department of State (CBCA 7992) after discovering four non-existent legal decisions and four fabricated deposition excerpts in filings. The supposed direct quotations from witness testimony didn't appear on the cited transcript pages. When pressed, Blazy admitted the quotes were "constructed" and offered substitute testimony that didn't support the original wording. He also misrepresented existing case law by submitting real decisions as stand-ins for the fake ones, characterizing them as supporting principles they did not contain. The CBCA issued a formal admonishment and warned that continued misconduct could result in dismissal - making this one of the first federal sanctions involving AI-fabricated witness testimony, not just made-up case law.

Facepalm · by AI assistant
Federal government contract dispute; formal CBCA admonishment with threat of dismissal; new precedent for AI-fabricated testimony sanctions
vibe-lawyering · ai-hallucination · legal-risk · +1 more

Wisconsin DA sanctioned for AI-hallucinated legal citations in burglary case

Feb 2026

Kenosha County District Attorney Xavier Solis was sanctioned by Circuit Court Judge David Hughes after his office submitted court filings containing AI-generated legal citations that did not exist. The filings were part of a burglary case against two defendants, and Solis failed to disclose his use of AI - violating Kenosha County's court policy requiring disclosure and verification of AI-generated content. The charges were ultimately dismissed (primarily for lack of probable cause), but not before the bogus citations made the DA's office a cautionary tale for prosecutors nationwide. Solis acknowledged the error and promised to "review and reinforce internal practices." It's always reassuring when the person responsible for prosecuting crimes can't be bothered to read the citations in their own filings.

Facepalm · by Legal Professional
Burglary case dismissed; DA's office publicly sanctioned; national media coverage undermining public trust in prosecutorial competence
vibe-lawyering · ai-hallucination · legal-risk · +1 more

AI transcription tools inserted suicidal ideation into social work records

Feb 2026

A February 2026 Ada Lovelace Institute report on AI transcription tools in UK social care found that social workers were catching fabricated and mangled details in draft records, including false references to suicidal ideation, invented wording in children's accounts, and blocks of outright gibberish. Councils had adopted tools such as Magic Notes and Microsoft Copilot in the name of efficiency, but the frontline workers still carried full responsibility for correcting the output. In social work, a made-up sentence is not just a typo. It can follow a family through the system.

Facepalm · by AI vendors
Multiple UK councils using AI transcription in social care; risk of inaccurate case notes affecting children, families, and later decisions; workers forced into constant manual verification
automation · public-sector · safety · +1 more

Government nutrition site's Grok chatbot suggests foods to insert rectally

Feb 2026

The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance - with no guardrails or safety filters. It recommended the "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users that the scientific evidence behind the new food pyramid had been questioned by nutrition scientists.

Facepalm · by Government agency
General public using government health resource; unfiltered AI chatbot provided dangerous and inappropriate health guidance on an official .gov domain
ai-assistant · health · public-sector · +2 more

AI police report claims officer shape-shifted into a frog

Dec 2025

Heber City Police Department's Axon Draft One AI report tool transcribed background dialogue from The Princess and the Frog playing on a television into an official police report, claiming an officer had shape-shifted into a frog while conducting police activity. The incident exposed design flaws in AI report-writing tools that process all body camera audio without distinguishing between relevant police interactions and ambient background noise.

Facepalm · by AI Vendor
Viral media coverage; raised questions about AI reliability in law enforcement report writing
ai-content-generation · ai-hallucination · public-sector

AI mistook Doritos bag for a gun, teen held at gunpoint

Oct 2025

Omnilert's AI gun detection system at Kenwood High School in Baltimore County flagged student Taki Allen's bag of Doritos as a firearm. Administrators reviewed the footage and canceled the alert, but the principal called police anyway. Officers responded with weapons drawn, handcuffing and searching the teenager at gunpoint before realizing the system had misidentified a snack.

Facepalm · by Vendor
Student detained at gunpoint; district reviewing contract and safety policies; community trust hit
safety · public-sector · product-failure · +1 more

Deloitte to refund Australian government after AI-generated report

Oct 2025

Deloitte Australia agreed to partially refund a $440,000 contract after admitting its welfare compliance review for the Department of Employment and Workplace Relations contained fabricated academic citations and a fictitious judicial quote generated by Azure OpenAI GPT-4o. University of Sydney researcher Christopher Rudge found the revised report introduced even more hallucinated references than the original.

Facepalm · by Consultant
Refund issued; public-sector trust and procurement review; reputational harm
ai-content-generation · ai-hallucination · public-sector · +2 more

Canada's $18M tax chatbot gave correct answers a third of the time

Oct 2025

Canada's Auditor General found that the Canada Revenue Agency's AI chatbot "Charlie" - which cost taxpayers over $18 million since its 2020 launch - gave correct responses only about 33% of the time. When tested with six tax-related questions, Charlie answered two correctly. Other publicly available AI tools scored five out of six. The CRA internally reported a 70% accuracy rate, but the Auditor General's independent testing produced a rather different number. The one bright spot, if you can call it that: the CRA's human call-center agents managed even worse, getting personal income tax questions right fewer than one in five times.

Facepalm · by Product Manager
Millions of Canadian taxpayers potentially received incorrect tax guidance; $18M+ in taxpayer funds spent on a 33%-accurate chatbot
ai-assistant · customer-disservice · public-sector · +1 more

GAO dismisses 15 AI-hallucinated bid protests as abuse of process

Sep 2025

The Government Accountability Office dismissed three consolidated protests filed by Oready, LLC - the culmination of 15 pro se bid protests filed over eight months, all riddled with non-existent citations, fabricated decisions, and hallmarks of unverified generative AI output. The GAO labeled Oready's pattern as "Gen-AI Misuse" and dismissed the protests as an abuse of the bid protest process, marking the GAO's first published dismissal for AI-driven abuse. Prior warnings issued in June and August 2025 were ignored. The fallout also prompted the GAO's January 2026 decision in Bramstedt Surgical to devote several pages to cautioning against AI-hallucinated citations, signaling that federal procurement tribunals are done issuing gentle reminders.

Facepalm · by AI assistant
First published GAO dismissal for generative AI misuse; 15 protests wasted federal procurement resources over eight months; precedent-setting for AI citation standards in government contracting
vibe-lawyering · ai-hallucination · legal-risk · +1 more

California's failed bar exam included AI-drafted questions

Apr 2025

The State Bar of California disclosed in April 2025 that 23 scored multiple-choice questions on its already troubled February bar exam were developed with AI assistance by its psychometric vendor, ACS Ventures. Test-takers had already reported crashes, lag, copy-paste failures, and lost answers. Then the bar admitted that some questions in this licensing exam for future lawyers had been drafted with AI, reviewed by the same outside vendor, and used anyway. The bar asked the California Supreme Court for score relief, while legal academics described the admission as staggering.

Catastrophic · by Public agency
Thousands of California bar applicants affected; score adjustments sought; confidence in the licensing exam damaged; millions in follow-on costs and vendor fallout
ai-content-generation · legal-risk · public-sector · +1 more

NYC’s official AI bot told businesses to break laws

Mar 2024

New York City launched a Microsoft-powered AI chatbot called MyCity in October 2023 to help small business owners navigate regulations. A March 2024 investigation by The Markup found the bot was routinely advising businesses to break the law - telling employers they could pocket workers' tips, landlords they could discriminate against housing voucher holders, and bosses they could fire whistleblowers. Mayor Eric Adams acknowledged the errors but refused to take the chatbot offline, calling AI a "once-in-a-generation opportunity." NYU professor Julia Stoyanovich called the city's approach "reckless and irresponsible."

Facepalm · by Executive
City guidance channel distributed illegal advice; public backlash
ai-hallucination · automation · legal-risk · +2 more