South Africa withdrew its draft AI policy after finding fictitious sources in the references


South Africa's Department of Communications and Digital Technologies withdrew its Draft National Artificial Intelligence Policy after officials confirmed the reference list contained fictitious sources. Communications Minister Solly Malatsi said the most plausible explanation was unverified AI-generated citations and called the lapse serious enough to compromise the draft's integrity and credibility. This is vibe-lawyering wearing a government badge: an official policy about regulating AI tripped over the exact hallucination problem that every first-year ChatGPT cautionary slide already warned about.

Incident Details

Severity: Facepalm
Company: South Africa Department of Communications and Digital Technologies
Perpetrator: Policy drafting team
Incident Date:
Blast Radius: National AI policy withdrawn from public consultation; government credibility damaged; department ordered to redo quality assurance and manage consequences for the drafting and review process.

South Africa's Draft National Artificial Intelligence Policy was supposed to help the country define how AI should be governed. It was gazetted on April 10, 2026, opened for public comment, and positioned as part of a broader national plan for AI adoption, safeguards, sector strategy, oversight bodies, and long-term digital policy.

Then the references started collapsing.

On April 26, Communications and Digital Technologies Minister Solly Malatsi withdrew the draft after an internal process confirmed that the document's reference list contained fictitious sources. SAnews, the government news service, quoted Malatsi saying the failure compromised the integrity and credibility of the draft. More painfully, he said the most plausible explanation was that AI-generated citations had been included without proper verification.

That is a rough sentence for any document. It is a particularly rough sentence for a national AI policy.

AI policy, brought down by AI citations

The draft was not a random memo. It was a government policy document approved for public consultation after Cabinet consideration. The Government Gazette notice invited comments for 60 days and directed submissions to the Department of Communications and Digital Technologies. The policy itself discussed AI governance, institutional frameworks, sectoral approaches, safeguards, capacity building, and the need for human-centered deployment.

The document was, in other words, trying to tell the country how to handle AI responsibly.

Then it had to be pulled because its own source list apparently did not survive basic verification. That is not just a clerical error. References are part of the evidence layer of a policy document. They tell readers what the authors relied on, what claims can be traced, and whether the policy is grounded in real scholarship or dressed-up autocomplete.

In court filings, hallucinated citations waste judicial time and can trigger sanctions. In journalism, fabricated sources destroy trust. In government policy, they do something equally corrosive: they make the public wonder whether the people designing AI governance understand the most famous failure mode of AI tools.

Why this is vibe-lawyering

The vibe-lawyering tag started with lawyers submitting fake cases to courts, but the broader pathology is not unique to courtrooms. The pattern is: use AI to produce formal, authority-bearing work; fail to verify the authorities; publish or file the result; then watch the process collapse when the references are checked by someone else.

That is exactly what happened here, except the filing was a national policy document rather than a legal brief.

A government policy does not need Bluebook citations, but it does need provenance. If the draft says a claim rests on a paper, report, treaty, or institutional framework, that source should exist. The citation should point to the right thing. The authors should have read it. These are not advanced scholarly methods. This is the "look before walking into traffic" tier of document review.

AI makes this failure easier because hallucinated citations often look convincing. They use plausible titles, plausible institutions, plausible dates, and plausible phrasing. A rushed reviewer may see the shape of legitimacy and move on. That is how fake authority slips into professional work: not because the fake reference is genius, but because the surrounding process is lazy, overloaded, or too trusting.

Public-sector blast radius

Malatsi's withdrawal statement was unusually direct. He said the department had not delivered the standard expected of an institution leading South Africa's digital policy environment. He also said the lapse proved why vigilant human oversight over AI use is critical and that there would be consequence management for those responsible for drafting and quality assurance.

That matters. The harm here is not that one footnote was embarrassing. The harm is that an entire public consultation process had to be interrupted because the document's evidentiary foundation was no longer trustworthy. Stakeholders who were preparing comments on the draft now have to wait for a replacement. The department has to redo work that should have been checked before publication. The public gets one more reason to distrust official AI competence at the exact moment the state is asking to regulate AI.

The irony is thick enough to stand a spoon in. A policy about AI governance became a live demonstration of why AI governance needs verification, accountability, and human review.

The policy context

The withdrawn draft was ambitious. It talked about aligning AI with constitutional values, improving national competitiveness, addressing social inequality, creating institutional oversight, and developing sector-specific roadmaps. Semafor reported that it also contemplated watchdog-style structures for AI governance. Those are serious policy questions. They deserve a serious process.

That does not mean every line of a draft policy needs to be perfect. Public consultation exists because policy improves through review. But there is a difference between "stakeholders may disagree with the proposed governance model" and "some of the references appear to be imaginary." The first is politics. The second is a quality-control failure.

AI can be useful in policy work. It can summarize submissions, compare draft language, find inconsistencies, and help researchers organize source material. But it cannot be treated as a citation engine without verification. If a human would be expected to open the source, read the source, and confirm the source supports the claim, the same rule applies when the source was suggested by a model.

Especially then.

How to avoid doing this again

The fix is boring, which is how you know it is probably real.

Build a source-checking step into the publication process. Require every cited work to be opened, archived, and matched to the claim it supports. Separate drafting assistance from source validation. Make reviewers responsible for citations, not for vibes. Use citation-management tools, DOI lookups, library databases, and institutional repositories. If a source cannot be found, it does not go in the document.
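None of this needs exotic tooling. As a minimal sketch of what a pre-publication citation gate could look like, here is a Python check against the Crossref REST API (a real, public lookup service at api.crossref.org). The Reference record, the 0.8 title-similarity threshold, and the sample fabricated DOI are illustrative assumptions for this sketch, not the department's actual workflow.

```python
# Minimal citation-gate sketch: every reference with a DOI must resolve
# in Crossref, and the registered title must roughly match the title the
# draft claims. Anything that fails is blocked from publication and
# routed to a human. The Reference structure and the 0.8 threshold are
# illustrative assumptions.
from dataclasses import dataclass
from difflib import SequenceMatcher

import requests

CROSSREF = "https://api.crossref.org/works/"


@dataclass
class Reference:
    claimed_title: str
    doi: str | None  # references without a DOI need manual review


def verify(ref: Reference) -> tuple[bool, str]:
    """Return (ok, reason). Anything not ok blocks publication."""
    if ref.doi is None:
        return False, "no DOI: route to a human for manual lookup"
    resp = requests.get(CROSSREF + ref.doi, timeout=10)
    if resp.status_code != 200:
        # Crossref returns 404 for DOIs it has never registered,
        # which is exactly what a hallucinated citation looks like.
        return False, f"DOI not registered (HTTP {resp.status_code})"
    titles = resp.json()["message"].get("title", [])
    registered = titles[0] if titles else ""
    similarity = SequenceMatcher(
        None, ref.claimed_title.lower(), registered.lower()
    ).ratio()
    if similarity < 0.8:  # arbitrary threshold; tune for your corpus
        return False, f"title mismatch: Crossref says {registered!r}"
    return True, "registered and title matches"


if __name__ == "__main__":
    # A plausible-looking but fabricated citation (hypothetical DOI):
    refs = [Reference("AI Governance Frameworks in Emerging Economies",
                      "10.9999/fake.2025.0412")]
    for ref in refs:
        ok, reason = verify(ref)
        print(("PASS" if ok else "FAIL"), ref.claimed_title, "->", reason)
```

Run against a fabricated reference, the check fails at the HTTP step, which is the point: a hallucinated citation tends to have a DOI that no registrar has ever heard of, and a ten-line lookup catches it before a minister has to.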

Also: do not let the person who generated the draft be the only person who checks the references. That is not an AI-specific rule. That is just how review works when the stakes are higher than a blog post.

The department now gets a chance to produce a clean version. It should. South Africa does need serious AI policy, and it is better that the fake references were found during public consultation than after adoption. But the headstone still earns its place because the failure was so avoidable. The country asked for an AI policy and got an object lesson in AI hallucination instead.
