South Africa has officially withdrawn its draft national artificial intelligence policy after an investigation revealed that several academic citations within the document were entirely fabricated by AI.
The decision, announced by Communications Minister Solly Malatsi, follows the discovery that the policy—intended to guide the nation’s technological future—was compromised by “hallucinations,” a phenomenon where generative AI creates plausible but non-existent information.
The Integrity Crisis
The draft policy was originally designed to position South Africa as a regional leader in AI innovation. It proposed a comprehensive regulatory framework, including:
– The establishment of a national AI commission.
– The creation of an AI ethics board.
– The formation of an AI regulatory authority.
– Financial incentives, such as tax breaks and grants, to encourage private-sector investment in AI infrastructure.
However, the document’s credibility collapsed when journalists from News24 discovered that at least six of the 67 academic citations used to support the policy’s arguments did not exist. While the journals referenced—such as the South African Journal of Philosophy and AI & Society—are legitimate, the specific articles cited were confirmed to be fabrications by the journals’ editors.
Why This Matters: The “Hallucination” Problem
This incident is a high-profile example of a growing challenge in the age of Large Language Models (LLMs). Tools like ChatGPT and Google Gemini are designed to predict the most statistically likely next word in a sequence, not to verify factual truth. When these models encounter gaps in their training data, they often “fill in the blanks” with authoritative-sounding but entirely fake information.
Nor is this an isolated case. The implications for academia and governance are significant:
– Rising Error Rates: A study in the journal Nature noted a sharp increase in AI-generated errors, with the percentage of academic papers containing hallucinated citations jumping from 0.3% in 2024 to over 2.5% in 2025.
– Scale of Impact: This translates to an estimated 110,000 papers published in 2025 containing invalid references.
– Institutional Risk: When policymakers rely on unverified AI outputs, they risk building national laws on a foundation of misinformation.
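The two statistics above are mutually consistent, which a rough back-of-the-envelope calculation shows (this assumes the 2.5% rate applies uniformly to total 2025 output, an assumption not stated explicitly in the figures reported here):

```python
# Illustrative arithmetic only: check that the reported 2.5% error rate
# and the ~110,000 affected papers imply a plausible total publication volume.
affected_papers = 110_000
rate_2025 = 0.025  # 2.5% of papers with hallucinated citations (2025)
rate_2024 = 0.003  # 0.3% baseline rate (2024)

# Implied total number of papers published in 2025
implied_total = affected_papers / rate_2025
print(f"Implied papers published in 2025: {implied_total:,.0f}")

# At the 2024 rate, the same volume would have produced far fewer errors
affected_at_2024_rate = implied_total * rate_2024
print(f"Affected papers at the 0.3% rate: {affected_at_2024_rate:,.0f}")
```

The implied total of roughly 4.4 million papers is in the general range of annual global scholarly output, so the two figures hang together; the more striking point is the order-of-magnitude jump in affected papers that the rate increase represents.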
Moving Forward
Minister Malatsi emphasized that this was not merely a technical glitch but a fundamental failure of oversight.
“This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy,” Malatsi stated on X (formerly Twitter).
The Minister has indicated that there will be consequences for those responsible for the drafting error and has stressed that the policy will undergo a rigorous revision process before being reissued for public comment.
Conclusion
The withdrawal of South Africa’s AI policy serves as a stark warning for governments and institutions worldwide: while AI can accelerate the drafting process, it cannot replace the necessity of rigorous human verification to ensure accuracy and institutional trust.
