One of Zimbabwe’s most prominent legal and political figures, Welshman Ncube, is facing scrutiny after admitting to submitting a legal brief to the Supreme Court that contained fabricated case law generated by artificial intelligence.
In the case of Pulserate Investments (Pvt) Ltd v Andrew Zuze and Others [SC202/25], Ncube acknowledged including 12 non-existent or misapplied legal citations in documents filed on behalf of the appellant. The fake references were identified and flagged by the legal team representing the first respondent, led by Advocate Thabani Mpofu.
In a letter to the Registrar of the Supreme Court, Ncube issued a formal apology and called the incident “a catastrophic lapse in professional judgment.” He explained that a graduate researcher working under his supervision had sourced the references using AI tools without verifying their authenticity, and that he himself had failed to cross-check the material before submission.
“There is no excuse that can justify such an error. The integrity of all legal proceedings depends absolutely on the accuracy of authorities cited,” Ncube wrote.
The incident has ignited debate within Zimbabwe’s legal circles and raised broader concerns over the uncritical use of AI tools in high-stakes legal settings. While AI can assist in legal research, experts emphasize that it cannot substitute for the rigorous verification required of legal professionals.
Ncube, a veteran lawyer and political figure, maintained that the mistake was unintentional and not an effort to mislead the court. He also extended an apology to opposing counsel, acknowledging the burden placed on them to verify the fictitious case law.
The controversy mirrors a similar case in South Africa, where a junior advocate who relied on AI-generated legal arguments is now under investigation by the Legal Practice Council. The case, which involves a disputed licence connected to the sale of Rappa Resources to Northbound Processing, has emerged as one of South Africa’s most high-profile legal episodes involving the misuse of generative AI.
Both incidents serve as cautionary examples of the risks posed by over-reliance on AI in legal practice, underscoring the need for human oversight and professional responsibility in an evolving digital landscape.