When AI 'invents' law, courts face perils of non-existent judgments
OPINION

By Chinmay Chaudhuri

Published February 23, 2026

As lawyers lean increasingly on generative AI, fabricated judgments are infiltrating courtrooms, forcing judges to fact-check pleadings and exposing grave risks to justice and trust

New Delhi: As governments and technology leaders debate how to harness AI while containing its risks, India’s judiciary has issued an urgent warning. Supreme Court judges have expressed alarm at a troubling development in courtrooms: advocates using generative AI tools to draft petitions and, in the process, citing fictional or non-existent judgments.

Chief Justice of India Surya Kant recently observed that the court was “alarmed to learn that lawyers are using AI to draft petitions”, noting that some pleadings quoted paragraphs from judgments that simply do not exist. Justice B V Nagarathna referred to an instance in which a non-existent case titled ‘Mercy vs Mankind’ was cited. These were not isolated errors; the judges have detected several similar instances. What began as anecdote now appears to be a pattern.

At one level, this reflects the rapid adoption of AI tools by lawyers. Generative platforms are increasingly used to draft contracts, summarize case law, and prepare writ petitions and bail applications. In an overburdened legal system where advocates juggle multiple matters and courts confront heavy dockets, the appeal of a tool that can process vast volumes of text in seconds is obvious.

Yet alongside efficiency gains, a systemic danger has emerged: AI “hallucination” in legal practice. It refers to instances in which a generative model produces information that appears authoritative but is entirely fabricated. In law, that may mean inventing case citations, misattributing principles, or composing passages that resemble authentic judgments but have no basis in fact.

These systems do not “understand” the law as a trained advocate does. They predict text based on patterns in data. When prompted for a precedent, they may generate a citation complete with plausible party names, a year, and a confident articulation of a legal principle. To the hurried or untrained eye, it can appear indistinguishable from a genuine report.

The accuracy of citations is not ornamental; it is foundational. A single paragraph from a Constitution Bench ruling can shape the trajectory of a case. An invented paragraph can distort it, with serious consequences.

The consequences of failing to verify AI-generated content are no longer hypothetical.

The Bombay High Court recently imposed costs on a petitioner for submitting AI-generated arguments that cited a non-existent case. Neither a citation nor a copy of the judgment was supplied, and the court and its law clerks were “at pains” to locate the case but could not find it. The court’s warning was unequivocal: while AI tools may assist research, advocates bear a “great responsibility” to cross-verify references and ensure that machine-generated material is genuine.

Several other courts have voiced similar concerns. The Delhi High Court encountered a petition relying on paragraphs from two judgments — one entirely fictitious and another containing fabricated extracts. The court described the episode as an instance of “AI hallucination”.

These episodes reveal a disturbing possibility: fabricated precedents can slip into real proceedings and influence real decisions before being detected.

Loss of trust

Chief Justice Kant highlighted another dimension of the problem. Judges, he said, must now scrutinize the authenticity of every paragraph quoted from cited judgments. What was once a matter of professional trust is becoming an exercise in verification.

Court proceedings rest on an assumption of candor. An advocate’s signature on a pleading carries an implicit assurance that the facts and authorities cited are accurate to the best of their knowledge. If courts must routinely fact-check basic citations, scarce judicial time — already stretched — will be further consumed.

The risk is particularly acute in trial courts and quasi-judicial forums where digital research infrastructure may be uneven and time pressures intense. A fabricated Supreme Court ruling cited before a lower court could, at least temporarily, influence a judge who assumes it has been verified. Even if corrected later, delay, cost and confusion follow.

Beyond inconvenience lies a serious ethical question. Advocates in India are governed by the Advocates Act, 1961, and the Bar Council of India Rules. These impose duties of integrity, prohibit misleading the court, and require adherence to high standards of professional conduct. Submitting a petition containing fabricated case law, whether generated by AI or otherwise, may amount to professional misconduct. The obligation to verify rests squarely on the advocate.

Courts have previously imposed costs and initiated action against lawyers for citing overruled or irrelevant precedents. The negligent or deliberate submission of fake citations could invite sterner consequences, including contempt of court if it is found that the judiciary was knowingly or recklessly misled.

The legal system operates on trust. Once that trust is compromised, the ripple effects are profound.

The rule of law demands rigorous verification, integrity, and commitment to truth; in the AI age, ultimate accountability and responsibility must always remain human, not algorithmic. (Photo: PickPik)

Miscarriage of Justice

The gravest concern is miscarriage of justice. Legal reasoning is cumulative: courts build on earlier rulings, distinguish them, or extend their logic. If the foundation is fictitious, the structure becomes unstable.

Consider a bail application citing a non-existent Supreme Court judgment that purportedly liberalizes bail standards for a particular offence. If relief is granted on that basis, the order may later be challenged — but only after liberty has been affected. Conversely, a fabricated precedent restricting rights could result in unjust denial of relief. In constitutional matters, where interim orders can influence public policy and fundamental rights, the stakes are higher still.

A single erroneous citation may not collapse the edifice of law. Repeated lapses, however, can erode its credibility. The rule of law depends not only on outcomes but on the integrity of process.

Deskilling Risk

A subtler danger lies in the erosion of core professional skills. Legal research, careful reading of judgments, understanding ratio decidendi (the reason for the decision), appreciating the hierarchy of courts, and distinguishing precedents are foundational to advocacy. If young lawyers rely excessively on AI-generated drafts without independent verification, the profession risks gradual deskilling.

Legal practice is not merely the assembly of authorities. It requires context, nuance and judgment. AI tools, however powerful, cannot fully grasp statutory interpretation’s subtleties or the interplay of constitutional principles. An advocate who treats them as infallible may miss distinctions that alter outcomes.

The surge in AI use also raises data privacy issues. Lawyers routinely handle sensitive information — commercial secrets, personal data, and details of ongoing investigations. Uploading such material to free or publicly accessible AI platforms raises serious questions about confidentiality and privilege.

If client data is stored or processed without adequate safeguards, attorney-client privilege may be compromised. In a regulatory environment where data protection frameworks are still evolving, the legal profession has yet to establish uniform guidelines on safe AI usage. The risk strikes at the core of client trust.

AI Usage Guardrails

None of this suggests that AI must be banished from courtrooms. Technology has long transformed legal practice — from typewriters to online databases. Properly deployed, AI can enhance productivity, reduce costs, and improve access to justice.

But guardrails are essential:

(i) Courts could consider practice directions requiring advocates to certify that all citations have been independently verified against authoritative databases.

(ii) Law firms and chambers should adopt internal protocols mandating cross-checking of AI-generated content.

(iii) Bar associations and judicial academies can conduct training on responsible AI use.

(iv) Law schools must teach not only how to deploy these tools, but how to interrogate and verify their outputs.

(v) Above all, accountability must remain human. An AI system is a tool; it does not bear responsibility. The advocate who signs the petition does.

As Indian courts modernize — embracing e-filing, virtual hearings, and digital case management — the integration of AI into legal workflows is inevitable. Yet speed cannot come at the cost of accuracy. Convenience cannot undermine credibility.

The growing incidence of fictional judgments in real petitions is not a minor technical glitch. It is a warning. In a justice system where liberty, property, and constitutional rights are at stake, there is little room for invented authorities.

The rule of law demands rigorous verification, professional integrity, and an unwavering commitment to truth — whether a draft originates in a law chamber or is suggested by an algorithm. In the age of AI, the most indispensable safeguard remains human responsibility.