New Delhi: The expanding footprint of AI in governance has inevitably reached the judiciary, provoking both optimism and unease across the world. In India, however, the highest judiciary has drawn a firm boundary around its role.
At a national conference in Bengaluru, Justice Vikram Nath, the Supreme Court’s senior-most judge, articulated a position that captures the core of judicial resistance: “AI can be used as a tool to augment the judicial system but it cannot replace the judgment or judges’ minds as to what is to be decided.” This assertion is not merely cautionary; it is foundational, reflecting a deeper concern about the compatibility of machine logic with the moral and interpretive demands of justice.
At the heart of the skepticism lies the nature of adjudication itself. Legal decision-making is not a mechanical exercise in pattern recognition but a layered process involving context, discretion and human sensitivity. Courts routinely confront situations where identical statutes produce different outcomes because the facts, intentions, and equities vary. Judicial reasoning often requires an appreciation of nuance that cannot be reduced to datasets or predictive models. In matters such as family disputes, criminal liability, or constitutional interpretation, the ability to weigh competing interests is inseparable from human judgment. It is precisely this capacity for moral calibration that AI lacks.
The risks become more pronounced when AI is positioned closer to substantive decision-making. In complex criminal proceedings, for instance, courts may grant bail to some accused while denying it to others within the same case, based on subtle distinctions in evidence and circumstance. Such decisions demand a level of individualized assessment that resists standardization. Similarly, constitutional adjudication involves interpretive philosophies, historical context, and evolving societal values — domains where algorithmic systems, bound by training data, struggle to operate meaningfully.
Adoption Across the World
Despite these limitations, AI is steadily being integrated into judicial systems across the world. Countries such as Brazil, Singapore, the United Kingdom, and China have adopted AI-driven tools to manage case flows, assist with legal research, and improve administrative efficiency. These deployments are often presented as evidence of inevitable technological progress. Yet they also serve as cautionary examples. The more embedded these systems become, the greater the risk that their outputs — however flawed — may begin to influence or even shape judicial reasoning in subtle ways.
Meanwhile, the Indian government has allocated ₹54 crore under Phase-3 of the eCourts project to integrate AI into judicial processes. Tools like the Legal Research Analysis Assistant (LegRAA) and Digital Courts 2.1 aim to support judges in research, case management, and document analysis, while additional features such as ASR-SHRUTI (voice-to-text) and PANINI (translation) assist in drafting judgments. According to the government, these tools remain in a controlled pilot phase, with no instances of bias reported so far.
In the absence of a formal AI policy, the responsibility for developing and managing implementation frameworks will rest with the High Courts. These frameworks will be shaped and governed in accordance with each court’s own rules of business and policy guidelines.
One of the most immediate dangers is the phenomenon of “AI hallucination”, where systems generate convincing but entirely fabricated legal references. Indian courts have already encountered instances where non-existent case law was cited, raising serious questions about the reliability of AI-assisted research. Legal reasoning depends on precedent, and when the authenticity of precedents themselves is compromised, the entire edifice of adjudication is threatened. Errors of this nature are not trivial; they strike directly at the credibility of judicial outcomes and risk eroding public trust.
Beyond textual inaccuracies, AI introduces profound evidentiary challenges. The rise of deepfakes and AI-manipulated digital content complicates the assessment of truth in the courtroom. Photographs and videos, once considered strong forms of evidence, are no longer inherently reliable. Judges may increasingly be forced to rely on forensic verification, shifting the burden onto litigants to prove authenticity. This transformation not only lengthens proceedings but also creates new avenues for deception, undermining the integrity of the justice system.

Concerns about accountability further intensify the debate. AI systems operate through opaque algorithms that are often difficult to audit or interpret. If an AI-assisted recommendation influences a judicial outcome, determining responsibility becomes complex. The diffusion of accountability, from developer to deployer to end-user, creates a grey zone that is incompatible with the judiciary’s need for transparency and reasoned justification.
Even proponents of AI within the legal ecosystem acknowledge these constraints. Judicial voices have emphasized, albeit indirectly, that data-driven intelligence cannot replicate human conscience or the emotional intelligence required to balance rights and liabilities. The act of judging is not merely analytical; it is deeply human, rooted in empathy and societal understanding. To delegate any part of this function to machines risks hollowing out the very essence of justice.
Tangible Advantages
This is not to deny the tangible advantages AI offers. In a country burdened with massive case backlogs, tools that assist in document management, translation, and legal research can significantly enhance efficiency. Faster retrieval of precedents and automated categorization of cases can reduce delays and improve access to justice. However, these benefits exist firmly within the realm of assistance, not adjudication. The danger lies in conflating efficiency with wisdom.
The path forward, therefore, demands restraint and rigorous oversight. Without institutional safeguards, such as standards for AI deployment, mechanisms for auditing outputs, and clear accountability frameworks, the integration of AI risks becoming uneven and potentially hazardous. The experience of other jurisdictions underscores that technological adoption, if left unchecked, can outpace the development of ethical and legal guardrails.
Ultimately, the judiciary stands at a critical intersection between innovation and principle. While AI may enhance the mechanics of justice delivery, it cannot embody the values that underpin it. The legitimacy of courts rests on public faith in their ability to deliver reasoned, humane, and context-sensitive decisions. Any erosion of that faith, whether through error, opacity, or over-reliance on machines, carries consequences far beyond efficiency gains.
The warning from India’s judiciary is therefore both timely and necessary. AI may assist the system, but the responsibility of judgment, anchored in human conscience, must remain firmly in human hands.