India’s courts must adopt artificial intelligence responsibly and transparently


Source: This post is based on the article “Set the guardrails for AI use in courtrooms” published in “The Hindu” on 23rd August 2025.


UPSC Syllabus Topic: GS Paper 2 (Justice) and GS Paper 3 (Science and Technology: developments and their applications and effects in everyday life).

Context: In July this year, the Kerala High Court issued the country’s first policy on AI use in district courts. It seeks speed and efficiency in a system with five crore pending cases, but stresses safeguards. The article maps the risks, gaps, and governance needed for responsible adoption.

For detailed information on How AI Can Improve India’s Justice System read this article here

Early Policy and Promised Benefits

  1. First-of-its-kind guidance: Kerala’s policy directly regulates AI in judicial processes. It is timely and sets strict safeguards. It shows openness to innovation with caution.
  2. Efficiency gains for a burdened system: AI can assist translation, transcription, and defect detection in filings. These tools promise faster routine work in overburdened courts.
  3. Experimental deployments underway: Some tools are used on a non-commercial test basis for oral arguments and depositions. Benefits must be weighed against risk and capacity limits.

Practical Risks in Seemingly Simple Tasks

  1. Translation and transcription errors: A judge saw “leave granted” rendered as chhutti sweekaar. In Noel Anthony Clarke v. Guardian News & Media Ltd. [2025] EWHC 550 (KB), an AI tool repeatedly transcribed “Noel” as “no”. OpenAI’s Whisper sometimes hallucinates phrases, especially with long pauses.
  2. Research and citation pitfalls: Search bias can bury precedents by nudging users toward results that match their past patterns. A Journal of Empirical Legal Studies study shows legal LLMs can fabricate case law and cite incorrect sources.
  3. Risks to adjudication quality: Over-reliance on rule-based outputs can sideline human judgment, context, and careful use of precedent in decisions.

Pilots, Dependencies, and Risk Management Gaps

  1. Undefined guardrails for pilots: Pilots often lack time-frames, success parameters, and data access, storage, and use rules for non-public, sensitive, or personal data.
  2. Dependencies and infrastructure needs: “Test” access may create dependencies without a clear path to sustainable adoption. AI needs reliable Internet and adequate hardware.
  3. Tender patterns and oversight reality: Tenders show incomplete risk frameworks. Manual vetting by retired judges, advocates, and translators exists, yet AI learns from changing data and can err.
  4. Hallucinations are inherent: Scholars note hallucinations are a feature, not a bug. Human oversight is essential in high-risk settings.

Governance Measures for Responsible Adoption

  1. Build critical AI literacy: Judges, staff, and lawyers need capacity building to use tools and understand limitations. Judicial academies and bar associations can partner with AI governance experts.
  2. Transparent courtroom use: Guidelines should cover personal use for research and writing. Litigants must be informed when AI is used and should be able to opt out of pilots or deployments whose safeguards concern them.
  3. Procure with standards: Adopt standardised procurement to test reliability and fitness. Pre-procurement should confirm the problem and whether AI is the best solution. Assess explainability, data management, and risk mitigation.
  4. Monitor vendor performance: Frameworks should track compliance and outcomes beyond routine judicial expertise.

Institutional Scaffolding and Purpose

  1. Technology offices under eCourts: The eCourts Phase III Vision Document calls for technology offices to assess, select, and oversee complex digital solutions and infrastructure.
  2. Specialist support for courts: Dedicated specialists can close expertise gaps and guide comprehensive planning for AI adoption.
  3. Keep the justice purpose central: AI must serve the ends of justice. Efficiency should not eclipse nuanced reasoning and human decision-making.
  4. Mind the paper-based reality: Most procedures remain paper-based. Any shift to advanced AI must not debilitate an imperfect system.

Question for practice:

Examine the opportunities and risks of adopting artificial intelligence in the Indian judicial system as highlighted in the article.
