Artificial intelligence is also making inroads into India’s justice delivery system. Courts are exploring digital tools to manage cases, predict outcomes, and even inform decisions. The Supreme Court of India has spearheaded projects such as SUPACE (Supreme Court Portal for Assistance in Courts Efficiency) to aid judges in legal research, in sorting precedents, and in managing bulky case files. This marks an institutional acknowledgment that technology can speed up the glacial pace of justice administration and ease the colossal backlog of over 4.5 crore cases pending across the country’s courts.
AI offers efficiency, but fairness is also at stake. Algorithms are only as good as the data fed into them; if that data is biased, the results will reflect and, in many cases, reinforce that bias. Article 14 of the Constitution of India guarantees equality before the law, and justice must remain free from the vice of arbitrariness.[i] Algorithmic decision-making that entrenches structural inequality against marginalized communities is at odds with this constitutional vision. The US example, where the COMPAS software used in sentencing proved discriminatory against African-Americans, is a case in point. The Indian judiciary cannot overlook these risks as it incorporates artificial intelligence into its institutional routines.[ii]
These natural justice principles require nothing less than transparency and accountability. Yet most AI models operate as what are described as “black boxes.” The term does not mean that their operations are completely unknowable, but rather that the statistical heuristics and pattern-matching at their core lack human-readable reasoning. As Salvaggio notes, the “black box” metaphor can itself be misleading, since AI does not make moral or discretionary choices but deploys probabilistic mimicry of the data fed into it.[iii] This opacity denies litigants the reasoned decision-making guaranteed under Article 21 in Part III of the Constitution.[iv] Without explainable and contestable systems, AI-driven choices risk becoming immune to scrutiny. International guidelines such as the Organisation for Economic Co-operation and Development (OECD) Principles on Artificial Intelligence underscore that transparency, accountability, and human oversight are non-negotiable when deploying AI in governance and law.
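To make the contrast concrete, the following toy sketch shows what an “explainable” score looks like: a decomposition into per-feature contributions that a litigant could actually contest, as opposed to a bare opaque number. All weights, feature names, and values here are invented for illustration and are not drawn from any real judicial tool.

```python
# Hypothetical illustration only: a toy linear "case score" whose output
# can be decomposed into per-feature contributions. Real judicial AI is
# far more complex, but "explainability" means exposing this kind of
# breakdown rather than an unexplained number.
weights = {"prior_cases": 0.5, "days_pending": 0.25, "documents_filed": -0.25}

def score_with_explanation(case):
    # Return both the overall score and the contribution of each feature,
    # so the basis of the result is open to challenge.
    contributions = {f: weights[f] * case[f] for f in weights}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"prior_cases": 2, "days_pending": 1, "documents_filed": 4}
)
print(total)  # 1.0 + 0.25 - 1.0 = 0.25
print(why)    # the per-feature breakdown a litigant could contest
```

An opaque system would emit only `total`; a contestable one must also surface something like `why`.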
There is a socio-economic layer to AI in courts as well. Access to AI tools is inequitable: richer litigants and law firms can purchase sophisticated analytics and predictive systems, while poorer people still rely on ordinary legal aid. This incongruence is at odds with the constitutional mandate of equal justice under Article 39A and obliges legal aid authorities in India to implement AI carefully, in a manner that promotes equality rather than exacerbating the gap between rich and poor litigants.[v]
India’s use of AI in justice delivery will need to strike a balance between innovation and constitutional morality. Comparable jurisdictions such as the European Union have already passed the AI Act, which prohibits AI practices that infringe fundamental rights and imposes strict obligations on high-risk uses. Indian law does have the Information Technology Act, 2000, but lacks any specific protections governing judicial AI tools. The absence of any comparable Indian regulation of algorithmic bias further underscores the need to accelerate the building of a national regulatory infrastructure that ensures the use of AI is consistent with dignity, equality, and fairness.[vi]
Algorithmic Bias: Concept And Concerns
A. Meaning and sources of bias
Algorithmic bias refers to systematic distortions in outcomes produced by artificial intelligence systems. These distortions often replicate or amplify pre-existing human prejudices embedded in the data. AI models do not operate in isolation; they rely on training datasets, coding structures, and design decisions made by developers. When those inputs reflect social inequalities, the outputs mirror such inequalities in ways that affect decision-making in justice delivery and governance.[vii]
Bias originates from multiple sources. The first source is data bias. Historical datasets used for training are rarely neutral. For example, if crime data from metropolitan police stations over-represent arrests of persons from lower socio-economic groups, an algorithm trained on that data will predict higher risk scores for such groups. In India, this links with caste-based and socio-economic profiling, raising concerns under Article 15 of the Constitution of India, which prohibits discrimination on grounds of religion, race, caste, sex, or place of birth.[viii]
The second source is design bias. Human coders make subjective choices in setting parameters, classifying variables, and framing objectives. These decisions often reflect implicit biases. A further source is interaction bias. AI systems evolve through reinforcement learning, where continuous interaction with users shapes outcomes. Social media algorithms are classic examples, where repeated engagement with certain narratives perpetuates stereotypes. When applied to justice delivery, similar feedback loops may reinforce patterns of exclusion, thereby compromising the constitutional guarantee of equal protection under Article 14.
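The data-bias mechanism described above can be reduced to a deliberately crude sketch. The groups, arrest counts, and scoring rule below are invented for illustration; the point is that a model learning only from recorded arrest frequency will present a policing skew as a “risk” prediction.

```python
# Hypothetical figures: over-policing means group "X" is recorded far
# more often, even if underlying offence rates are similar.
historical_arrests = {"X": 80, "Y": 20}   # recorded arrests per group
population         = {"X": 100, "Y": 100}

def naive_risk_score(group):
    # A model trained only on arrest frequency reproduces the policing
    # skew as a "risk" prediction for every member of the group.
    return historical_arrests[group] / population[group]

print(naive_risk_score("X"))  # 0.8
print(naive_risk_score("Y"))  # 0.2
```

The fourfold disparity in output comes entirely from the recorded data, not from anything the model “knows” about either group.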
Prejudices are also grounded in structural and cultural biases. Technology is not value-free; rather, it embodies the sociopolitical moment in which it is created. In an India stratified along lines of caste and gender, the threat of algorithms entrenching these inegalitarian hierarchies is higher. In Navtej Singh Johar v. Union of India,[ix] the Supreme Court relied on constitutional morality as a bulwark against the tyranny of the majority. Algorithmic models that reproduce social bias offend this constitutional norm and call for greater scrutiny.
B. Impact on fairness and equality
Algorithmic bias distorts fairness. When AI models produce biased results from faulty data, equality under the law erodes. The Constitution, in Article 14, requires equal legal protection, but algorithmic systems can treat similarly situated people differently because of biases embedded in the training data. Recent audits of Indian AI tools confirm this risk. One study found a fairness gap of 0.237 in bail-prediction models trained on Hindi legal documents, suggesting that religious information may influence the algorithm’s decisions.[x] Such disparity runs counter to a constitutional vision of substantive equality.
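A “fairness gap” of this kind is typically a difference in favourable-outcome rates between groups. The following is a minimal sketch assuming the demographic-parity-difference metric; the cited study’s exact metric may differ, and all data below is invented.

```python
def selection_rate(predictions, groups, group):
    # Fraction of a group's members predicted favourably (e.g. bail granted).
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

def fairness_gap(predictions, groups, group_a, group_b):
    # Demographic-parity difference: |P(pred=1 | A) - P(pred=1 | B)|.
    return abs(selection_rate(predictions, groups, group_a)
               - selection_rate(predictions, groups, group_b))

# Invented bail predictions (1 = bail granted) for two groups of five.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(fairness_gap(preds, groups, "A", "B"))  # rates 0.6 vs 0.2, gap of ~0.4
```

On this metric, a perfectly even-handed model scores 0; a gap of 0.237 would mean the favourable-outcome rate differs between groups by roughly 24 percentage points.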
Judicial institutions are responding. Through its July 2025 policy, the Kerala High Court barred the use of AI for legal reasoning or case decisions in the district judiciary. It called for “extreme caution” in AI deployment and flagged possible dangers to privacy, trust, and transparency. Such judicial safeguards reinforce the commitment that justice must not only be done but be seen to be done. There must also be transparency and contestability: black-box systems prevent litigants from challenging the outcomes of algorithms. Globally, frameworks reinforce these values. UNESCO’s Recommendation on the Ethics of Artificial Intelligence stresses fairness, non-discrimination, and procedural transparency. The EU’s GDPR, under Article 22, protects individuals from wholly automated decisions that significantly affect them.[xi] These norms underscore that fairness and equality in algorithmic governance are human rights imperatives.
Access To Justice In India
A. Constitutional and Legal Foundations
The Constitution of India recognizes access to justice as a core constitutional value. Article 14 guarantees equality before law and equal protection of the laws. This equality is not formal but substantive, ensuring that justice is accessible to every person irrespective of status. Judicial pronouncements have consistently linked Article 14 with fairness and absence of arbitrariness in state action. In E.P. Royappa v. State of Tamil Nadu, the Supreme Court held that arbitrariness violates equality, thereby embedding fairness as an essential component of access to justice.[xii]
Article 21 of the Constitution protects the right to life and personal liberty. The Supreme Court has expanded this provision to include the right to speedy justice and fair trial. Directive Principles of State Policy reinforce these guarantees. Article 39A directs the State to secure equal justice and free legal aid. Though non-justiciable, this provision has been judicially enforced to establish legal services institutions. The Legal Services Authorities Act, 1987, operationalizes this directive by creating a statutory framework for free legal services. In Khatri v. State of Bihar, the Supreme Court stressed that “legal aid is not a charity but a constitutional right flowing from Article 21 read with Article 39A.”[xiii]
The judiciary has also recognized that access to justice requires procedural fairness and removal of economic and social barriers. In Anita Kushwaha v. Pushap Sudan, the Court elaborated that access to justice comprises four elements: “just and fair adjudication, reasonable access to courts, affordability of legal remedies, and timely resolution.” This judgment places access to justice at the center of constitutional governance and emphasizes that denial of any of these elements amounts to denial of equality itself.[xiv] International human rights law supplements these constitutional foundations. Article 14 of the International Covenant on Civil and Political Rights (ICCPR) guarantees fair trial and equal access to courts. India, being a party to the Covenant, has obligations to align domestic legal structures with these principles.
B. Role of Technology and E-Courts
The Indian judiciary has increasingly turned to technology to confront the mounting crisis of pendency. The E-Courts Mission Mode Project, initiated under the National e-Governance Plan, aimed to digitize case management, enable e-filing, and introduce video-conferencing for hearings. By Phase II, the project had brought computerization to more than 16,000 district and subordinate courts, creating a foundation for an integrated justice delivery system that reduces physical barriers to access.
The Supreme Court’s initiative during the COVID-19 pandemic expanded virtual hearings to an unprecedented scale. In Swapnil Tripathi v. Supreme Court of India,[xv] the Court permitted live-streaming of proceedings in matters of constitutional importance, recognizing that technology enhances transparency and strengthens the principle of open justice. This step reflected judicial acknowledgment that technology can democratize access by allowing citizens direct insight into courtroom proceedings.
E-filing also streamlines access to court procedures for litigants, advocates, and parties appearing in person. Filing a petition online removes cost and geographic barriers. Article 39A’s guarantee of free legal aid and equal justice is further buttressed when litigants from rural and poor communities no longer have to travel great distances to attend proceedings. But the digital divide is structural, with internet connectivity still not reaching many rural parts of the country. This double-edged sword creates a paradox: technology opens access for some while denying it to others.
Comparative experience reinforces the role of digital courts in widening access. The United Kingdom has advanced Online Courts for small claims, while Singapore has developed the Community Justice and Tribunals System, allowing litigants to file and resolve disputes entirely online. These models show that technology, when regulated and inclusive, can transform legal systems into citizen-centric institutions. India’s adoption of e-courts thus carries the potential to bridge structural inequalities, provided safeguards against exclusion and bias are firmly built into the process.[xvi]
Algorithmic Bias And Indian Justice System
The Indian justice system is beginning to integrate artificial intelligence in multiple ways. The Supreme Court has initiated SUPACE, designed to assist judges with research and management of bulky records. High Courts experiment with translation tools, virtual hearings, and predictive analytics. These developments show that AI is no longer peripheral to judicial administration. Yet the risks of algorithmic bias remain acute because justice in India functions within a society marked by deep social divisions.[xvii]
Algorithmic models trained on historical data can replicate entrenched caste, gender, and class biases. Indian criminal justice records reflect over-policing of Dalits, minorities, and marginalized communities. If predictive policing or risk assessment tools are adopted, they may systematically disadvantage these groups. Such outcomes would collide with Article 14 and Article 15 of the Constitution which prohibit arbitrariness and discrimination. In State of West Bengal v. Anwar Ali Sarkar,[xviii] the Court held that equality demands not only equal laws but equal application. Algorithms reproducing structural inequalities would offend this settled principle.
Fair trial rights under Article 21 are also at stake. Automated tools that suggest bail outcomes, sentencing ranges, or conviction probabilities risk undermining judicial discretion. If judges rely heavily on AI-generated recommendations, litigants may find it difficult to challenge the reasoning. This undermines transparency and accountability which are part of natural justice. In Maneka Gandhi v. Union of India,[xix] the Supreme Court held that “procedure established by law” must be just, fair, and reasonable. Use of opaque algorithms in judicial process would fall short of this constitutional requirement.
Bias in AI also creates barriers to access to justice under Article 39A. Wealthier law firms may deploy advanced analytics to predict case outcomes and strategize, while weaker litigants rely only on legal aid. This asymmetry deepens existing inequality in legal representation. In Anita Kushwaha v. Pushap Sudan,[xx] the Court recognized affordability as an essential element of access to justice. Algorithmic asymmetry between litigants thus contradicts constitutional directives on equal justice.
Comparative Insights
The United States has played a leading role in the debate over algorithmic bias in criminal justice. Risk assessment algorithms such as COMPAS were used to aid judges in sentencing and bail determinations. But investigative studies found that the software unfairly tagged Black defendants as high risk. The Wisconsin Supreme Court in State v. Loomis[xxi] recognized concerns about transparency but allowed the use of such tools with cautionary safeguards. It is a case study in how court systems grapple with the tension between efficiency and fairness in automated decision-making.
The EU has acted preventively through extensive regulation. The Artificial Intelligence Act adopts a risk-based approach to categorizing AI systems. High-risk systems, such as those employed in policing and the administration of justice, face stringent transparency, human oversight, and non-discrimination requirements. The EU additionally provides rights under the General Data Protection Regulation, notably Article 22’s protection against automated decisions that significantly affect individuals’ rights. These provisions express a rights-oriented regulatory philosophy: human dignity sits at the heart of technology governance.
Canada has adopted the Directive on Automated Decision-Making, requiring algorithmic impact assessments before AI is used in public decision-making. This involves requirements for bias testing, explainability, and redress mechanisms. Such preventative schemes signal that the state’s use of algorithms must conform to constitutional norms of fairness and equality.[xxii]
India, by contrast, is yet to enact legislation on algorithmic accountability in justice delivery. Courts have interpreted Articles 14 and 21 to prevent arbitrariness, but unlike the EU or Canada, India has no organized system of regulation for algorithmic transparency. These comparative insights highlight the need for India’s lawmakers to erect strong statutory bulwarks to ensure that technology enhances, rather than undermines, the constitutional guarantee of equal justice.
Way Forward
Integrating AI into the Indian judiciary requires a rights-based approach. Constitutional guarantees under Articles 14, 15, and 21 demand that any use of algorithms in dispensing justice be fair, transparent, and accountable. Parliament should legislate a standalone statute for AI in the courts (following the EU’s AI Act model), categorising judicial AI as “high risk” and imposing strict regulatory controls. Without these statutory guardrails, the risk of institutionalizing discrimination will go unchecked.
Algorithmic transparency must become a constitutional right. Models that make their way into courts must be explainable and auditable, and litigants should be able to learn how an AI-produced result affected a judicial opinion. This is consistent with the rule of natural justice that no one should suffer from a decision whose foundation is not disclosed. Judicial training must also be part of the framework: judges, lawyers, and legal scholars will need training in how to interpret algorithmic decision-making, a capability that keeps human oversight at the core. The Canadian Directive on Automated Decision-Making, which requires bias testing, human oversight, and impact assessments prior to the deployment of automated tools, offers a useful model. Such institutional reforms in India would reduce the chances of discrimination and protect constitutional morality.
AI adoption must be informed by the public and stakeholders. Policymaking must involve civil society, legal aid entities, and representatives of marginalized communities to ensure inclusive design. Article 39A of the Constitution mandates equal justice for all, and participatory lawmaking will ensure that AI does not raise new walls of exclusion for the vulnerable. Greater international cooperation can also improve India’s regulatory framework by combining best practices from the OECD Principles on Trustworthy AI, United Nations human rights frameworks, and European Union regulations to align with global standards.
Conclusion
Algorithmic bias is one of the most serious obstacles to the realisation of justice in India. While artificial intelligence in the judicial system holds out possibilities for efficiency, it cannot be divorced from constitutional morality. Models fed on biased data risk reiterating the caste, gender, and class hierarchies perpetuated within Indian society. These outcomes bear directly upon Article 14’s egalitarian commandment and Article 15’s proscription of discrimination. The Supreme Court in E.P. Royappa v. State of Tamil Nadu[xxiii] observed that equality is antithetical to arbitrariness, a principle that must guide scrutiny of algorithmic decision-making.
Justice cannot be served by speed alone; it must also be fair and transparent. Black-box operation without explainability sabotages that principle, as it strips parties of the opportunity to contest the rationality of decisions. In due process terms, no one may be penalized on the basis of secret practices. Experiences elsewhere indicate that unregulated AI can widen inequality. The US debate over COMPAS, the predictive tool that tagged minorities as high risk, shows that algorithms can inherit social biases. In the European Union, regulators responded by classifying judicial AI as high-risk under the Artificial Intelligence Act and requiring strict oversight. Canada put in place algorithmic impact assessments through its Directive on Automated Decision-Making to safeguard rights. These measures illustrate that proactive oversight is necessary to keep AI from exacerbating injustice.[xxiv]
Indian law has yet to evolve a comprehensive framework. The Information Technology Act, 2000, addresses electronic governance but is silent on algorithmic accountability. The Personal Data Protection Bill, 2019, does not deal adequately with systemic bias.[xxv] The way forward requires statutory safeguards, judicial oversight, and institutional training. Transparency, accountability, and inclusivity must anchor the deployment of AI in justice delivery. International human rights instruments such as Article 26 of the ICCPR reinforce India’s obligation to eliminate discrimination from legal processes. Aligning AI adoption with constitutional values will ensure that technology strengthens rather than weakens the quest for equal justice.[xxvi]
[i] The Constitution of India, art. 14.
[ii] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
[iii] Eryk Salvaggio, ‘The Black Box Myth: What the Industry Pretends Not to Know About AI’ (Tech Policy Press, 6 September 2023) https://www.techpolicy.press/the-black-box-myth-what-the-industry-pretends-not-to-know-about-ai/ accessed 30 August 2025.
[iv] The Constitution of India, art. 21.
[v] The Constitution of India, art. 39A.
[vi] European Commission, “Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)” https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (last visited on Aug. 21, 2025).
[vii] Barocas, Solon, and Andrew D. Selbst. “Big Data’s Disparate Impact.” California Law Review, vol. 104, no. 3, 2016, pp. 671–732. JSTOR, http://www.jstor.org/stable/24758720 (last visited on Aug. 21, 2025).
[viii] The Constitution of India, art. 15.
[ix] Navtej Singh Johar v. Union of India, (2018) 10 SCC 1.
[x] Sahil Girhepuje et al., “Are Models Trained on Indian Legal Data Fair?” arXiv preprint (2023) (fairness gap of 0.237 in bail prediction models).
[xi] UNESCO, “Recommendation on the Ethics of Artificial Intelligence” (2021); Regulation (EU) 2016/679 (GDPR), art. 22.
[xii] E.P. Royappa v. State of Tamil Nadu, AIR 1974 SC 555.
[xiii] Khatri v. State of Bihar, AIR 1981 SC 928.
[xiv] Anita Kushwaha v. Pushap Sudan, (2016) 8 SCC 509.
[xv] Swapnil Tripathi v. Supreme Court of India, (2018) 10 SCC 639.
[xvi] Richard Susskind, Online Courts and the Future of Justice 45 (Oxford University Press, 2019).
[xvii] Government of India, “E-Courts Mission Mode Project Phase II – Policy and Action Plan” (Ministry of Law and Justice, 2015).
[xviii] State of West Bengal v. Anwar Ali Sarkar, AIR 1952 SC 75.
[xix] Maneka Gandhi v. Union of India, AIR 1978 SC 597
[xx] Anita Kushwaha v. Pushap Sudan, (2016) 8 SCC 509
[xxi] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
[xxii] Government of Canada, “Directive on Automated Decision-Making” (Treasury Board of Canada Secretariat, 2019).
[xxiii] E.P. Royappa v. State of Tamil Nadu, AIR 1974 SC 555
[xxiv] Government of Canada, “Directive on Automated Decision-Making” (Treasury Board Secretariat, 2019).
[xxv] Personal Data Protection Bill 2019 (India)
[xxvi] International Covenant on Civil and Political Rights, 1966, art. 26