
Algorithmic Discrimination in India’s Legal System: Constitutional Challenges and Policy Reform

written by Guest Author


Artificial Intelligence is no longer confined to futuristic imagination; it is already transforming India’s legal system. The government has invested heavily, with Phase III of the eCourts Project receiving over seven thousand crore rupees, a portion earmarked for AI and blockchain in High Courts.[i] The seriousness of this investment is clear. Under this project, AI is being integrated for intelligent scheduling, prediction and forecasting, administrative efficiency, Natural Language Processing (NLP), automated filing, improved case information systems, litigant-facing chatbots, and translation.

Some systems are already operational. The Supreme Court uses SUVAS[ii] for translation and SUPACE[iii] for research support. States have adopted predictive policing and facial recognition, with Delhi Police’s CMAPS platform touted as a scientific way to predict and prevent crime.[iv] On the surface, these developments suggest efficiency and modernity.

Yet the speed of adoption is unsettling when compared with the absence of regulation. The 2023 Digital Personal Data Protection Act, though significant, barely addresses issues like algorithmic audits, bias detection, or transparency.[v] What happens when an algorithm makes a decision that reshapes someone’s rights, and no one can explain how it reached that conclusion?

Other countries have already paid the price of such oversight. In the United States, the COMPAS system marked Black defendants as high risk at nearly twice the rate of white defendants, despite similar records.[vi] In the Netherlands, the SyRI welfare fraud system was struck down for opacity and discrimination.[vii] These examples underline a simple truth: algorithms are not neutral. They inherit biases from data and design, often cementing discrimination under the guise of objectivity.

India stands at a crucial moment. The question is not whether AI will enter governance but how it will be governed. Without safeguards, efficiency may come at the expense of justice, embedding bias more deeply into the very systems meant to uphold fairness.

The COMPAS Controversy: Lessons from Abroad

The story of COMPAS illustrates the dangers vividly. ProPublica’s 2016 investigation exposed that Black defendants were much more likely to be flagged as high risk compared to white defendants, even when they had comparable criminal histories.[viii] The consequences were not abstract. In Wisconsin, Eric Loomis[ix] received a harsher sentence partly because of his COMPAS score. Yet neither he nor his lawyer could interrogate the algorithm because it was considered proprietary. The state Supreme Court accepted its use but expressed discomfort with its opacity, creating an uneasy precedent where justice seemed filtered through a black box.[x]

What makes the COMPAS case particularly relevant to India is how it demonstrates the limits of formal equality. The algorithm did not explicitly consider race as a factor, yet it produced systematically biased outcomes because it relied on variables that served as proxies for racial discrimination. This form of indirect discrimination, though more subtle, can be equally destructive.
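
To make the proxy mechanism concrete, the short Python sketch below simulates a risk tool that never sees group membership yet still over-flags one group, because its only input, a recorded prior-arrest count, is inflated for that group by heavier policing. Every number, threshold, and variable name here is a hypothetical illustration, not a figure from COMPAS or any real system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                 # 0 = majority, 1 = minority (labels purely illustrative)

# Assume identical true reoffending propensity in both groups.
reoffends = rng.random(n) < 0.30

# Proxy feature: recorded prior arrests. Heavier policing inflates the count
# for group 1 regardless of actual behaviour.
prior_arrests = rng.poisson(1.0 + 1.5 * reoffends + 1.0 * group)

# The "risk score" uses only the proxy feature, never the group label.
flagged = prior_arrests >= 3

for g in (0, 1):
    innocent = (group == g) & ~reoffends      # people who would not reoffend
    fpr = flagged[innocent].mean()            # how often they are wrongly labelled high risk
    print(f"group {g}: false positive rate = {fpr:.1%}")
```

Under these assumed numbers, non-reoffenders in the over-policed group are flagged roughly four times as often, even though the group label never enters the model.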

Predictive Policing in India: The CMAPS System

Delhi Police’s Crime Mapping Analytics and Predictive System (CMAPS) shows how algorithms can entrench social bias instead of correcting it. Researchers have found that CMAPS disproportionately targets Muslim and Dalit communities because it draws on historical crime data already shaped by decades of discriminatory policing.[xi] This creates what scholars call a “triple bind”: minority neighborhoods are over-surveilled by the police, which generates more arrests, which in turn mark these areas as high risk and prompt even more surveillance. The cycle masquerades as objective, but in reality it amplifies old prejudices. The 2019 Status of Policing in India Report had already documented widespread bias among police personnel.[xii] CMAPS does not erase such attitudes; it encodes them in software, making discrimination more systematic and far harder to detect.
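
The feedback loop can be illustrated with a deliberately simplified toy model: two areas with identical underlying crime, a legacy record skewed against one of them, patrols that follow the record, and new records generated only where patrols are sent. The allocation rule, the numbers, and the area labels are hypothetical simplifications for illustration, not a description of how CMAPS actually works.

```python
# Toy model of a predictive policing feedback loop (illustrative numbers only).
# Both areas have the same true crime rate; the historical record is skewed
# toward area B, patrols follow the record, and only patrolled crime is recorded.
true_rate = {"A": 100, "B": 100}            # identical underlying offending
recorded = {"A": 40, "B": 60}               # legacy data already skewed toward B

for year in range(1, 6):
    hotspot = max(recorded, key=recorded.get)        # where the system sends patrols
    recorded[hotspot] += true_rate[hotspot] // 2     # incidents found where police look
    print(f"year {year}: patrols sent to {hotspot}, recorded counts = {recorded}")
```

Area B keeps “confirming” its own risk label every year, while area A, with identical true crime, never generates the data that would correct the record.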

Facial Recognition Technology and Minority Targeting

The deployment of facial recognition technology across Indian states has raised particularly acute concerns about discriminatory targeting of minorities. Following communal violence in Delhi’s Jahangirpuri area, police extensively used FRT to identify and arrest individuals[xiii], with the vast majority of those charged being Muslim. This pattern echoed earlier incidents and suggested systematic bias in both the technology’s deployment and its algorithmic functioning.

Research by the Vidhi Centre for Legal Policy demonstrates that FRT systems exhibit higher error rates for women and individuals from minority communities.[xiv] But the problem runs deeper than technical limitations. The technology is disproportionately deployed in areas with significant Muslim populations[xv], creating a surveillance infrastructure that specifically targets already marginalized communities.[xvi][xvii]
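
A simple audit sketch shows why a single aggregate accuracy figure can mask this kind of disparity: errors have to be disaggregated by group before any deployment decision is made. The counts below are hypothetical and are not taken from the Vidhi study or any Indian deployment.

```python
# Hypothetical per-group evaluation counts for a face matching system:
# (non-match trials, false matches, genuine-match trials, missed matches)
results = {
    "group_a": (9_000, 90, 1_000, 20),
    "group_b": (2_000, 160, 300, 30),
}

total_trials = total_errors = 0
for group, (non_match, false_match, match, missed) in results.items():
    fmr = false_match / non_match       # strangers wrongly flagged as a match
    fnmr = missed / match               # genuine matches the system failed to find
    total_trials += non_match + match
    total_errors += false_match + missed
    print(f"{group}: false match rate = {fmr:.1%}, false non-match rate = {fnmr:.1%}")

# The headline figure looks acceptable even though group_b's false match rate is 8x higher.
print(f"aggregate error rate = {total_errors / total_trials:.1%}")
```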

The constitutional implications are profound. When state authorities use technology that systematically produces higher error rates for certain communities, and then deploy that technology specifically in areas where those communities live, the result is state-sponsored discrimination bearing the seal of technological authority.

Constitutional Analysis: Rights in the Age of Algorithms

The Privacy Precedent: The judgment in Justice K.S. Puttaswamy v. Union of India (2017) is a constitutional landmark that resonates deeply with these developments.[xviii] By affirming privacy as intrinsic to life and liberty, it laid the groundwork for contesting opaque algorithmic systems. Importantly, it framed privacy not just as protection from intrusion but also as the right to make autonomous decisions and to demand explanations for state action.[xix]

Justice Chandrachud’s statement that privacy includes the right “to be told why” is especially relevant.[xx] When government agencies use AI systems to make determinations about bail, employment or welfare access, individuals have a constitutional right to understand the basis for those decisions. Current AI systems, operating as black boxes, systematically violate this principle.

Equality and Non-Discrimination

Indian constitutional law offers firm ground for challenging algorithmic bias. The Supreme Court’s shift from formal to substantive equality, articulated most clearly in E.P. Royappa v. State of Tamil Nadu (1974), declared that “equality is antithetical to arbitrariness.”[xxi] This principle maps directly onto AI systems, which often produce opaque or arbitrary outcomes. The doctrine of indirect discrimination, developed in Air India v. Nergesh Meerza (1981), strengthens this position by capturing facially neutral rules that disproportionately harm protected groups.[xxii] Algorithms need not be intentionally biased: if their results consistently disadvantage minorities without adequate justification, they embody the very arbitrariness constitutional law rejects.

Procedural Fairness and Natural Justice

The principle of natural justice faces unprecedented challenges from algorithmic decision-making systems. The Supreme Court’s emphasis in Olga Tellis v. Bombay Municipal Corporation (1985) on both “the right to be heard from, and the right to be told why” establishes that procedural fairness encompasses an explanation of reasoning.[xxiii] This is precisely what current AI systems cannot provide.

When life and liberty are at stake, the opacity of algorithms cannot be tolerated. Opacity may be accepted in other domains for efficiency’s sake, but the law demands transparency when fundamental rights are involved. The “explainability imperative” is therefore not a luxury but a necessity.

International Comparisons

The European Union’s AI Act provides a stark contrast to India’s regulatory approach. The EU framework adopts a risk-based approach that categorizes AI systems by their potential for harm and imposes increasingly stringent requirements on higher-risk applications.[xxiv] Systems used in law enforcement and judicial processes face strict requirements for transparency, human supervision, and robust redress mechanisms.[xxv]

These international frameworks share several characteristics notably absent from Indian approaches: proactive assessment of discriminatory impact, mandatory transparency requirements, meaningful human oversight, and accessible redress mechanisms. They recognize that algorithmic systems pose distinctive challenges to established rights protections that require new forms of regulatory intervention.

Policy Recommendations

India urgently needs dedicated AI regulation grounded in constitutional values and international human rights standards. Such legislation should move beyond the current patchwork approach to address the distinctive challenges posed by algorithmic decision-making systems. India cannot afford to continue without robust safeguards; the task is to ensure that technological innovation serves, rather than undermines, the constitutional commitments to equality and justice.

India must adopt mandatory algorithmic impact assessments, modeled on global practices, before deploying AI in governance. These reviews should examine discriminatory risks, engage affected communities, and address historical, representational, and proxy-based biases that can distort outcomes. Technical checks alone are insufficient without genuine public consultation. Equally vital is impartial, apolitical human-in-the-loop oversight: automated decisions on bail, welfare, or jobs must never stand uncontested. Human reviewers, empowered and accountable, should have the authority to question and overturn algorithmic outputs when constitutional principles are at stake.
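
As a rough illustration of what such safeguards could look like in practice, the sketch below combines a pre-deployment disparity check with a rule that routes adverse automated decisions to a human reviewer. The four-fifths selection-rate threshold is borrowed from US employment-testing practice purely as an example, and every name, number, and field in the code is a hypothetical assumption rather than a requirement of Indian law.

```python
from dataclasses import dataclass

def disparate_impact_ratio(selection_rates: dict) -> float:
    """Lowest group selection rate divided by the highest (1.0 means parity)."""
    return min(selection_rates.values()) / max(selection_rates.values())

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "grant_bail" or "deny_bail"
    model_score: float

def requires_human_review(decision: Decision, audit_ratio: float,
                          threshold: float = 0.8) -> bool:
    """Adverse outcomes, or any output of a system failing the audit, go to a human."""
    return decision.outcome.startswith("deny") or audit_ratio < threshold

# Hypothetical pilot data: selection rates observed for two communities.
rates = {"community_x": 0.35, "community_y": 0.62}
ratio = disparate_impact_ratio(rates)
print(f"disparate impact ratio = {ratio:.2f}")   # 0.56, below the illustrative 0.8 threshold

decision = Decision("case-001", "deny_bail", 0.81)
print("human review required:", requires_human_review(decision, ratio))
```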

Conclusion

India stands at a decisive moment in its encounter with Artificial Intelligence. On one hand, AI promises faster courts, smarter policing, and a more efficient state. On the other, it threatens to entrench discrimination and erode constitutional protections. These dangers are not distant but are already visible in predictive policing that targets minority communities and in facial recognition systems that misidentify and over-surveil.

The troubling part is that technologies marketed as neutral and objective often amplify old biases, making them harder to detect or contest. Historical prejudices, once visible in the actions of individuals, can now be embedded in datasets and algorithms that present themselves as scientific. This shift makes discrimination not only more pervasive but also less transparent.

Yet India’s constitutional framework offers grounds for cautious optimism. The right to privacy recognized in Justice K.S. Puttaswamy, the guarantees of equality under Articles 14, 15, and 16, and the principles of natural justice all extend to decisions made by machines. These doctrines demand transparency, reasoned explanation, and fairness, qualities absent from current AI deployments.

Other countries have shown that regulation is possible. The European Union’s AI Act, for instance, adopts a risk-based approach, requiring stronger safeguards for technologies with greater potential to harm. Such models illustrate that innovation and rights protection can coexist when grounded in careful governance.

For India, the way forward lies not in rejecting AI but in shaping its adoption. Dedicated laws, algorithmic impact assessments, and meaningful human oversight are not barriers to progress; they are the guardrails of constitutional democracy. Without them, India risks creating a two-tiered system in which some citizens retain constitutional protections while others are left at the mercy of opaque machines.

The challenge is clear: AI must serve human dignity, fairness, and equality and not replace them. The question is whether India will act before the damage is done.


Endnotes

[i] Press Information Bureau, Government of India, “Use of AI in Supreme Court Case Management”, Written Reply by Minister of State for Law and Justice Shri Arjun Ram Meghwal in Rajya Sabha, 19 March 2025, available at: https://www.pib.gov.in/PressReleasePage.aspx?PRID=2100323

[ii] Government of India, Ministry of Law and Justice, “AI backed SUVAS translation tool intended to make legalese simpler, court proceedings faster: Law minister”, Economic Times, 10 August 2023, available at: https://government.economictimes.indiatimes.com/news/technology/ai-backed-suvas-translation-tool-intended-to-make-legalese-simpler-court-proceedings-faster-law-minister/102648151

[iii] Supreme Court of India, “CJI launches top court’s AI-driven research portal”, The Indian Express, 6 April 2021, available at: https://indianexpress.com/article/india/cji-launches-top-courts-ai-driven-research-portal-7261821/

[iv] Vidushi Marda & Shivangi Narayan, “Data in New Delhi’s Predictive Policing System”, 2020 Conference on Fairness, Accountability, and Transparency (FAT*’20), Barcelona, Spain, January 2020, available at: https://www.vidushimarda.com/storage/app/media/uploaded-files/fat2020-final586.pdf

[v] Ministry of Electronics and Information Technology, “The Digital Personal Data Protection Act, 2023”, No. 22 of 2023, Government of India, available at: https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf

[vi] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias”, ProPublica, 23 May 2016, available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[vii] District Court of The Hague, “NJCM et al. v. The State of the Netherlands”, Judgment, 5 February 2020, C/09/550982/HA ZA 18-388

[viii] ProPublica, “How We Analyzed the COMPAS Recidivism Algorithm”, 23 May 2016, available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[ix] Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017)

[x] State v. Loomis, 2016 WI 68, Wisconsin Supreme Court, 13 July 2016

[xi] Vidushi Marda & Shivangi Narayan, “Data in New Delhi’s Predictive Policing System”, supra note 4

[xii] Common Cause and Centre for the Study of Developing Societies (CSDS), “Status of Policing in India Report 2019”, August 2019, available at: https://www.lokniti.org/otherstudies/status-of-policing-in-india-report-spir-2019-207

[xiii] “Special teams for probe, Delhi cops to use facial recognition”, The Times of India, 16 April 2022, available at: https://timesofindia.indiatimes.com/city/delhi/spl-teams-for-probe-cops-to-use-facial-recognition/articleshow/90886017.cms

[xiv] Vidhi Centre for Legal Policy, “The Use of Facial Recognition Technology for Policing in Delhi: An Empirical Study of Potential Religion-Based Discrimination”, 2021, available at: https://vidhilegalpolicy.in/research/the-use-of-facial-recognition-technology-for-policing-in-delhi/

[xv] Ibid

[xvi] “Delhi Police in RTI reply: 80% match in facial recognition is deemed positive ID”, The Indian Express, 16 August 2022, available at: https://indianexpress.com/article/cities/delhi/delhi-police-rti-reply-80-pc-match-facial-recognition-deemed-positive-id-8094324/

[xvii] Ibid

[xviii] Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India & Ors., (2017) 10 SCC 1

[xix] Ibid., para 180 (Per Chandrachud J.)

[xx] Ibid

[xxi] E.P. Royappa v. State of Tamil Nadu, (1974) 4 SCC 3, para 85

[xxii] Air India v. Nergesh Meerza, (1981) 4 SCC 335

[xxiii] Olga Tellis v. Bombay Municipal Corporation, (1985) 3 SCC 545

[xxiv] European Union, “Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence”, Official Journal of the European Union, L 1689, 12 July 2024

[xxv] Ibid., Articles 9-15

Author

  • Aadarsh Anand

    Aadarsh Anand is pursuing his LL.M. at The Energy and Resources Institute. His current research interests include energy & environment law. Beyond his professional pursuits, he enjoys photography.


The views expressed are personal and do not represent the views of Virtuosity Legal or its editors.
