Imagine asking an AI-powered financial assistant embedded in a popular fintech application whether you should invest in a particular mutual fund. The assistant recommends it confidently, in fluent Hindi or English, with the warmth of a trusted advisor. What the assistant does not tell you is that the mutual fund in question is offered by the same parent company that built and deployed the AI. You have just encountered one of the most structurally opaque problems in the modern digital economy: the conflict of interest embedded in AI-as-a-Service platforms. In India, where AI-powered consumer services are scaling at extraordinary speed, from lending apps and insurance aggregators to e-commerce recommendation engines and legal chatbots, this problem is not theoretical. It is happening now, and the law is only beginning to catch up.
This blog argues that AI platforms operating with undisclosed or structurally embedded conflicts of interest constitute unfair trade practices under the Consumer Protection Act, 2019, and that while India’s legal architecture offers meaningful entry points, significant gaps in enforcement, evidentiary standards, and AI-specific regulation remain unaddressed.
I. The Structural Conflict of Interest in AI Platforms
The conflict of interest in an AI platform is not always a matter of deliberate corporate malice. It is often architectural in nature. AI systems are trained on datasets curated by corporations, fine-tuned using reinforcement learning shaped by commercial priorities, and deployed as products within ecosystems where the platform has financial stakes in particular consumer outcomes. When Amazon’s AI recommends an Amazon Basics product over a competitor’s, or when a bank’s AI loan advisor steers a customer toward a higher-margin product, the AI is not being “biased” in a colloquial sense. It is performing exactly as its training and deployment incentives designed it to perform — that is, in the interest of the platform rather than the consumer.
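The point that the conflict is architectural rather than deceptive can be made concrete with a toy sketch. The scoring function below is entirely hypothetical (the product names, fields, and weights are invented for illustration), but it shows how a recommendation engine whose objective blends consumer fit with platform margin will favour the in-house product without any explicit "bias" anywhere in the code:

```python
# Hypothetical sketch: the commercial tilt lives in the objective the
# system optimises, not in any explicit deception. All names, fields,
# and weights below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Product:
    name: str
    fit_for_consumer: float   # 0..1, how well the product suits the user's need
    platform_margin: float    # 0..1, normalised revenue the platform earns

def rank(products, margin_weight=0.6):
    """Order products by a score blending consumer fit and platform margin."""
    score = lambda p: ((1 - margin_weight) * p.fit_for_consumer
                       + margin_weight * p.platform_margin)
    return sorted(products, key=score, reverse=True)

catalog = [
    Product("Third-party index fund", fit_for_consumer=0.9, platform_margin=0.1),
    Product("In-house mutual fund",   fit_for_consumer=0.6, platform_margin=0.9),
]

# The in-house fund scores 0.78 against the third-party fund's 0.42,
# so it ranks first despite fitting the consumer worse.
print(rank(catalog)[0].name)
```

Nothing in such a system announces itself as conflicted; the preference for the platform's own product is simply a property of the weights it was optimised under, which is precisely what makes the conflict hard for a consumer, or a regulator, to observe.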
In the Indian context, this is especially acute. Apps like Paytm, PhonePe, Groww, Zerodha, and various NBFC-linked lending platforms increasingly embed AI-assisted recommendation and advisory features. Many of these platforms are vertically integrated — they are simultaneously the AI service provider, the product distributor, and the entity that profits from the consumer’s decision. The AI, in this structure, is not a neutral intermediary. It is a commercially interested party dressed in the aesthetics of neutrality. Legal scholar Frank Pasquale’s concept of the “black box” — the opacity of algorithmic systems that make commercially motivated decisions while performing objectivity — maps precisely to this scenario.
II. The Consumer Protection Act, 2019: Entry Points for AI Conflicts
The Consumer Protection Act, 2019 (CPA 2019) replaced the older 1986 legislation with significantly expanded scope, and several of its provisions are directly applicable to conflicted AI services.
Section 2(47) defines “unfair trade practice” to include practices that falsely represent services, adopt deceptive methods to promote the sale of goods or services, and make materially misleading representations about the nature and quality of a service. An AI platform that presents its recommendations as neutral and consumer-oriented while structurally designing those recommendations to serve commercial interests falls squarely within this definition. The representation of objectivity is itself the misrepresentation.
Section 2(28) defines “misleading advertisement” broadly to include any advertisement that falsely describes a service, gives a false guarantee, or is likely to mislead the consumer about the nature or quality of the service. Courts and the Central Consumer Protection Authority (CCPA) have not yet applied this provision to AI-generated recommendations, but the textual basis for doing so exists. An AI assistant that presents a commercially motivated recommendation without disclosure is, in effect, running an undisclosed advertisement.
Section 2(46) defines “unfair contract” to include terms that are one-sided, impose unreasonable conditions on consumers, or exclude the liability of the service provider in a way that is prejudicial to consumers. The Terms of Service of virtually every major AI platform deployed in India, including Google Gemini, Meta AI, Microsoft Copilot, and domestic equivalents, contain blanket disclaimers that the AI should not be relied upon for professional advice and that the platform bears no liability for outputs. These disclaimers, when read alongside actively promoted use cases that invite exactly such reliance, arguably constitute unfair contract terms under the Act.
III. The CCPA and the Problem of Enforcement
The Central Consumer Protection Authority, established under Section 10 of the CPA 2019, has the power to investigate unfair trade practices, issue recalls, order discontinuation of practices, and impose penalties. In 2022, the CCPA issued guidelines on the prevention of misleading advertisements under Section 18, establishing that endorsements must be honest, reflect actual experience, and disclose material connections between the endorser and the product. While these guidelines targeted human influencers, their logical extension to AI-generated endorsements and recommendations is compelling.
However, the CCPA has yet to take action in a case directly involving AI-mediated conflicts of interest. This is partly because consumer harm from AI recommendations is difficult to isolate and attribute, and partly because the CCPA’s investigative infrastructure is not yet equipped for the technical complexity of AI audit. The authority lacks a specialised AI or algorithmic review mechanism, and the evidentiary burden of demonstrating that a particular AI output was causally shaped by a commercial conflict — rather than simply being a statistically likely recommendation — is significant.
The National Consumer Disputes Redressal Commission (NCDRC) similarly has not yet adjudicated an AI-specific conflict of interest case, though its broader jurisprudence on deficiency of service and unfair trade practices in digital financial services is relevant. In Bajaj Allianz General Insurance Co. Ltd. v. Rajan Govind Kalambkar (2025), the Delhi High Court affirmed that digital service providers owe a duty of transparency to consumers and that undisclosed conditions materially affecting consumer decisions constitute deficiency of service. The reasoning, though not AI-specific, is readily applicable.
IV. Dark Patterns and the IT Framework
India’s engagement with dark patterns — manipulative design practices that nudge consumers into decisions against their interest — gained formal recognition in 2023. The Department of Consumer Affairs issued the Guidelines for Prevention and Regulation of Dark Patterns, 2023, identifying thirteen specific dark patterns including “disguised advertisement,” “false urgency,” and “bait and switch.” These guidelines explicitly apply to e-commerce platforms and are enforceable under the CPA 2019.
AI-generated recommendations that are commercially motivated but presented as neutral advice constitute a form of “disguised advertisement” under these guidelines. When a lending app’s AI chatbot enthusiastically recommends a particular loan product without disclosing that the platform earns a distribution fee on that product, the recommendation functions as an undisclosed commercial communication. The 2023 guidelines create at least a preliminary basis for CCPA action in such cases.
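The compliance logic implied by the "disguised advertisement" category can be sketched as a simple rule: a recommendation backed by an undisclosed commercial relationship is treated as covert advertising. The field names and the rule itself are assumptions for illustration, not a statement of what the 2023 guidelines formally prescribe:

```python
# Illustrative sketch of a disguised-advertisement check, assuming a
# recommendation is represented as a dict with hypothetical fields
# recording (a) whether the platform earns a fee on the product and
# (b) whether that link was disclosed to the consumer.

def is_disguised_advertisement(recommendation: dict) -> bool:
    """Flag a recommendation that is commercially linked but undisclosed."""
    commercially_linked = recommendation.get("platform_earns_fee", False)
    disclosed = recommendation.get("disclosure_shown", False)
    return commercially_linked and not disclosed

rec = {
    "product": "Personal loan X",
    "platform_earns_fee": True,   # platform earns a distribution fee
    "disclosure_shown": False,    # consumer sees no disclosure
}
print(is_disguised_advertisement(rec))  # True: fee earned, nothing disclosed
```

The sketch also shows why disclosure, rather than the commercial link itself, is the legally salient variable: the same recommendation with the disclosure shown would fall outside the disguised-advertisement category.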
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 also bear on this discussion. Rule 4(4) requires significant social media intermediaries to endeavour to deploy automated tools to proactively identify certain categories of unlawful information. While these rules are primarily directed at content moderation, they reflect a regulatory expectation of algorithmic accountability that extends naturally to AI-powered consumer services.
V. The DPDP Act, 2023 and Consent Architecture
The Digital Personal Data Protection Act, 2023 (DPDP Act) introduces a consent-based framework for the processing of personal digital data. For AI platforms, this is directly relevant because the personalisation that enables AI conflict of interest is built on personal data. An AI recommendation engine that uses a consumer’s browsing history, financial behaviour, and demographic data to serve them a commercially motivated recommendation is processing personal data to produce a result that benefits the platform, not the consumer.
Section 6 of the DPDP Act requires that consent for data processing be free, specific, informed, unconditional, and unambiguous. A consumer who clicks “I agree” to a thirty-page Terms of Service that buries the disclosure of commercial recommendation logic has not given informed consent within the meaning of the Act. This creates an important intersection: where an AI platform processes personal data to generate conflicted recommendations, it may simultaneously violate both the CPA 2019’s unfair trade practice provisions and the DPDP Act’s consent requirements.
The Data Protection Board established under the DPDP Act is yet to become fully operational, but once functional, it could become a significant venue for complaints involving AI services that exploit consumer data for commercially conflicted purposes.
VI. What Indian Law Still Cannot Do
Despite these entry points, Indian consumer protection law faces three structural limitations in addressing AI platform conflicts of interest.
First, the causation problem. Proving that a specific AI recommendation caused a specific consumer harm requires technical evidence about the model’s training data, fine-tuning incentives, and deployment architecture. No Indian consumer forum currently has the capacity to compel or evaluate such disclosure.
Second, the aggregation problem. AI harm is diffuse. Millions of consumers may receive subtly biased recommendations, each suffering a modest financial loss, none individually crossing the threshold of forum intervention. Class action mechanisms under the CPA 2019 exist but are underutilised, and there is no developed case law on aggregate AI harm in India.
Third, the accountability gap. Most advanced AI models deployed in Indian consumer markets are built by foreign entities. Jurisdictional application of Indian consumer law to foreign AI providers whose services are accessed in India remains legally unsettled, despite the CPA 2019’s territorial reach and the IT Act’s provisions on extraterritorial jurisdiction.
Conclusion
The Consumer Protection Act, 2019, the CCPA’s dark patterns guidelines, the DPDP Act, and India’s broader digital regulation framework together constitute a meaningful — if incomplete — foundation for addressing conflicts of interest in AI platforms. What is missing is not primarily legislative text but interpretive courage, institutional capacity, and AI-specific regulatory infrastructure. Indian consumer law has the conceptual tools to recognise that a service which serves itself while performing service to the consumer is engaged in an unfair trade practice. The challenge now is building the enforcement architecture to make that recognition consequential. As AI platforms deepen their role as intermediaries in Indian consumers’ financial, medical, and legal lives, the cost of that gap will only grow.

