Abstract
Artificial intelligence (AI) is transforming India’s digital economy, yet it simultaneously generates unprecedented risks of fraud through deepfakes, phishing bots, and algorithmic scams. The existing cyber law framework under the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, 2023, and the Digital Personal Data Protection Act, 2023, provides partial remedies but fails to address liability attribution in AI-driven fraud. The absence of clarity on whether developers, deployers, or intermediaries bear responsibility exposes victims to remedial gaps while granting offenders opportunities to evade accountability. This article critically analyses the legal and doctrinal challenges in assigning liability for AI-enabled fraud in India. It evaluates comparative global frameworks, including the EU AI Act’s risk-based regulation and China’s deep synthesis rules, to propose a shared liability model embedded within “accountability by design.” By integrating principles of vicarious liability, explainable AI, and intermediary responsibility, the paper advances a preventive legal architecture that balances innovation with regulation. The study concludes that only a multi-stakeholder liability regime, supported by statutory amendments and regulatory innovation, can secure victims’ rights while fostering ethical AI deployment in India’s evolving digital landscape.
Keywords: Artificial Intelligence Fraud, Shared Liability, Cyber Law in India, Accountability by Design, Intermediary Liability
I. Introduction
1. Contextual Background
The rapid spread of artificial intelligence in India has created both efficiency and new threats. Fraudsters exploit deepfakes, phishing bots, and algorithmic scams to deceive individuals and corporations[i]. Identity theft has escalated as criminals use AI to harvest personal data from social media platforms, impersonating individuals for financial and reputational harm. The Information Technology Act, 2000 was designed to regulate cybercrime, but the speed of technological innovation outpaces its statutory provisions. Fraudulent AI-driven activities in e-commerce and banking systems highlight weaknesses in consumer protection mechanisms and reveal the absence of effective deterrents. Phishing chatbots and automated impersonation tactics have become sophisticated with natural language processing, making it hard for victims to detect deception[ii]. Financial regulators such as SEBI have raised concerns over AI-based algorithmic trading systems being misused for market manipulation and insider trading[iii]. Social media platforms have faced repeated scrutiny over the circulation of deepfake videos that distort public discourse and create social unrest[iv]. The risks of AI misuse are aggravated by the lack of unified legal definitions distinguishing conventional cybercrime from AI-enabled fraud under Indian law.
2. Problem Statement
Indian cyber law does not clearly define liability when fraud is caused by AI systems rather than human actors[v]. The attribution of criminal responsibility is complicated by the autonomous decision-making ability of AI agents, which lack human intent. Courts and regulators struggle to apply the traditional doctrines of mens rea and actus reus to non-human systems. Intermediaries often claim safe harbour under Section 79 of the Information Technology Act, 2000 even when their platforms enable fraud. Balancing innovation with accountability remains a pressing challenge. Overregulation risks stifling India’s growing AI sector, while under-regulation leaves victims of fraud without effective remedies. The Supreme Court in K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1, recognised privacy as a fundamental right, highlighting the risks of data misuse in an AI-driven economy. At the same time, the ruling in Shreya Singhal v. Union of India, (2015) 5 SCC 1, on intermediary liability demonstrated the tension between free speech and state regulation. These cases underscore the difficulty of balancing rights with responsibilities in the context of AI fraud.
II. Conceptual Foundations
1. Understanding AI Fraud
Algorithmic trading fraud involves the manipulation of securities markets using automated systems that execute trades in milliseconds, often misleading regulators and investors. Phishing chatbots employ natural language generation to impersonate banks or corporations, persuading victims to disclose sensitive data. Identity theft through AI-driven impersonation has emerged as a common fraud, with criminals using deepfake technology to bypass biometric verification systems in banks and payment apps. The U.S. Financial Crimes Enforcement Network (FinCEN) published an alert after banks filed suspicious activity reports describing schemes in which deepfake or generative-AI media and AI-generated identity documents were used to circumvent identity verification and open fraudulent accounts, which were then used as funnel accounts or to obtain loans[vi]. This official government alert confirms that the technique is already appearing in real financial-crime reporting. Unlike conventional cyber fraud, AI-enabled fraud involves minimal direct human intervention. The fraudster may design the AI system, but actual execution occurs autonomously, blurring the chain of accountability. Deepfake impersonation of political leaders has likewise been used to manipulate elections and public sentiment, creating national security concerns. In a recent instance during the Russia-Ukraine conflict, a manipulated video circulated on Ukrainian social media showing President Volodymyr Zelenskyy supposedly urging Ukrainian soldiers to surrender. It was quickly debunked, but the episode illustrated how deepfakes are used in geopolitical conflict to create confusion, fear, and demoralisation[vii]. The scale, speed, and anonymity of AI fraud distinguish it from earlier forms of cybercrime and demand distinct legal responses.
2. Legal Theory of Shared Liability
Vicarious liability principles under the Indian Penal Code and the Contract Act traditionally hold employers or principals responsible for acts committed by agents in the course of employment[viii]. Section 43 and Section 66D of the Information Technology Act, 2000 criminalise identity theft and cheating by impersonation but do not address autonomous AI systems explicitly[ix]. Courts have extended vicarious liability to corporate entities for employee misconduct, particularly under the doctrine of respondeat superior, where employers are held responsible for acts committed by employees in the course of employment. However, the application of this doctrine to AI systems remains unsettled. Unlike human employees, AI lacks legal personhood, intentionality, or contractual employment status, which complicates the direct transposition of vicarious liability principles. Questions arise as to whether AI should be treated as a mere tool, thereby making liability fall on the deploying organisation, or whether novel frameworks such as “electronic personhood” or strict liability regimes should govern. Jurisdictions have yet to provide consistent judicial guidance, leaving courts and regulators to grapple with whether harms caused by autonomous decision-making systems can or should trigger employer-style liability[x]. Tort law doctrines of negligence and strict liability provide useful analogies. Under product liability, manufacturers can be held accountable for defects in design or inadequate warnings. Similarly, AI developers could be held responsible for foreseeable misuse of their systems[xi]. Comparative insights from the European Union’s AI Act highlight a hybrid framework combining fault-based and strict liability rules to allocate responsibility among developers, deployers, and users.
3. Accountability by Design
Embedding accountability by design means integrating liability safeguards within the AI lifecycle from development to deployment. Developers can implement explainability standards, algorithmic audits, and bias detection as part of compliance obligations[xii]. Explainable AI (XAI) ensures that decisions made by AI systems can be traced, which is crucial for assigning liability in cases of fraud. International best practices such as the Organisation for Economic Co-operation and Development (OECD) AI Principles emphasise human-centric design and accountability mechanisms. The EU’s proposed rules on watermarking AI-generated content illustrate how regulation can prevent misuse of deepfakes[xiii]. India lacks similar statutory obligations, making accountability by design an urgent requirement.
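The traceability requirement described above can be made concrete in engineering terms. The following is a minimal illustrative sketch, not drawn from any statute or standard, of a hash-chained decision log that a deployer might maintain so that each AI decision, its inputs, and its stated rationale can later be produced and verified in evidence. All names here (`DecisionAuditLog`, `record`, `verify_chain`) are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    """Append-only log of AI decisions, hash-chained so that
    entries cannot be silently altered after the fact."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, model_version, inputs, decision, explanation):
        """Log one decision with a digest of its inputs and a link
        to the previous entry's hash."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_digest": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "decision": decision,
            "explanation": explanation,
            "prev_hash": self._prev_hash,
        }
        # Hash the entry body, then store that hash inside the entry.
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["entry_hash"] = self._prev_hash
        self.entries.append(entry)
        return entry

    def verify_chain(self):
        """Return True only if no entry has been modified and the
        chain of hashes is unbroken."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if body["prev_hash"] != prev:
                return False
            h = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if h != e["entry_hash"]:
                return False
            prev = h
        return True
```

A regulator or court could then ask a deployer to produce the log and demonstrate, via the chain check, that it has not been retrofitted after a dispute arose.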
III. Indian Legal And Regulatory Landscape
1. Information Technology Act, 2000 & Intermediary Guidelines
Section 43 penalises unauthorised access and data theft, while Section 66C criminalises identity theft using digital signatures or passwords. Section 66D specifically addresses cheating by impersonation using computer resources, which directly relates to phishing chatbots and AI fraud. Intermediaries, however, often escape liability due to Section 79’s safe harbour provisions, provided they comply with due diligence obligations under the IT Rules, 2021. The 2021 Intermediary Guidelines impose duties on social media platforms to identify originators of harmful content and take down unlawful material within 36 hours of notice. However, the rules face criticism for overreach and for the lack of technological feasibility in detecting sophisticated AI-generated frauds.
2. Bharatiya Nyaya Sanhita, 2023 (BNS)
The BNS, 2023 consolidates provisions on cyber offences including stalking, forgery, and cheating using digital platforms. Cyberstalking provisions explicitly include online harassment, which can extend to AI-enabled impersonation and automated harassment campaigns. Cheating and forgery provisions overlap with identity theft and deepfake manipulation, though the language still assumes human actors. The absence of AI-specific clauses leaves ambiguity on whether autonomous systems fall within the scope of criminal attribution.
3. Data Protection & Privacy Regime
The Digital Personal Data Protection Act, 2023 introduces obligations on data fiduciaries to ensure lawful processing and security of personal data. AI misuse of sensitive data for fraud places both developers and corporations under potential liability if safeguards are inadequate[xiv]. The Act empowers individuals to seek remedies for unauthorised data use, aligning with the Puttaswamy principles of informational privacy[xv]. Yet, enforcement mechanisms remain underdeveloped in comparison to the EU’s GDPR.
4. Judicial Precedents and Emerging Case Law
In Shreya Singhal v. Union of India, the Court struck down Section 66A of the IT Act but upheld intermediary responsibility under Section 79, highlighting due diligence obligations[xvi]. K.S. Puttaswamy v. Union of India recognised a constitutional right to privacy, making unlawful AI-driven data harvesting a violation of fundamental rights[xvii]. Emerging cases such as Mata v. Avianca in the United States, where lawyers were sanctioned for relying on fabricated AI-generated cases, illustrate growing judicial recognition of AI risks internationally[xviii].
IV. Challenges In Attribution Of Liability
1. Mens Rea and AI Autonomy
AI systems operate without human intention, raising the question of how mens rea can be attributed. Developers may not anticipate every misuse, while users may lack control over autonomous actions. This creates a liability gap where neither party fits traditional criminal law categories.
2. The “Black Box” Problem
AI decision-making often lacks transparency due to complex algorithms. Courts face difficulty in establishing causation when outcomes cannot be explained. This opacity undermines accountability and weakens evidentiary value in criminal prosecutions.
3. Jurisdictional Issues
AI frauds often cross borders, with servers located in one jurisdiction and victims in another. This complicates investigation and prosecution, as seen in global ransomware attacks. Mutual legal assistance treaties (MLATs) remain slow and ineffective in handling real-time cyber fraud.
4. Regulatory Gaps and Legal Voids
India lacks a dedicated AI liability regime, leaving regulators to stretch existing cyber and criminal laws to fit novel scenarios. Uncertainty prevails on whether AI should be classified as a product, an agent, or a quasi-person for liability purposes. Without statutory clarity, accountability in AI fraud remains fragmented.
V. Comparative And Global Perspectives
1. EU Artificial Intelligence Act
The European Union Artificial Intelligence Act represents the most ambitious attempt at regulating AI through a risk-based framework[xix]. It classifies AI systems into unacceptable, high, limited, and minimal risk categories, with obligations increasing in proportion to risk. High-risk systems, including those used in finance and law enforcement, must comply with requirements of transparency, robustness, and human oversight. Under Article 43 and Annex VI, read with Article 10, the Act imposes documentation duties and liability provisions on developers and deployers, demanding conformity assessments before deployment. Transparency obligations under Article 50 require disclosure when users interact with AI-generated content such as chatbots and deepfakes. The Act also introduces liability principles by extending product liability rules to AI systems. Developers may be held accountable for harm caused by algorithmic decisions if compliance duties are breached. The framework recognises the “black box” problem by mandating explainability for high-risk AI, ensuring that victims and courts can trace decision-making processes.[xx] The Act has implications for India, where the absence of such codified standards leaves victims vulnerable. Adoption of a risk-tiered model could assist Indian regulators in balancing innovation with accountability, particularly in fraud prevention contexts.
2. US Model
The United States continues to rely heavily on the safe harbour principle under Section 230 of the Communications Decency Act, 1996[xxi]. It shields online intermediaries from liability for third-party content, treating platforms as distributors rather than publishers. Courts have applied this immunity widely, creating significant protection for platforms even when AI tools amplify harmful or fraudulent content. Recent cases, including Gonzalez v. Google LLC, 598 U.S. (2023), reopened the debate on the scope of Section 230, though the Supreme Court refrained from restricting its broad shield[xxii]. The US debate centres on whether platforms deploying generative AI or recommender algorithms should continue to enjoy immunity or be treated as publishers. Critics argue that Section 230 enables platforms to evade responsibility for fraudulent or harmful AI outputs. The Federal Trade Commission has issued guidance suggesting greater scrutiny of AI practices under consumer protection laws, highlighting bias and deception risks[xxiii]. For India, the US model demonstrates both the benefits and the dangers of strong intermediary immunity, and underscores the importance of carefully defining the liability of platforms enabling AI fraud.
3. China’s Deep Synthesis Regulations
China has adopted an aggressive regulatory stance through its Provisions on the Administration of Deep Synthesis of Internet Information Services, 2022[xxiv]. These rules require providers of AI synthesis technologies, including deepfake and voice-mimic applications, to register with authorities. Strict obligations are placed on service providers to watermark synthetic content, authenticate users, and implement content moderation. Liability provisions make providers accountable if their services are misused for fraud or disinformation. The regime reflects China’s emphasis on state control of digital technologies and prioritises prevention of political and social disruption. Platforms are required to prevent the creation of content that undermines social order or national security, extending liability directly to AI developers and intermediaries. Although India’s democratic context differs, the Chinese approach demonstrates the utility of registration and mandatory watermarking for controlling AI-driven fraud. Were India to adopt similar requirements selectively, the risk of increased red tape becomes real: registration and oversight could add new bureaucratic layers, given that India’s regulatory oversight is already fragmented across multiple agencies. This would exacerbate the compliance costs that already burden MSMEs, which are estimated at ₹13–17 lakh annually under over 1,450 regulatory obligations[xxv]. In addition, inconsistent enforcement or corruption could weaken watermarking’s effectiveness, and extra registration could divert startup resources from R&D, potentially stifling innovation. Moreover, broad or vaguely defined mandates might impinge on free speech under Article 19(1)(a) of the Constitution. Judicial precedent such as Shreya Singhal v. Union of India (2015) has held that restrictions on expression must be reasonable, narrowly tailored, and not arbitrary, to avoid chilling effects.
Thus, while selective adoption has potential benefits, the trade-offs in regulatory burden, innovation cost, and speech rights must be carefully managed to avoid undermining India’s tech ecosystem.
4. International Human Rights Standards
The United Nations Office on Drugs and Crime has consistently emphasised the risks of AI misuse in organised crime, fraud, and cyber-enabled offences[xxvi]. Its reports highlight the need for states to establish liability regimes that account for the autonomy of AI while safeguarding due process. International human rights standards stress the principle of accountability, requiring AI systems to be transparent, explainable, and auditable. The OECD Principles on Artificial Intelligence, endorsed by the G20, also stress that AI systems must be human-centric, fair, and subject to oversight[xxvii]. Ethical frameworks emphasise proportionality in regulation, balancing technological advancement with human rights. These standards provide guidance for India in framing AI fraud liability regimes that respect constitutional rights while imposing obligations on developers and intermediaries.
VI. Accountability By Design In AI Fraud
1. Technical Safeguards as Legal Obligations
Explainable AI is emerging as a core safeguard, ensuring decisions made by AI can be understood and evaluated in courts[xxviii]. Watermarking of AI-generated content has been proposed in the EU to counter deepfake misuse, though evidence suggests watermarking alone may not prevent fraud. Algorithmic audits, conducted periodically by independent experts, can detect bias, vulnerabilities, and compliance failures. Embedding such safeguards as legal duties shifts responsibility onto developers and intermediaries to prevent fraud. For India, integrating these mechanisms into IT and data protection laws would embed liability within technological design.
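Watermarking obligations of the kind discussed above are, for images and audio, usually implemented at the signal level. The following is a deliberately simplified, metadata-level Python sketch, assuming a hypothetical provider-held secret key and hypothetical `watermark`/`verify` helpers, that shows only the underlying principle: a provenance tag can be cryptographically bound to content so that tampering with either the tag or the content is detectable.

```python
import hmac
import hashlib
import json

# Hypothetical key that a regulated provider would keep secret and
# rotate; verification by third parties would in practice use
# public-key signatures rather than a shared secret.
SECRET_KEY = b"provider-held signing key"

def watermark(content: str, provider_id: str) -> dict:
    """Attach a provenance tag (provider ID + synthetic flag) and an
    HMAC binding that tag to the exact content bytes."""
    tag = {"provider": provider_id, "synthetic": True}
    mac = hmac.new(
        SECRET_KEY,
        content.encode() + json.dumps(tag, sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    return {"content": content, "tag": tag, "mac": mac}

def verify(marked: dict) -> bool:
    """True only if the tag matches the content and was issued under
    the provider's key; any edit to either invalidates the mark."""
    expected = hmac.new(
        SECRET_KEY,
        marked["content"].encode()
        + json.dumps(marked["tag"], sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, marked["mac"])
```

The legal point the sketch illustrates is that a statutory watermarking duty is only meaningful if paired with a verification mechanism courts and platforms can actually run; a mark that cannot be checked assigns no accountability.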
2. Shared Liability Framework
AI fraud involves multiple stakeholders including developers, deployers, intermediaries, and end-users. A shared liability framework distributes accountability among these actors depending on their role. Developers may be held liable for foreseeable design risks, while deployers bear responsibility for misuse in operational contexts. Intermediaries, such as platforms, carry duties of due diligence under Section 79 of the IT Act, which could be expanded to include proactive AI fraud detection. Users may also face liability where fraudulent intent can be proven. Case studies of deepfake impersonation, chatbot fraud, and autonomous trading fraud demonstrate the need for multi-actor accountability[xxix].
3. Doctrinal Innovations
Indian law may draw from product liability doctrines, extending strict liability to AI developers for defects that cause fraud. However, fault-based liability remains relevant for intermediaries who fail to act upon knowledge of fraudulent use. A hybrid model combining strict and fault-based liability allows courts to balance fairness with deterrence[xxx]. Comparative examples from the EU and US illustrate that reliance on a single model may create enforcement gaps. Indian courts could develop doctrines that treat AI fraud as a collective responsibility where each actor bears proportionate liability.
4. Proposed Statutory Amendments
Amendments to the Information Technology Act, 2000 could introduce specific provisions addressing AI fraud, defining the obligations of developers, deployers, and intermediaries. The Bharatiya Nyaya Sanhita could be expanded to explicitly criminalise fraud conducted by autonomous AI systems, bridging the gap in mens rea attribution. The Consumer Protection Act, 2019 could incorporate AI-specific product liability provisions, enabling victims of AI fraud to claim compensation. Such reforms would clarify liability distribution and embed accountability within statutory frameworks[xxxi].
VII. Policy Recommendations
1. Legal Reforms
A dedicated AI liability statute is essential to resolve ambiguities and allocate responsibility across stakeholders. Such legislation should codify shared liability principles and define AI-specific fraud offences. This would harmonise Indian law with global standards while preserving constitutional guarantees.
2. Regulatory Oversight Mechanisms
Establishing an AI Fraud Prevention Authority could centralise oversight and coordinate with law enforcement. Regulatory sandboxes, similar to those in financial regulation, would allow controlled testing of AI systems for fraud resilience before deployment. These mechanisms ensure adaptability while maintaining accountability[xxxii].
3. Judicial and Administrative Measures
Judges and law enforcement require specialised training to handle AI-driven fraud cases. Enhancing forensic capacities, particularly in digital evidence authentication, is vital for effective prosecution. Administrative measures must focus on equipping regulators with technical expertise and resources.
4. Public-Private Collaboration
Collaboration between government, technology firms, and civil society can create multi-layered safeguards. Industry-led initiatives for responsible AI development, combined with consumer protection mechanisms, can strengthen the accountability ecosystem. Public awareness campaigns and grievance redressal mechanisms are also crucial in countering fraud.
VIII. Conclusion
Accountability by design must form the foundation of India’s approach to AI fraud. Shared liability offers a pragmatic model that balances innovation with responsibility. Harmonisation with global standards and embedding liability across the AI lifecycle will secure India’s digital future against fraud while upholding constitutional values.
Endnotes
[i] Dr. Saman Devgan, Cybercrime and Social Media Platforms: Legal Accountability in Indian Context, 4 Int’l J. L. Just. & Juris. 325 (2024).
[ii] Hifajatali Sayyed, Artificial Intelligence and Criminal Liability in India: Exploring Legal Implications and Challenges, 10 Cogent Soc. Sci. 2343195 (2024).
[iii] SEBI, Discussion Paper on Artificial Intelligence in Financial Markets (2023).
[iv] Meghna Bal & N.S. Nappinai, Crafting Liability Regime for AI Systems in India, Esya Centre (2024).
[v] Ayush Gupta, Framework for Addressing Liability and Accountability Challenges due to Artificial Intelligence Agents, 5 Indian J. Integrated Res. L. 910 (2025).
[vi] “FinCEN.Gov” (FinCEN.gov, November 13, 2024) https://www.fincen.gov/news/news-releases/fincen-issues-alert-fraud-schemes-involving-deepfake-media-targeting-financial
[vii] Wakefield J, “Deepfake Presidents Used in Russia-Ukraine War” (March 18, 2022) https://www.bbc.com/news/technology-60780142
[viii] Indian Contract Act, 1872, No. 9, Acts of Parliament, 1872 (India).
[ix] Information Technology Act, 2000, §§ 43, 66D, No. 21, Acts of Parliament, 2000 (India).
[x] Standard Chartered Bank v. Directorate of Enforcement, (2005) 4 SCC 530 (India).
[xi] Priyadarshi Nagda, Legal Liability and Accountability in AI Decision-Making: Challenges and Solutions, 11 Int’l J. Innovative Res. Tech. 1789 (2025).
[xii] OECD, Principles on Artificial Intelligence (2019).
[xiii] European Union, Artificial Intelligence Act, art. 52 (2024).
[xiv] Digital Personal Data Protection Act, 2023, No. 22, Acts of Parliament, 2023 (India).
[xv] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[xvi] Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).
[xvii] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[xviii] Mata v. Avianca, No. 22-cv-1461, 2023 WL 4114965 (S.D.N.Y. 2023).
[xix] European Union, Artificial Intelligence Act, COM/2021/206 final (2024).
[xx] “High-Level Summary of the AI Act | EU Artificial Intelligence Act” https://artificialintelligenceact.eu/high-level-summary/
[xxi] Communications Decency Act, 47 U.S.C. § 230 (1996) (US).
[xxii] Gonzalez v. Google LLC, 598 U.S. (2023).
[xxiii] Federal Trade Commission, AI and Consumer Protection: Guidance Report (2023).
[xxiv] Cyberspace Administration of China, Deep Synthesis Provisions, Order No. 1 (2022).
[xxv] TOI Business Desk, “MSMEs Burdened by High Compliance Costs; Face over 1,450 Regulations Annually: Report” The Times of India (June 29, 2025) https://timesofindia.indiatimes.com/business/india-business/msmes-burdened-by-high-compliance-costs-face-over-1450-regulations-annually-report/articleshow/122140844.cms
[xxvi] United Nations Office on Drugs and Crime, Artificial Intelligence and Criminal Justice Report (2021).
[xxvii] OECD, Principles on Artificial Intelligence (2019).
[xxviii] Priyadarshi Nagda, Legal Liability and Accountability in AI Decision-Making: Challenges and Solutions, 11 Int’l J. Innovative Res. Tech. 1789 (2025).
[xxix] Meghna Bal & N.S. Nappinai, Crafting Liability Regime for AI Systems in India, Esya Centre (2024).
[xxx] Ayush Gupta, Framework for Addressing Liability and Accountability Challenges due to Artificial Intelligence Agents, 5 Indian J. Integrated Res. L. 910 (2025).
[xxxi] Digital Personal Data Protection Act, 2023, No. 22, Acts of Parliament, 2023 (India).
[xxxii] SEBI, Discussion Paper on Artificial Intelligence in Financial Markets (2023).