
Generative AI vs. Ad Bots: India’s Legal Blind Spot

written by Guest Author


The digital advertising ecosystem today uses sophisticated data analytics and AI to target consumers far more precisely than traditional one-way media. Platforms like Google and social networks collect demographics and behavior to tailor ads and measure ROI. However, this precision comes at a cost – automated “bots” now generate about half of all internet traffic. Many bots are benign (search crawlers, site monitors, chatbots), but a rising class of malicious bots mimics real users to click ads, inflate metrics or scrape data. This double-edged advance means advertisers must grapple with fraudulent traffic even as they exploit AI for personalization.

Media reports on Uber’s ad-fraud discovery noted that suspending ≈$100M of ad spend caused no drop in installs, exposing “fraud, fake apps, phantom clicks and bots” in its campaigns. The incident led to legal action.

Bots are typically categorized as “good” (helpful) or “bad” (malicious). Good bots (Googlebot, chatbots, monitoring agents) follow site rules to improve services. Bad bots, by contrast, violate rules: spam bots flood comment streams, scalper bots hoard inventory, fake social bots generate fraudulent engagement, and ad-fraud bots click ads or fill forms to waste ad budgets for illicit profit. Industry data illustrate the scale: Imperva reports that bots were responsible for nearly 50% of all internet traffic in 2024, and a global ad-fraud study projects $41.4 billion in losses by 2025. Generative AI has further amplified these threats: impersonation tools and GPT-based scrapers spawn convincing fake sites and profiles. One report warns of an “avalanche of AI-led fake traffic,” noting that even non-malicious crawlers (GPTBot, ClaudeBot, etc.) can distort campaigns.

The True Cost of Bot-Driven Fraud

Bots distort nearly every advertising metric. Advertisers pay for impressions and clicks, but fake views generate no real engagement or sales. Industry analysts estimate that as much as 22% of online ad spend ($84B) was stolen by fraud in 2023. This waste is often invisible until conversions are tallied. In one high-profile case, Uber discovered that halting two-thirds of its $150M ad budget did not reduce installs. Investigations uncovered “evidence of fraud, fake apps, phantom clicks and bots”: essentially, machines clicking ads without any human customer. Uber even sued its agency partner (Fetch Media) over the click fraud. Global losses are staggering; Juniper Research predicts ad-fraud costs will rise from $84B in 2023 to $170B by 2028.

In India, the impact is similarly severe. Analysts report that around 20–30% of ad budgets may be wasted on invalid traffic during peak seasons. TrafficGuard and other auditors noted spikes of up to 126% in “invalid traffic” around events like the FIFA World Cup 2023, driven by bots faking video views or clicks. Techniques include ad stacking (multiple ads layered invisibly), click spamming, and fake lead forms filled by bots. One India-based campaigner recounts an OTT platform that “crashed due to 100% bot traffic” with a 52% click-through rate, a clear sign the audience was synthetic. Such phantom engagement creates a vicious illusion of success: advertisers see large view and click numbers, pay for them, and assume their strategy is working, until sales data reveal the truth.

Indian Legal/Regulatory Context

Indian law currently lacks a specific framework addressing digital advertising fraud or bot-generated traffic. Existing instruments, chiefly the Information Technology Act 2000, the Consumer Protection Act 2019, and the Advertising Standards Council of India (ASCI) Code, focus on content regulation, not on the authenticity of ad delivery or audience metrics.

The IT Act 2000 and the Intermediary Guidelines 2021 require due diligence against unlawful content and impose limited liability on intermediaries for hosting illegal material. They do not extend to fraudulent traffic, metric inflation, or automated manipulation. The Consumer Protection Act 2019 prohibits misleading advertisements and unfair trade practices, yet its focus is the truthfulness of representations, not the accuracy of audience measurement. While an advertiser may attempt to sue an agency for “deficiency of service,” Indian courts have little jurisprudence on digital fraud attribution.

The ASCI Code ensures ethical advertising through truthful representation and prohibits impersonation. Its 2024 deepfake advisories addressed fabricated celebrity likenesses but excluded traffic authenticity. The DPDP Act 2023 mandates consent and purpose limitation in data use, indirectly promoting accountability, though ad-fraud detection and verification remain unregulated.

Contrast this with global moves toward digital ad accountability. In the EU, the Digital Services Act (2024) forces platforms to provide transparency on how ads are targeted and measured. Proposed EU and U.S. policies (e.g. requiring labeling of AI-generated content) may eventually touch advertising. India’s new Digital Personal Data Protection Act (2023), once in force, will tighten consent rules for using personal data online. It even bans targeted ads to minors. While aimed at privacy, these rules imply that any AI-based profiling (whether for targeting ads or detecting bots) must respect user consent. This broad data regime is still unfolding, but it underscores that Indian regulators may soon turn attention toward algorithmic fairness and fraud in digital markets.

The authors argue that two key doctrinal issues persist: first, the attribution of liability among advertisers, platforms, and intermediaries purchasing traffic; and second, evidentiary constraints, as establishing automated clicks necessitates complex digital forensics. Absent statutory definitions or standards for “invalid traffic,” enforcement under current Indian evidentiary law remains indeterminate.

Generative AI as a Defense Against Bot Fraud

Just as fraudsters weaponize AI, advertisers are now turning AI back on the problem. Advanced machine learning (ML) models, including generative techniques, can flag anomalies and filter out bot signals in real time. In essence, the AI that once amplified fraud can be retrained to detect it. For example, Google’s Ad Traffic Quality team now uses large language models (LLMs) to analyze page content, placements and user interactions for signs of invalid traffic. Google reports that these LLM-powered defenses have cut invalid traffic (IVT) by roughly 40% on its platforms. In other words, neural networks are now flagging pages or patterns that typically coincide with bot clicks.

More generally, industry literature describes three AI-driven tactics for combating ad fraud: (1) Anomaly detection – ML systems continuously monitor traffic for unusual spikes or inconsistent behavior and flag them; (2) Adaptive learning – models retrain on fresh data so they can catch new fraud methods as they emerge; (3) Accuracy improvements – by learning features of real versus fake users (mouse movements, click timing, device fingerprints), AI can distinguish bots with high precision. In practice, some ad-tech firms have developed “fraud-adjusted” analytics dashboards that omit suspicious clicks. Decimal Point’s AI, for instance, examines entropy in click timing and navigation patterns to “provide more accurate ROI” by excluding likely bots.
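
To make tactic (1) concrete, the following is a minimal, hypothetical Python sketch of anomaly detection over click-timing entropy. The feature set, the simulated sessions, and the use of scikit-learn’s IsolationForest are illustrative assumptions for this post, not any vendor’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def session_features(click_times, pages):
    """Summarise one session: click-timing entropy, click count, page variety."""
    gaps = np.diff(np.sort(click_times))
    # Scripted bots tend to click at near-constant intervals, so the entropy
    # of the gap distribution is low; human gaps are irregular.
    hist, _ = np.histogram(gaps, bins=10)
    probs = hist[hist > 0] / hist.sum()
    entropy = float(-(probs * np.log2(probs)).sum())
    return [entropy, len(click_times), len(set(pages))]

# Toy data: human-like sessions (irregular gaps, varied pages) vs. one scripted bot.
rng = np.random.default_rng(0)
humans = [session_features(np.cumsum(rng.exponential(5.0, 20)),
                           rng.integers(0, 15, 20).tolist())
          for _ in range(200)]
bot = session_features(np.cumsum(np.full(20, 0.5)), [1] * 20)

# Train only on (presumed) genuine traffic, then test whether the bot stands out.
model = IsolationForest(contamination=0.05, random_state=0).fit(humans)
print("bot flagged as anomaly:", model.predict([bot])[0] == -1)
```

Real deployments would feed in far richer signals (device fingerprints, mouse telemetry, referrer graphs) and retrain continuously, which is exactly what tactic (2), adaptive learning, describes.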

In the generative AI realm, one can envision several approaches: for instance, using Generative Adversarial Networks (GANs) to simulate both benign and malicious traffic and train detectors, or fine-tuning LLMs on URL/content features to spot “cloaked” ad pages. Even existing LLM chatbots can assist analysts by summarizing suspicious logs or suggesting rule patterns. The core idea is that generative models, by learning the complex distribution of genuine user behavior, can highlight when sessions fall outside that distribution – i.e. are likely bot-driven. Empirical evidence is beginning to accumulate: Google’s public statements about LLMs, and reports from ad-security firms, indicate that AI-based filtering is cutting fraud significantly. Indeed, in a few high-profile tools, AI-driven detection already stops billions of fake app-install transactions daily.
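
As a simplified illustration of that distribution-based idea, the sketch below fits a plain multivariate Gaussian to simulated “genuine” session features and flags sessions that fall far outside it. A production system would learn a much richer distribution with a GAN or autoencoder; every feature, number, and threshold here is an assumption made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed per-session features: [mean seconds between clicks, pages per minute,
# fraction of clicks landing on ads]. "Genuine" behaviour is simulated here.
genuine = np.column_stack([
    rng.normal(6.0, 2.0, 1000),   # humans pause between clicks
    rng.normal(2.0, 0.7, 1000),   # modest browsing pace
    rng.beta(2, 20, 1000),        # ads are a small share of clicks
])

# Stand-in for a learned generative model: a multivariate Gaussian density.
mu = genuine.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(genuine, rowvar=False))

def distance(x):
    """Mahalanobis distance from the modelled 'genuine' distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Flag anything farther from the model than 99.5% of the genuine traffic.
threshold = np.percentile([distance(x) for x in genuine], 99.5)

bot_session = np.array([0.4, 25.0, 0.9])   # rapid clicks, almost all on ads
print("bot session flagged:", distance(bot_session) > threshold)      # True
print("typical session flagged:", distance(genuine[0]) > threshold)   # likely False
```

The legal significance is the same regardless of the model family: detection rests on probabilistic inference over behavioural data, which raises the attribution and evidentiary questions discussed above.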

Unresolved Challenges and the Way Forward

Even AI has limits. Fraudsters keep changing tactics, and AI models can be fooled or become outdated. Over-reliance on historical data may miss truly novel scams, and ML outputs often lack transparency, a regulatory concern under new AI governance norms. Privacy is another issue: filtering bots may involve profiling users or scanning content, triggering data-protection rules. Global AI regulations (such as the EU AI Act) will soon require explainability and risk mitigation for high-impact systems. In India, the ASCI code already insists that any use of AI (say, to generate a celebrity’s likeness) must have written permission. By analogy, deploying AI to track user behavior might be seen as “profiling,” requiring clear user notice or a contractual basis under the DPDP Act.

In the absence of specific statutes, India is likely to follow global trends: imposing transparency duties on digital ad intermediaries and insisting on robust fraud-countering mechanisms. Indeed, even platforms like Google are self-regulating: in India in 2024, Google suspended 2.9 million advertising accounts and removed 247.4 million ads for policy violations, citing AI-fueled scams like “public figure impersonation”. Google credits improved LLM-based review for suspending 700,000 accounts and cutting scam-ad reports by 90%. This shows how seriously the tech industry now treats AI-driven fraud, but it also highlights the gap in formal law: these are corporate policies, not consumer rights enforceable in court.

Authors

  • Isharth Kumar

    Isharth Kumar is a fourth-year B.Sc. LL.B. (Hons.) student at the National Law Institute University, Bhopal. His primary areas of interest include Intellectual Property Rights, Data Protection, and Media Law.

  • Priyanshu Kasliwal

    Priyanshu Kasliwal is a fourth-year B.Sc. LL.B. (Hons.) student at the National Law Institute University, Bhopal. His academic interests lie in Constitutional Law, the Digital Personal Data Protection regime, and Media Law.

