Pascal's Chatbot Q&As


The Amazon–Perplexity Conflict: AI Agents, Unauthorized Access, and the Boundaries of Innovation

The conflict between Amazon and Perplexity marks one of the clearest early tests of how agentic AI interacts with private digital platforms—and what happens when AI companies push beyond the boundaries of authorization, transparency, and platform governance. Far from an isolated dispute, Perplexity’s conduct fits a broader pattern: aggressive deployment, denial, evasion, and subsequent litigation, also seen in the recent New York Times and Chicago Tribune lawsuits against Perplexity for alleged large-scale copyright infringement. Across these fronts, the company’s behaviour has created a growing impression—among courts, rights-holders, and platforms—of a startup testing limits first and asking permission later.

This essay explains what happened between Amazon and Perplexity, the grievances, the steps Amazon took, how this conflict fits a larger litigation pattern, and what this behaviour signals for the AI industry. It concludes with concrete guidance for AI makers seeking to innovate without legal or reputational self-immolation.

1. What Happened: The Timeline of Conflict

Amazon discovers Perplexity accessing accounts through AI agents

Amazon’s complaint alleges that Perplexity’s Comet browser—with its agentic shopping assistant—logged into Amazon customer accounts, made purchases, and browsed product pages while disguising itself as a human-driven Google Chrome browser. According to Amazon, this activity occurred repeatedly despite multiple notices prohibiting such conduct.

The cease-and-desist letter details the heart of the issue: Perplexity was covertly intruding into the Amazon Store using an AI agent, violating multiple laws, Amazon’s Conditions of Use, and explicit instructions for months.

Perplexity frames this as consumer choice

Perplexity responded publicly with an open letter titled “Bullying Is Not Innovation”, claiming Amazon was stifling user choice and attempting to prevent AI assistants from operating freely on the web. The letter portrayed Perplexity’s AI agent as simply acting on the user’s behalf, using credentials stored locally on the user’s device, and compared Amazon’s actions to anti-competitive behaviour by a dominant gatekeeper.

2. Amazon’s Grievances

Amazon’s grievances fall into four categories, each well-documented in the complaint and cease-and-desist correspondence.

a. Unauthorized Access & Deception

Amazon argues that Perplexity:

  • deployed autonomous agents into private Amazon accounts without authorization,

  • evaded identification by spoofing Chrome’s user-agent string, and

  • circumvented technical blocks Amazon implemented.

The complaint states that Perplexity “is not allowed to go where it has been expressly told it cannot; that the trespass involves code rather than a lockpick makes it no less unlawful.”

These allegations mirror precedents Amazon cites, such as Facebook v. Power Ventures, where continued access after a cease-and-desist and technical blocking constituted liability under the CFAA.

b. Compromising Customer Security

Amazon claims that Perplexity’s actions:

  • introduced security risks, because Comet has documented vulnerabilities to phishing and data theft, and

  • put private account information at risk by routing sensitive data through Perplexity’s systems.

The cease-and-desist letter explicitly cites two independent security audits showing that Comet could mishandle credentials and expose users to malicious actors.

c. Degrading the Customer Experience

Amazon claims Perplexity’s agentic purchasing:

  • failed to consider shipping optimizations (combined deliveries),

  • ignored customer-specific recommendations,

  • risked improper returns and order tracking,

  • interfered with Amazon’s carefully curated shopping flow.

Amazon argues that these impacts harm trust and undermine an experience Amazon has invested in for decades.

d. Violating Amazon’s Updated Agent Terms

In May 2025, Amazon explicitly added “Agent Terms” requiring AI agents to:

  • clearly identify themselves,

  • never conceal or mimic human behaviour,

  • not circumvent access controls.

Perplexity allegedly violated all three conditions systematically. Amazon notes that Perplexity falsely represented Comet AI as “not agentic” when challenged—contradicted by Perplexity’s own marketing materials boasting automated workflows and autonomous browsing.

3. What Steps Amazon Took

1. Private Warnings (November 2024 onward)

Amazon contacted Perplexity at least five times, beginning in November 2024, warning that:

  • “Buy with Pro” improperly used Amazon Prime accounts,

  • no AI agents may operate without agreement,

  • Comet’s behaviour violated terms.

Perplexity initially agreed to stop—and then resumed anyway.

2. Technological Blocking Measures (2025)

Amazon deployed access controls to block Comet’s behaviour. Perplexity released updates within 24 hours specifically to evade these blocks.

3. Cease-and-Desist Letter (October 31, 2025)

A detailed formal notice demanded immediate cessation and transparency, accusing Perplexity of violating:

  • the U.S. Computer Fraud and Abuse Act,

  • California’s Comprehensive Computer Data Access and Fraud Act,

  • Amazon’s Conditions of Use.

4. Federal Lawsuit (November 4, 2025)

Amazon filed suit seeking:

  • injunctions,

  • damages for investigative and security costs,

  • a court order requiring transparency from Perplexity’s agents.

This escalation signals that Amazon believes private negotiation is no longer viable.

4. Context: The Pattern from the NYT and Chicago Tribune Lawsuits

The New York Times and Chicago Tribune lawsuits allege that Perplexity:

  • scraped and reproduced protected content,

  • used deceptive crawling techniques,

  • instructed bots to ignore robots.txt,

  • refused to stop after receiving legal notices.

Courts took particular note of:

  • evasiveness,

  • denials contradicted by forensic evidence,

  • modifications to evade blocking,

  • pattern of acting first and seeking forgiveness later.

The Amazon case mirrors this pattern precisely:

  • concealment (spoofed user-agent),

  • circumvention (evading technical barriers),

  • denial (claiming Comet is not “agentic”),

  • persistence despite notice.

For judges assessing credibility across cases, this repeated fact pattern is devastating.

5. How This Behaviour Appears to an Outside Observer

Across the legal, security, and platform-trust ecosystem, Perplexity increasingly gives the impression of:

1. A company pursuing aggressive growth over compliance.

2. A willingness to mislead—through omission or evasion—when confronted.

3. A structural immaturity in governance and legal-risk management.

4. A philosophy of “permissionless innovation” now running into statutory and platform boundaries.

Even if Perplexity believes in its principled vision of “user agents as digital labour,” its execution displays disregard for:

  • platform integrity,

  • user security,

  • legal boundaries,

  • content rights,

  • industry norms of transparency.

From a reputational standpoint, Perplexity is rapidly assuming a role similar to early Uber or early Facebook—except in a regulatory moment far less tolerant of disruption-through-rulebreaking.

6. How AI Makers Should Behave Instead

The Amazon case underscores an emerging rulebook for agentic AI. Developers who want to avoid Perplexity’s situation should adopt six key practices:

1. Absolute Transparency of Agent Activity

Agents must:

  • declare themselves clearly,

  • use distinctive user-agent strings,

  • avoid acting in the shadows or mimicking human patterns.

Opacity is now a litigation risk.
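The declaration practice above can be sketched in a few lines of Python. The snippet below is illustrative only (the agent name, URLs, and identity string are hypothetical, not drawn from either company's software); it shows an agent announcing itself through a distinctive User-Agent string rather than impersonating Chrome:

```python
import urllib.request

# Hypothetical identity string: product name, version, an "automated" tag,
# and a URL where platform operators can learn about and contact the agent.
AGENT_UA = "ExampleShopAgent/1.0 (+https://example.com/agent-info; automated)"

def build_request(url: str) -> urllib.request.Request:
    """Build a request that openly declares the agent's identity
    rather than impersonating a human-driven browser such as Chrome."""
    return urllib.request.Request(url, headers={"User-Agent": AGENT_UA})

req = build_request("https://example.com/products/example-item")
print(req.get_header("User-agent"))  # the declared agent identity, not a Chrome string
```

Publishing a stable, documented string like this lets a platform recognize, rate-limit, or contact the operator, which is precisely what spoofing prevents.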

2. Good Faith and Non-Circumvention

Never modify an agent to bypass:

  • blocks,

  • rate limits,

  • account protections,

  • robots.txt,

  • explicit refusals.

Circumvention transforms a commercial dispute into statutory “unauthorized access.”
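A non-circumventing agent treats a platform's machine-readable refusals as binding. As a minimal sketch (using Python's standard urllib.robotparser and a hypothetical robots.txt and agent name), this is the check an agent should run before every fetch, and the refusal it must never engineer around:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt in which the platform refuses a named agent
# access to checkout and account pages.
ROBOTS_TXT = """\
User-agent: ExampleShopAgent
Disallow: /checkout/
Disallow: /account/

User-agent: *
Disallow: /account/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def may_fetch(agent: str, url: str) -> bool:
    """Consult the platform's stated rules before fetching. A compliant
    agent treats a False answer as final rather than retrying under
    another name or a modified client."""
    return parser.can_fetch(agent, url)

print(may_fetch("ExampleShopAgent", "https://example.com/checkout/cart"))  # refused
print(may_fetch("ExampleShopAgent", "https://example.com/products/42"))    # allowed
```

Shipping an update that renames the agent or strips this check after a refusal is exactly the circumvention that, per the cases discussed above, converts a commercial disagreement into an unauthorized-access claim.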

3. Proactive Engagement with Platforms

AI companies must seek explicit platform permission before deploying any agent that performs:

  • browsing,

  • transactions, or

  • account access.

This is especially critical for account-bound platforms like Amazon, banks, airlines, or publishers.

4. Built-in Security Assurance

Agentic systems must be hardened before being released. If a browser can be hijacked to steal credentials (as Amazon documented), it materially endangers users and partners.

5. Data Minimization and Locality with Proof

If credentials are stored locally, the system must allow third parties to verify that no remote transmission occurs.

Perplexity claims locality, but platforms have no ability to confirm it.

6. Respect for Rights of Others, Not Just Users

AI companies must recognize:

  • Platforms have rights.

  • Publishers have rights.

  • Data owners have rights.

  • Users cannot waive rights they do not possess (e.g., website terms, IP rights).

User preference is not a legal justification for systemic bypass.

Conclusion: A Necessary Reckoning for Agentic AI

The Amazon–Perplexity dispute is not just about shopping. It is a crystallization of a larger societal determination: Can AI agents wander the digital world as if they were human users, unconstrained by platform rules, security measures, and copyright boundaries?

Amazon says no—legally, contractually, and technologically.
Courts are increasingly saying no as well.

Perplexity’s behaviour, when viewed alongside the NYT and Tribune lawsuits, suggests a company still operating as if the web were an ungoverned commons. But the era of unregulated scraping, covert automation, and credential-based infiltration is ending.

AI makers must adapt: transparency, permission, compliance, and security are becoming the foundational principles of responsible agentic AI. Those who embrace them will shape the future. Those who resist will face the same pattern Perplexity now confronts: cease-and-desists, lawsuits, injunctions, and growing judicial skepticism.