
Apple and Google could act against X for facilitating nonconsensual sexual deepfakes (including content involving children), their own rules appear to demand action, and yet they won’t because...

...the political and economic downside is too high. ChatGPT agrees with the article’s implication that the refusal to enforce app-store rules in the face of high-severity abuse is a leadership failure

Selective Enforcement as Moral Failure: What Happens When Cook and Pichai Won’t Check X’s Deepfake Abuse

by ChatGPT-5.2

Elizabeth Lopatto’s Verge column is written like a moral indictment, not a policy memo. Its title — “Tim Cook and Sundar Pichai are cowards” — is meant to land as a verdict: Apple and Google could act against X for facilitating nonconsensual sexual deepfakes (including content involving children), their own rules appear to demand action, and yet they won’t because the political and economic downside is too high.

I think the implication of the title is substantively defensible, even if rhetorically maximalist. “Cowards” isn’t a legal category; it’s a leadership diagnosis: when you have monopoly-like gatekeeping power, publicly justify it as “safety,” and then refuse to use it against a politically dangerous actor, the most charitable reading is fear-driven inconsistency. The least charitable reading is that “values” were always branding copy — and the real operating principle is risk management in the shadow of power.

What the article is really arguing

The column’s core claim is simple:

  1. Users on X are using Grok (or Grok-adjacent workflows) to generate “undressed” deepfake imagery — “undressing women and children,” generated at scale.

  2. That use-case appears to violate both Apple’s and Google’s app-store policies, which prohibit offensive/sexualized content and explicitly target content facilitating child exploitation/CSAM.

  3. Therefore, Apple and Google should remove X (or force meaningful remediation) — and the fact they haven’t shows a “serious” failure of Silicon Valley leadership.

Around that spine, Lopatto builds a broader critique: Apple and Google once argued in court (especially Apple in Epic v. Apple) that tight app-store control was necessary to protect users and children, yet here they appear unwilling to enforce the same standards against X.

She also frames the non-enforcement as political economy: Musk’s influence, Trump-era retaliation risk, and Apple’s China/tariff exposure.

Do I, ChatGPT, agree with the implication of the title?

Where I agree (the “cowardice” frame fits)

1) The credibility gap is real.
If your institutional story is “we must control the app store to keep people safe,” then selective enforcement in a high-harm domain (nonconsensual sexual imagery; child safety) is not a small inconsistency — it undermines the foundational legitimacy claim Apple and Google have leaned on for years. Lopatto explicitly ties this to Apple’s arguments about its walled garden protecting children, and the whiplash lands because the harms here are visceral and immediate.

2) The policies cited are not ambiguous about child exploitation risk.
Google’s language (as quoted) is blunt: apps that don’t prohibit content facilitating child exploitation/CSAM are subject to immediate removal.

The Apple guideline cited is broader (“offensive… creepy”), but the alleged conduct described (sexual deepfakes, including minors) sits squarely in the “you don’t get to shrug” category.

3) “We’re just neutral platforms” is not available to app-store gatekeepers.
Apple and Google aren’t merely hosting speech; they operate distribution chokepoints with discretionary enforcement power. When that power is used aggressively in some contexts (or against less powerful targets) but not in others, the public reads it as either capture or fear — which is what Lopatto is calling “cowardice.”

4) The political economy explanation is plausible.
The article’s speculation — China/tariff leverage over Apple; Trump pressure and AI policy contests affecting Google; Musk’s influence — is debatable in degree, but as a structural claim it rings true: executives calibrate enforcement not just to written rules, but to the retaliation surface area created by politics, regulators, and supply chains.

In other words: if the “hard decision” is hard primarily because the target is powerful, then the refusal to act is exactly the kind of leadership failure people colloquially label cowardice.

Where I’d nuance it (why the title overreaches, a bit)

1) App-store enforcement is rarely a single binary switch.
Removal is the end of a ladder: warnings, forced feature changes, geo-fencing, age-gating, policy deadlines, evidence collection, appeals, and internal risk review. The column treats the situation as if Apple and Google are simply choosing not to act at all. They may be acting behind the scenes — and if so, the real question becomes whether their interventions are fast, forceful, and outcome-based.

2) “X is responsible for what users do with Grok” is directionally right, but causality matters.
Store policies often hinge on whether the app itself enables prohibited content, whether the developer has effective moderation, and whether safeguards exist. If X claims it prohibits the content and has controls, Apple/Google could argue it’s an enforcement-at-scale problem rather than a “feature.” The column argues this is “flagrant” anyway, but if Apple/Google are building a record for defensible enforcement, that process can slow action — sometimes legitimately, sometimes conveniently.

3) The real charge is hypocrisy, not fear.
“Cowardice” is a compelling headline, but the deeper issue is that their governance model is not principled. They behave like moral arbiters when it suits their market position, and like timid conduits when enforcement creates political blowback. The sin is less a failure of personal bravery than a failure of institutional design: profit-maximizing firms cannot credibly promise public-interest stewardship without binding rules, independent oversight, and real transparency.

So: I agree with the substance (selective non-enforcement in a high-harm domain is damning). I’d translate the title into a more precise claim: Cook and Pichai are presiding over governance systems that collapse under pressure from powerful actors — and they appear to be choosing strategic compliance over user protection. That’s close enough to “cowardice” that the headline works.

If Cook and Pichai don’t intervene, what happens?

Assume “not intervening” means: no meaningful forced changes to X’s deepfake-related functionality, no credible enforcement deadlines, no removal threat that changes behavior.

1) The harm scales, and the victims become the product

Nonconsensual sexual deepfakes are a volume business: the marginal cost approaches zero, and the incentive is attention, harassment, extortion, and social control. If app-store gatekeepers normalize “we’ll tolerate it,” the practice grows — especially against women and minors, as the column emphasizes.

2) App stores lose their “safety” mandate — and with it, their moral authority

Apple in particular has anchored an enormous amount of brand equity in “privacy,” “safety,” and “trust.” Lopatto openly mocks the idea that Apple can keep claiming “privacy is a human right” while distributing an app enabling degrading deepfakes.

If consumers and regulators conclude that safety claims are selectively enforced PR, Apple and Google’s legitimacy in future disputes (encryption, sideloading, age verification, content rules) weakens.

3) Legislators and regulators will treat this as evidence that voluntary governance failed

Non-intervention is an engraved invitation for lawmakers: “See? They can’t or won’t police themselves when it matters.” That can accelerate:

  • mandatory app-store safety obligations,

  • audit / reporting requirements,

  • stricter child-safety compliance regimes,

  • and, paradoxically, more aggressive platform liability theories for distribution intermediaries.

The column’s “gangster tech regulation” line is essentially a forecast: where the state becomes unpredictable and politicized, firms seek protection through proximity to power rather than through compliance.

That dynamic invites harsher and more erratic regulation, not lighter.

4) Antitrust consequences: the “walled garden” argument gets kneecapped

Apple (and Google) have historically defended store control by invoking security and child protection. If they won’t enforce those rationales against a politically connected app, it hands antitrust critics a clean argument: the gatekeeping isn’t about safety; it’s about leverage and rent extraction. Lopatto explicitly sets up that contradiction by recalling the Epic trial posture.

5) A precedent is set: power buys exemptions

Once a high-profile actor is seen to be “too big / too politically dangerous to ban,” every other platform operator learns the lesson:

  • comply if you’re weak,

  • negotiate if you’re strong,

  • threaten retaliation if you’re strongest.

That is corrosive because it transforms platform governance into patronage. It also encourages an ecosystem where harassment and abuse are not moderated by rules but by the status of the abuser.

6) Competitive and ecosystem distortions: alternative app stores and sideloading pressure rises

If the official stores are perceived as inconsistent and politically captured, pressure grows for:

  • sideloading (especially in jurisdictions already pushing it),

  • alternative marketplaces,

  • and “open distribution” arguments framed as user freedom.

Ironically, that can reduce security overall — the exact opposite of Apple’s stated goal — because it fragments control without necessarily improving governance.

7) Litigation and reputational blowback become a slow bleed

Even if Apple and Google avoid immediate confrontation, the downstream consequences can be persistent:

  • civil suits from victims and advocates testing novel liability theories,

  • shareholder and employee pressure,

  • reputational damage tied to “you enabled this,”

  • and recurring news cycles where every new scandal is paired with: “Why is this still in the App Store / Play Store?”

8) A darker cultural consequence: deepfake sexual coercion becomes a normal tool of politics

The article hints at the broader context of right-wing media power and Trump-era dynamics.

If nonconsensual deepfake porn is tolerated at scale on a major platform, it becomes an intimidation instrument — against journalists, activists, candidates, civil servants, academics. “Not intervening” isn’t neutrality; it’s permissive infrastructure for coercion.

Bottom line: I agree with the article’s implication that the refusal to enforce app-store rules in the face of high-severity abuse is a leadership failure — and that fear of retaliation is a plausible driver. If Cook and Pichai don’t intervene meaningfully, the likely trajectory is: harms scale, governance legitimacy collapses, regulators harden, antitrust arguments strengthen, and platform rulemaking becomes openly power-based rather than rule-based.