Meta’s Inaction Engineered: Profiting from Propaganda and Scams through Dormant Facebook Networks

by ChatGPT-4o

Introduction

In The Dormant Danger: How Meta Ignores Large-Scale Inauthentic Behavior Networks of Malicious Advertisers, Reset Tech uncovers a systemic failure—and perhaps willful neglect—by Meta to address sprawling networks of dormant, inauthentic Facebook pages used for malicious advertising. These pages fuel both political propaganda, such as Russia’s pro-Kremlin Doppelganger campaign, and large-scale scam operations targeting European consumers. Despite clear patterns, prior exposure, and regulatory obligations under the EU’s Digital Services Act (DSA), Meta continues to profit from and allow the persistence of these networks.

Most Surprising, Controversial, and Valuable Findings

🔥 Surprising

  1. 3.8 million dormant Facebook pages were mapped across seven networks—yet Meta failed to de-platform them despite obvious username patterns and shared visual identities.

  2. Meta earned up to $674,923 from 6,000 political ads targeting 57.9 million EU users in the Doppelganger campaign alone—excluding earnings from scam ads.

  3. Some pages ran pornographic or false medical ads violating Meta’s explicit content policies, while similar organic content would be banned—showcasing a clear double standard.

⚠️ Controversial

  1. Meta’s lack of budget transparency for commercial scam campaigns effectively shields its revenue stream from regulatory and public scrutiny.

  2. Meta’s own quarterly threat reports from 2022 acknowledge the presence of such networks (e.g., "Botiful"), yet no meaningful action was taken, even two years later.

  3. Meta’s ad moderation is superficial—targeting individual ads but not deactivating the advertiser pages or dismantling the wider networks.

💡 Valuable

  1. The report provides a methodology for mapping dormant networks using simple keyword pattern recognition, offering a clear path for regulators and civil society to replicate.

  2. It demonstrates how malicious actors rotate pages across campaigns, suggesting a need for network-level—not ad-level—interventions.

  3. It highlights a gaping hole in DSA compliance: Meta is violating Article 34 by failing to mitigate systemic risks linked to automated, coordinated inauthentic behavior.

Analysis & Perspective

Meta’s behavior, as detailed in this report, suggests a troubling prioritization of ad revenue over civic integrity and consumer protection. The company’s advertising infrastructure is effectively an inauthentic behavior monetization machine, powered by dormant assets that are activated on demand for disinformation and fraud.

Rather than taking meaningful steps to dismantle these pre-fabricated ad networks, Meta has:

  • Focused narrowly on ad-level moderation while leaving the networks themselves intact.

  • Allowed deepfake, scam, and adult content ads to flourish under lax ad approval policies.

  • Turned a blind eye to coordinated political propaganda, even when the source is sanctioned (e.g., Russia’s Social Design Agency).

The company’s conduct contradicts its public commitments to combating misinformation and violates not only its own Community Standards but also the DSA’s requirement for mitigating systemic risks related to coordinated deception.

What makes this more egregious is that detection is not technically difficult. Reset Tech outlines a straightforward, replicable detection method based on username patterns, common visual branding, and campaign overlaps. That Meta has not implemented such systems raises serious questions about its intent.
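To illustrate why detection is not technically difficult, the username-pattern approach the report describes can be sketched in a few lines. The sample usernames, the stem-extraction rule, and the cluster threshold below are all hypothetical assumptions for illustration, not Reset Tech's actual data or code:

```python
import re
from collections import defaultdict

# Hypothetical sample of page usernames; in practice these would be
# collected from Meta's Ad Library or page listings.
usernames = [
    "newsdaily1023", "newsdaily4471", "newsdaily9902",
    "shopdeals77", "shopdeals81", "shopdeals95",
    "maria.lopez.real",  # an ordinary, non-patterned username
]

def base_stem(username: str) -> str:
    """Strip digits to expose a shared naming stem, e.g. 'newsdaily1023' -> 'newsdaily'."""
    return re.sub(r"\d+", "", username)

def cluster_by_stem(names, min_size=3):
    """Group usernames by shared stem; keep only clusters of min_size or more,
    which are candidates for coordinated, auto-generated page networks."""
    clusters = defaultdict(list)
    for name in names:
        clusters[base_stem(name)].append(name)
    return {stem: members for stem, members in clusters.items()
            if len(members) >= min_size}

suspicious = cluster_by_stem(usernames)
for stem, members in suspicious.items():
    print(f"Possible inauthentic cluster '{stem}': {members}")
```

A real pipeline would add the report's other signals (shared visual branding, overlapping ad campaigns) as further grouping keys, but even this single heuristic surfaces the kind of obvious username patterns Meta has left unaddressed.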

Lessons Learned

For Meta

  • Platform-scale solutions must replace superficial moderation. Targeting individual ads without dismantling the network is akin to whack-a-mole.

  • Transparency around ad budgets and advertiser identities is not optional under the DSA and should be urgently prioritized.

  • Proactive detection and de-platforming of suspicious networks should be built into automated systems.

For Regulators

  • Audit Meta’s systemic risk assessment practices under Article 34 of the DSA.

  • Impose penalties for ongoing violations, especially in light of the documented evidence that Meta profits from unlawful activity.

  • Mandate external verification of political ad disclosures, as Meta’s current reliance on voluntary self-identification is clearly ineffective.

For Civil Society & Researchers

  • Use OSINT methods like those pioneered by Reset Tech to continue exposing large-scale deception.

  • Pressure public institutions to hold platforms accountable for the social and financial damage caused by these scams and propaganda efforts.

Conclusion

Reset Tech’s report is a damning indictment of Meta’s advertising infrastructure—a system seemingly engineered for plausible deniability while profiting from digital disinformation and fraud. With millions of dormant assets acting as burner accounts for malicious advertisers, Meta has become a central node in what the report rightly describes as a “business of manipulation.”

Unless regulators, technologists, and society apply coordinated pressure, the networks will continue to grow—and with them, the social, political, and economic harms they produce.