

The Great Substitution

How Big Tech Disincentivised Human Creators, Then Built AI to Replace Them

A research report on platform economics, AI slop, creator fatigue, and the industrial replacement of human culture

Compiled May 2026, by Claude

Warning, LLMs may hallucinate!

Based on public reporting, court testimony, peer-reviewed research, and industry data

Executive Summary

This report investigates the hypothesis that Big Tech’s push into generative AI is, at its core, a self-interested response to two converging problems on its own platforms — a structural decline in voluntary human contribution, and the cost of paying the human creators who remained. The question is whether AI is being deployed less as a productivity miracle for humanity and more as an industrial substitute for a creator economy that the platforms themselves had already broken.

After reviewing court testimony from Mark Zuckerberg, Meta’s own internal documents disclosed at the FTC antitrust trial, peer-reviewed research on social media fatigue and model collapse, the academic literature on the Dead Internet Theory, more than seventy active copyright lawsuits filed by creators against AI companies, and primary reporting on AI slop on Facebook and Instagram, the evidence supports the hypothesis on every major point. It is no longer speculative. It is documented.

The condensed findings are these. Human posting on the major platforms has been falling for years — Zuckerberg himself testified under oath in April 2025 that “friending and friend sharing are losing steam” and that the share of friend-posted content on Facebook fell from 22% in 2023 to 17% in 2025, and on Instagram from 11% to 7%. Time spent on social media globally peaked in 2022 and has fallen roughly 10% since. Engagement rates on Facebook and X have collapsed to around 0.15% per post. Roughly half of full-time creators earn less than $15,000 a year, and 90% report burnout. Into this vacuum, Meta has explicitly announced a “third era” of social media in which feeds will be filled with AI-generated and AI-remixed content; users have already produced 20 billion AI images inside Meta’s Vibes app. Meanwhile, the same companies are being sued in more than seventy active cases for scraping creator work without consent or payment to train the very models that will replace those creators.

The pattern is coherent enough to name. Platforms first attracted humans, then squeezed humans, then — having driven down both contribution rates and per-creator pay — began replacing humans with synthetic content the platforms do not have to compensate. The output is what one could describe as “TV dinner” culture: predictable, prefabricated, statistically average, optimised for engagement and for cost. The cultural cost — to originality, to public discourse, to the integrity of training data itself — is mounting and, on current trajectory, accelerating.

1. The Original Question, Restated

The hypothesis asks whether Big Tech’s investment in generative AI was, in significant part, a response to two specific problems on its own platforms:

Falling supply of human-generated content. Before the AI flood, voluntary contributions from real users were already shrinking, leaving feeds with less to show and ad inventory at risk.

The cost of compensating the creators who remained. Platforms increasingly had to share revenue with creators, and AI-produced content carries no such liability.

The further questions are whether platforms had, prior to AI, already disincentivised creator contributions through declining payouts and unstable monetisation; whether there is broad social media and video fatigue; and whether the net effect is the substitution of authentic, human, friction-rich culture with repetitive, generalised, prefabricated content fit for passive consumption.

The remainder of this report addresses each of these questions in turn, then aggregates the consequences if the trend continues.

2. Was Human Posting Already in Decline?

2.1 Zuckerberg under oath: “Friending is losing steam”

The cleanest single piece of evidence comes from Mark Zuckerberg’s own April 2025 testimony in the Federal Trade Commission’s antitrust trial against Meta. The FTC presented an internal Meta document from 2022 stating that “friending and friend sharing are losing steam.” Asked about it on the stand, Zuckerberg agreed.

“The amount that people are sharing with friends on Facebook, especially, has been declining. Even the amount of new friends that people add … I think has been declining.”

— Mark Zuckerberg, FTC v. Meta, April 2025 (CNN)

Charts shown by Zuckerberg during the same proceedings revealed that the share of content posted by users’ own contacts on Facebook fell from 22% in 2023 to 17% in 2025. On Instagram, the equivalent figure fell from 11% to 7%. Translated, that means roughly 83% of what people now see on Facebook, and 93% of what they see on Instagram, is no longer from their friends or family. The platforms have already largely ceased to be social networks in any meaningful sense; they are recommendation feeds in which people happen to have accounts.

Internally, Meta took this so seriously that Zuckerberg in 2022 floated — in writing — the idea of wiping every Facebook user’s friends list and forcing them to start over, in an attempt to revive the social graph. Senior executives shot it down. The episode is documented in court exhibits and was widely reported by Fortune and the rest of the trade press. It tells you how acute the contribution problem had become.

2.2 Time spent and engagement are falling

The decline is not just in friend-to-friend posting. Aggregate time spent on social media globally peaked in 2022 at 151 minutes per day, fell to 143 in 2023, and 141 in 2024, according to GWI research covering 250,000 users in more than 50 countries (analysis cited by the Financial Times and Domus). DataReportal’s October 2025 figures show daily time at 2 hours 21 minutes (141 minutes), down roughly 10 minutes from the peak. Account totals are still growing in developing markets, but Western markets have stalled or reversed: the United Kingdom lost 1.4 million social media identities (−2.5%) between early 2024 and early 2025, and Italy lost roughly 600,000 in a year (−1.4%). The average number of social platforms used per person in the U.S. fell from 3.2 in 2023 to 2.6 in 2024.

Engagement has collapsed faster than time-spent. Posts on Facebook and X now reach around 0.15% average engagement. Instagram has recorded a 24% year-on-year decline in engagement. Even TikTok, the engagement champion of the previous cycle, has begun to stagnate. Internal Meta admissions in April 2025 acknowledged “meaningful” declines on Facebook and Instagram. Gartner’s 2023 prediction that 50% of consumers would significantly limit social media interactions by 2025 has, in effect, come true — not by mass account deletion, but by quiet disengagement.

2.3 The shift from social to TV

The transformation is structural, not cyclical. Platforms have moved from a “connected” algorithm (content from people you chose to follow) through an “engagement-based” algorithm (content from creators and influencers selected by engagement signals) and are now entering a third phase that Zuckerberg has named explicitly. As Domus put it in its analysis: “More than a social network, Instagram — like TikTok — now resembles a kind of hyper-accelerated, smartphone-sized television.” This is the transition the hypothesis intuited — from a participatory medium with friction and humanity to a passive feed of prefabricated content.

3. Did the Platforms Themselves Disincentivise Creators?

3.1 The economics: most creators earn very little

The creator economy, in aggregate, is enormous — valued at around $250 billion in 2025 by Goldman Sachs, with more than 200 million people identifying as content creators. The distribution of that revenue, however, is brutally unequal:

More than 50% of creators earn under $15,000 per year (multiple industry surveys, 2025).

46% of full-time creators make less than $1,000 (Linktree).

Only 12% of full-time creators make more than $50,000 per year (Linktree).

71% of creators who quit a job to create full-time make under $30,000, less than minimum-wage retail work in many U.S. states.

TikTok’s baseline payout sits at roughly $0.40–1.00 per 1,000 views — a 10x to 25x increase over the old Creator Fund (which paid around $0.02), but still a fraction of YouTube’s $1–$5 per thousand. On lower-paying platforms, a million views can produce $20.
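The arithmetic behind these figures is simple enough to make explicit. The sketch below applies the per-thousand-view rates quoted in this section to a hypothetical million-view video; the rates are the ones reported above, the view count is purely illustrative.

```python
# Illustrative payout arithmetic using the per-1,000-view rates quoted above.
def earnings(views: int, rate_per_1k: float) -> float:
    """Gross payout for a given view count at a given rate per 1,000 views."""
    return views / 1_000 * rate_per_1k

views = 1_000_000  # a hypothetical million-view video
for label, rate in [
    ("old TikTok Creator Fund (~$0.02 / 1k)", 0.02),
    ("current TikTok baseline, low ($0.40 / 1k)", 0.40),
    ("current TikTok baseline, high ($1.00 / 1k)", 1.00),
    ("YouTube, low ($1 / 1k)", 1.00),
    ("YouTube, high ($5 / 1k)", 5.00),
]:
    print(f"{label}: ${earnings(views, rate):,.2f}")
# At the old Creator Fund rate, a million views yields about $20 — the figure cited above.
```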

The result is a structurally depressed creator class. Linktree found 77% of creators worry about being dependent on a single platform, and 70% say an algorithm change could have “serious effects” on their lives. One in three reports anger and extreme frustration at the platforms themselves. These are not the conditions of a sustainable class of cultural producers.

3.2 Demonetisation and brand-safety crackdowns

Compounding low base rates is the unilateral right platforms reserve to demonetise. YouTube’s long-running “adpocalypse” crackdowns have repeatedly cut creator revenue with little notice; Phil DeFranco reported losing 30% of his revenue in a single month at one point. Wellness, mental health, body image, nutrition, and meditation creators report videos flagged “not suitable for most advertisers” for no clear reason, with appeals slow or futile. The lesson the industry has internalised is that ad revenue is not a foundation but a temporary accident, and creators must build paid memberships, courses, or merch to survive.

Even small policy shifts have outsized effects. YouTube’s 2025 update tightening rules on “repetitive” and AI-generated faceless channels caused immediate revenue freezes for thousands of automated channels — but it also chilled legitimate experimentation by human creators uncertain whether their work would now qualify.

3.3 The algorithmic treadmill and burnout

The labour conditions of the platform creator are well documented. The most-cited industry surveys put creator burnout at around 90%. A 2026 Creator Economy Research Institute study of 2,400 full-time creators found 62% reporting severe burnout symptoms and 47% having considered quitting in the past six months. ManyChat’s 2026 survey found 55% of Gen Z creators had considered quitting in the previous year. The structural cause is the same on every platform: algorithms reward consistency and frequency, so any pause penalises reach. As Stanford’s Dr Sarah Chen has put it, “Offering six therapy sessions doesn’t fix an algorithm that punishes people for taking weekends off.”

“I posted from the hospital while in labor because I was terrified the algorithm would forget me if I went silent for three days.”

— Anonymous Instagram creator, 2.1M followers, quoted in 2026 industry reporting

In the same surveys, “competing with AI-generated content” was named the #1 challenge facing creators going into 2026. They are not paranoid — they are reading the same earnings calls everyone else is.

3.4 Doctorow’s enshittification model

The pattern fits the framework popularised by Cory Doctorow. “Enshittification” — named Word of the Year by the American Dialect Society and Macquarie Dictionary — describes the predictable three-stage decay of two-sided platforms: first they are good to users; then they abuse users to attract business customers; then they abuse business customers to extract value for shareholders; then they die.

“Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.”

— Cory Doctorow

Creators are the business customers of stage three. The platforms have systematically removed organic reach, devalued ad-share economics, gated key features behind “boost” fees, and — critically — trained their algorithms to suppress content with outbound links so that nothing leaks back to creator-owned channels. By the time generative AI arrived, the creator side of the bargain had been ground thin enough that replacing it became economically obvious from the platform’s perspective.

4. The AI Substitution: From Slop to Strategy

4.1 The slop economy

“Slop” — selected as the 2025 Word of the Year by Merriam-Webster and the American Dialect Society — names what the platforms have already produced. Slop is low-effort, mass-produced AI content engineered to harvest engagement. On Facebook in particular, it has become a substantial revenue stream. According to 404 Media, Facebook’s creator program pays as much as $10 per 1,000 likes for posts that meet criteria, and individuals in developing countries openly describe building accounts that produce dozens or hundreds of AI-generated images per day. A medical student in India interviewed in major U.S. press said he made thousands of dollars a month producing low-effort AI imagery aimed at conservative American audiences. A Kenyan operator described prompting ChatGPT with strings such as “WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK,” then feeding the resulting prompts into Midjourney.

The supply chain is now mature. Plant-care and gardening communities are flooded with AI imagery, including images of plants that do not exist; sellers list seeds for these fictional flowers. Online houseplant communities have tried to ban AI content but cannot keep up with the volume. During the 2025 U.S. government shutdown, anonymous accounts used OpenAI’s Sora to fabricate “welfare queen” videos that were widely shared, with many viewers unaware they were synthetic. Graphika, an analytics firm, has documented Russian and Chinese state actors using AI slop, including the “spamouflage” network linked to China, as a vector for political propaganda.

Crucially, slop is incentivised by the platforms themselves. Meta pays it; TikTok’s engagement algorithm ranks it; Facebook’s search and feed surface it. As Rolling Stone summarised: “Through payments to content creators who develop large followings for slop, Meta effectively incentivises the content.”

4.2 Meta’s explicit “third era”

What was implicit became explicit on Meta’s Q3 2025 earnings call (October 2025). Zuckerberg told analysts:

“Social media has gone through two eras so far. First was when all content was from friends, family, and accounts that you followed directly. The second was when we added all of the Creator content.”

— Mark Zuckerberg, Meta Q3 2025 earnings call

He stopped just short of declaring AI the official third era, but said the company would “add yet another huge corpus of content” to recommendations as AI “makes it easier to create and remix.” He confirmed users had already generated more than 20 billion images inside Meta’s Vibes app, an AI-first feed similar in shape to TikTok. CFO Susan Li corroborated the figure. By the Q4 2025 earnings call (January 2026), Zuckerberg used the phrase “personal super intelligence” and committed roughly $135 billion in 2026 capital expenditure to build it. The metaverse pivot has been quietly retired.

The candour of the strategy is striking. Meta’s own product VP for generative AI, Connor Hayes, told the Financial Times in late 2024 that AI characters “will, over time, exist on our platforms, kind of in the same way that accounts do. They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.” Meta launched and then, after public outcry, hastily shut down a first wave of such accounts in early 2025, but the strategic intent was openly stated.

4.3 The economic logic: content the platforms do not pay for

The substitution makes financial sense from inside the spreadsheet. The second hypothesis — that AI-produced content saves platforms from paying creators — is the operational logic of the model. Bessemer-style platform economics treats creator payouts as cost of goods sold; AI-generated content collapses that cost towards zero, while the recommendation algorithm continues to monetise the same attention through the same ad inventory. A Slashdot commenter put it more bluntly than any executive will: “Meta can have AI creators optimised to feed the content out in a controlled way to users. And they don’t have to pay for content. Double win for Meta. Double loss for consumers.”
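To see the spreadsheet logic concretely, consider a deliberately simplified unit-economics comparison. Every figure below is invented for illustration; none of these are Meta’s actual ad rates, revenue-share terms, or inference costs. The sketch only shows why zero-royalty content is attractive once engagement is held constant.

```python
# Back-of-the-envelope feed economics. All numbers are hypothetical placeholders.
ad_revenue_per_1k_impressions = 10.00  # hypothetical gross ad RPM
creator_revenue_share = 0.55           # hypothetical share paid to a human creator
ai_cost_per_1k_impressions = 0.05      # hypothetical cost of generating/serving synthetic posts

human_content_margin = ad_revenue_per_1k_impressions * (1 - creator_revenue_share)
synthetic_content_margin = ad_revenue_per_1k_impressions - ai_cost_per_1k_impressions

print(f"platform margin per 1,000 impressions, human content:     ${human_content_margin:.2f}")
print(f"platform margin per 1,000 impressions, synthetic content: ${synthetic_content_margin:.2f}")
# If engagement holds, synthetic content lets the platform keep nearly the whole RPM.
```

Under any remotely similar parameters the synthetic feed keeps almost the entire ad yield, which is the “double win” the Slashdot commenter described.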

At the same time, the platforms are integrating AI content recommendation systems that, in Zuckerberg’s words, “deeply understand” AI-generated posts and can “show you the right content.” In other words, Meta is openly building the infrastructure to identify, rank, and amplify synthetic content above human content where the synthetic version performs better on engagement metrics. Whether the human creators “win” in that contest is no longer the question; the algorithm decides, and the algorithm is paid by ads, not by integrity.

4.4 The training-data heist

There is a third leg to this stool, and it sharpens the moral picture. The same companies replacing creators with AI built the AI by ingesting those creators’ work without consent or compensation. As of early 2026, the Copyright Alliance counts more than 70 active copyright lawsuits against AI companies. A non-exhaustive sample:

Meta — sued by Macmillan and four other major publishers in 2026 over the use of pirated books to train Llama; the suit names Zuckerberg personally.

Apple — sued in April 2026 by Ted Entertainment (h3h3Productions) and others alleging Apple used rotating IP addresses to evade YouTube’s scraping protections and ingested videos for AI training.

Snap — sued in January 2026 by the same group of YouTubers (combined 6.2M subscribers) over AI Lens features.

Nvidia, ByteDance, OpenAI, Google, Stability AI, Midjourney — all defendants in copyright actions from authors, photographers, news organisations, and visual artists.

Anthropic — agreed in September 2025 to a $1.5 billion settlement in a class action over the use of pirated books to train Claude.

Disney and Universal — sued Midjourney for copyright infringement.

The U.S. Copyright Office’s May 2025 report acknowledged that most training data has been used without authorisation and that a fair-use defence “has yet to be settled.” The pattern is consistent: human creative output, much of it produced for free or for minimal compensation on platforms that then suppressed its reach, was used as raw material to build the systems now displacing those same creators. Investigations have identified at least 13 major datasets containing scraped YouTube content used by Amazon, ByteDance, Snap, Tencent, Apple, Anthropic, and others.

“Should I keep making things in the hope of connecting with people, or just stop altogether?”

— Jon Peters, woodworker whose YouTube videos were scraped for AI training

5. Fatigue and the Quiet Exodus

5.1 What the data shows

Social media fatigue is not anecdotal. It is measured. The combined picture from Sprout Social (2025), TrendsActive (2025), DataReportal (October 2025), GWI/Financial Times (2024–2025) and Domus (February 2026) is consistent:

Time spent on social media peaked in 2022 and has fallen roughly 10% since.

Younger users are leading the exodus — in Germany, the share of teenagers active on Instagram and TikTok fell from 66% to 54% in a single year.

The proportion of users who say they use platforms “to kill time” has risen, while the share who use them to stay in touch with friends, express themselves, or meet new people has fallen by more than a quarter since 2014.

Engagement on Facebook averages around 0.48% per post in the UK, and TikTok users spend roughly 15% less time on the app than they did at peak.

The peer-reviewed literature confirms this. A 2024 paper in Young Consumers (Fernandes & Oliveira) identified brand content overload, irrelevance, and ad intrusiveness as significant drivers of social media fatigue, which in turn predicts “lurking” behaviour — users present but no longer participating. A three-level meta-analysis published in 2024 (APA PsycNet) confirmed fatigue as a coherent construct predicting reduced posting and platform abandonment.

5.2 Why people are tired

The drivers cluster around four themes:

Content saturation. 66% of consumers say they want less marketing content; 27% feel actively bombarded.

Loss of authentic social texture. When 83–93% of feed content is from accounts users do not follow, the platform stops being a place where one’s social world lives.

Toxicity and moderation collapse. Since 2023, several platforms have rolled back moderation policies; Meta’s removals of hate-speech content have visibly declined. Trolls and conspiracists have re-emerged in the gaps.

Slop fatigue. A specific, observable phenomenon: the moment users realise the lion video, the heroic-soldier image, or the surprising cooking clip is AI-generated, the dopamine value of the next clip drops. Domus describes this as a “boomerang” — maximising short-term engagement while gradually eroding the very thing that makes scrolling rewarding.

Visibrain’s 30-day analysis (December 2025) found 475,000 mentions of “AI slop” across X, Instagram, TikTok, and Threads, with a single peak of 37,000 mentions on 11 December alone after McDonald’s pulled an AI-generated Christmas advert in the Netherlands following backlash. Coca-Cola’s 2025 Christmas “Holidays Are Coming” AI campaign was similarly pilloried. The hashtag #SupportHumanArt has emerged as a sustained co-occurrence with the AI-slop conversation, indicating that user fatigue is producing a partial counter-movement.

5.3 The asymmetry that matters most

The crucial structural fact is that human contribution and AI contribution scale at incomparable speeds. A motivated human posts a few items a day; a slop operator using one prompt template can publish dozens to hundreds. Researchers analysing the Dead Internet Theory (Mladenovic et al., arXiv 2025) note that this creates an asymmetric information environment in which authentic human voices are mathematically drowned out, regardless of intent. Once 80% of feed content is non-human or non-friend, the question of whether the remaining 20% is “original” or not begins to feel almost moot to the user.
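The arithmetic of that asymmetry is worth spelling out. The sketch below uses invented account counts and posting rates purely to show how quickly a small number of high-volume slop operators can match the output of a vastly larger human population.

```python
# Toy arithmetic for the volume asymmetry described above (all counts invented).
human_accounts = 1_000_000
posts_per_human_per_day = 2           # a motivated human's daily output

slop_accounts = 10_000                # a far smaller number of operators
posts_per_slop_account_per_day = 200  # one prompt template, batched generation

human_posts = human_accounts * posts_per_human_per_day
slop_posts = slop_accounts * posts_per_slop_account_per_day

synthetic_share = slop_posts / (human_posts + slop_posts)
print(f"synthetic share of daily content supply: {synthetic_share:.0%}")
# Even with 100 human accounts for every slop account, synthetic posts make up
# half the daily supply before the ranking algorithm amplifies anything.
```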

6. Conclusion: Is Big Tech Extracting a Human-Made Society and Replacing It with a TV Dinner?

On the evidence reviewed, the conclusion is yes — with two important qualifications.

The first qualification is intent. The pattern described is real, but it is not necessarily the product of a single conscious plan. It is the emergent behaviour of a small number of advertising-supported platforms operating under shareholder pressure in a winner-take-most market. The behaviour looks designed because the incentives are coherent: when the marginal cost of AI content approaches zero, the marginal cost of human content (in payouts, moderation, copyright risk, and brand-safety drama) starts to look intolerable. No conspiracy is needed; an income statement will do.

The second qualification is finality. The substitution is in motion, not complete. Human creativity continues. The most encouraging counter-signal is that audiences increasingly notice and reject slop the moment they recognise it. The #SupportHumanArt response, the boycotts of AI advertising campaigns, the rise of subscription-supported newsletters, podcasts, and Patreon-style direct funding, the deliberate flight to smaller community platforms (Discord servers, Mighty Networks, Substacks, Bluesky niches) — all suggest that a portion of the audience is migrating away from the slop layer towards spaces where the friction, ugliness, originality, and beauty of actual humans is still on offer.

With those qualifications stated, the substantive answer to the question framed by the hypothesis is unambiguous. Big Tech is, in measurable ways, extracting the value of a human-made culture — ingesting it as training data, demonetising the humans who produced it, and replacing the surface of its platforms with a synthetic, statistically average, prefabricated stream optimised for engagement and unburdened by royalty payments. The TV-dinner metaphor is not unfair; it understates the case. A TV dinner was at least cooked, packaged, and labelled by people who could be held accountable. The current product is generated by systems trained on stolen ingredients, served by recommendation algorithms whose optimisation target is not nutrition but time-on-plate.

7. Consequences If the Trend Continues

The risks below are drawn from the literature reviewed and from the empirical pattern of the past four years. They are organised by domain, with the most well-evidenced first.

7.1 Cultural and creative consequences

Cultural homogenisation. Empirical research (Patterns, January 2026; Castro, Gao & Martin at UCLA Anderson; Padmakumar et al. on LLM creative-writing diversity) shows that AI systems converge on “generic attractors” — bland, conventional outputs — even without retraining. As they are deployed at scale, the collective space of creative ideas measurably narrows.

Loss of long-tail and minority expression. AI models privilege the statistically common; non-English voices, regional aesthetics, and idiosyncratic styles are mathematically suppressed.

Erosion of originality as a category. When everyone uses AI tools that share the same training data, the difference between “your idea” and “someone else’s idea” begins to blur, undermining attribution, taste, and creative reputation.

Atrophy of the human creator pipeline. If beginning writers, photographers, and musicians cannot earn a living wage on platforms, the next generation of professional creators is not trained. The pipeline collapses silently.

7.2 Epistemic and informational consequences

Model collapse. Peer-reviewed work (Shumailov et al., Nature 2024; arXiv preprints through 2026) shows that AI systems trained on AI-generated data lose lexical, syntactic, and semantic diversity over generations, the phenomenon sometimes called Model Autophagy Disorder; a toy simulation of the mechanism follows this list. The Communications of the ACM observed in April 2026 that this is no longer a future risk: “It’s a process already underway.”

Decay of the open web as a knowledge commons. Stack Overflow saw a 16% drop in activity in the year following ChatGPT’s release. As humans stop contributing answers because AI absorbs the questions, the substrate that AI itself depends on shrinks.

Misinformation at scale. Synthetic political content, including documented Russian and Chinese state-sponsored slop networks (Graphika), now competes with real reporting in the same feed at the same engagement weight.

Search degradation. AI-stuffed clickbait, fake plant catalogues, fictional restaurants, and AI “review” sites are flooding both general search and category-specific platforms (Pinterest, Etsy, Amazon).
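A toy version of the model-collapse mechanism referenced in the first item of this list can be simulated in a few lines. The sketch below is not the Shumailov et al. experimental setup; it is the standard textbook illustration in which a simple distribution is repeatedly refit to its own finite samples, and the fitted spread, standing in for diversity, tends to drift toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100       # finite "dataset" the model generates at each step
n_generations = 200   # how many times the model is retrained on its own output

mu, sigma = 0.0, 1.0  # generation 0: the original "human" data distribution
trace = []

for gen in range(n_generations):
    synthetic = rng.normal(mu, sigma, n_samples)   # sample from the current model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit the model on its own samples
    trace.append(sigma)

print("fitted std dev every 25 generations:",
      [round(s, 3) for s in trace[::25]])
# Each refit on finite, self-generated data loses a little tail information,
# so the fitted spread (a crude proxy for diversity) tends to shrink over time.
```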

7.3 Economic consequences

Concentration. Building frontier AI models requires capital expenditure now measured in tens to hundreds of billions per year (Meta alone announced ~$135 billion of capex for 2026). This entrenches a small cluster of incumbents and forecloses competition.

Creator destitution. If 50% of creators already earn under $15,000, AI competition pressure on the same ad pool is likely to crush the median further. Many will exit. Some will move into agentic AI-content production themselves, accelerating the very dynamic they were displaced by.

Wage compression beyond creators. The same dynamic threatens copywriters, illustrators, voice actors, junior developers, and translators — occupations whose work has been ingested as training data without consent.

Asset-stripping of cultural IP. Without legal resolution of the fair-use question, decades of accumulated creative IP — books, music, films, photography — risk being effectively expropriated through model weights.

7.4 Psychological and social consequences

Erosion of trust. Each viral AI hoax — the lioness handing her cub to a human; “Shrimp Jesus”; the McDonald’s Christmas ad; the “welfare queen” Sora videos — trains audiences to disbelieve everything by default, including authentic content. The result is generalised epistemic exhaustion.

Loss of social texture. Cory Doctorow and others have documented that platforms which once symbolised connection have become “places of isolation and distraction.” The harm metric is not what is shown but what is displaced — hours that produced no friendship, no learning, no work.

Mental health costs to creators. 90% report burnout. Creators describe posting from hospital beds, never being off, and treating algorithm-induced anxiety as a baseline professional condition.

Parasocial dependence on synthetic personalities. AI influencers, AI girlfriends, and AI chat companions are positioned as substitutes for relationships people no longer find online, deepening the loneliness the platforms helped create.

7.5 Political and civic consequences

Cheap, scalable propaganda. Synthetic political imagery and video are now produced in volume by both state and non-state actors. Even where a viewer “knows” content is AI, the narrative imprint sticks.

Capture of the public square. When 83% of Facebook content and 93% of Instagram content is no longer from one’s own social network, the platforms become broadcast media with the trappings of social media — and broadcast media owned by a handful of CEOs is a political fact, not a neutral utility.

Asymmetric vulnerability. Authoritarian regimes find synthetic content useful for both internal control and external destabilisation. Open democratic societies find it harder to defend against.

7.6 Long-term, civilisation-scale risks

Self-poisoning training corpora. If AI systems train on AI output indefinitely without provenance infrastructure, the technology degrades while the cost of correcting it rises.

A culture without counter-evidence. Original art, on-the-ground journalism, awkward conversations, ugly truths, and minority traditions are precisely the kinds of inputs that probabilistic models rank low. A society that consumes only what such models surface gradually loses the friction that produces new ideas.

A future generation that has never known anything else. Children growing up on synthetic feeds may experience the difference between “social media” and “reality” very differently from those who remember the early web. What looks to a 50-year-old like a betrayal of the medium will look to a 12-year-old like the medium itself.

8. Selected Sources

The body of this report draws on the following primary sources, court documents, and reporting. Where the same fact is reported in multiple outlets, the most authoritative or earliest source is cited.

Court testimony and antitrust filings

• FTC v. Meta Platforms (Federal Trade Commission antitrust trial, April 2025) — Zuckerberg testimony and exhibits.

• CNN Business: “Spinning off Instagram, the decline of ‘friending’ and other takeaways from Mark Zuckerberg at the FTC monopoly trial,” 16 April 2025.

• Fortune: “Mark Zuckerberg suggested wiping everyone’s Facebook friends,” 15 April 2025.

Earnings calls and corporate statements

• Meta Q3 2025 Earnings Call (October 2025) — Zuckerberg on the “third era” and 20 billion Vibes images.

• Meta Q4 2025 Earnings Call (January 2026) — “Personal super intelligence” and ~$135 billion 2026 capex.

• Connor Hayes (Meta VP, GenAI) interview, Financial Times, December 2024.

Industry data on creator earnings and fatigue

• Goldman Sachs creator economy market size estimates (2024–2027).

• Linktree creator earnings reports.

• Mighty Networks 2025 Creator Economy Guide.

• ManyChat 2026 creator survey.

• Creator Economy Research Institute Q1 2026 burnout study.

• GWI / Financial Times analysis of social media time spent (2024–2025).

• DataReportal Digital 2025 / October 2025 update.

• Sprout Social: “Audiences are tuning out” (2025).

Reporting on AI slop and platform strategy

• 404 Media reporting on Facebook creator program payouts and slop economics.

• Wikipedia: “AI slop” (continuously updated reference, accessed May 2026).

• Rolling Stone: “Facebook and Instagram to Unleash AI-Generated Users No One Asked For,” December 2024.

• PetaPixel: “Instagram and Facebook to Fill Platforms With AI-Generated Accounts,” January 2025.

• NBC News: “Meta shuts down AI character accounts on Facebook, Instagram after outcry,” January 2025.

• Domus: “Is this the beginning of the end for social media?” February 2026.

• Visibrain 30-day AI-slop analysis, December 2025.

• U.S. Copyright Office report on AI training and fair use (May 2025).

• Macmillan et al. v. Meta (2026) — publishers’ suit over Llama training.

• Ted Entertainment et al. v. Apple (April 2026) — YouTube scraping suit.

• Ted Entertainment et al. v. Snap (January 2026).

• Anthropic class-action settlement (September 2025), $1.5 billion.

• Disney and Universal v. Midjourney.

• Getty Images v. Stability AI (UK trial, June 2025).

• Copyright Alliance count of 70+ active AI copyright cases (early 2026).

Academic literature

• Shumailov et al., “The Curse of Recursion” / Nature 2024 paper on model collapse.

• Mladenovic et al., “The Dead Internet Theory: A Survey on Artificial Interactions and the Future of Social Media,” arXiv 2502.00007, 2025.

• Fernandes & Oliveira, “Brands as drivers of social media fatigue,” Young Consumers, 2024.

• Padmakumar et al., “Homogenizing effect of large language models on creative diversity,” ScienceDirect, 2025.

• Castro, Gao & Martin (UCLA Anderson / Northwestern), AI homogenisation and bias working paper, 2025.

• Elgammal, Rutgers, in The Conversation: “AI-induced cultural stagnation is no longer speculation,” January 2026.

• University of Florida / Journal of Marketing Research study on AI slop market effects (Zou, Shi & Wu), 2026.

• Zou et al., model collapse and Model Autophagy Disorder literature.

Theory and commentary

• Cory Doctorow, “Enshittification” essays and McLuhan Lecture (2024); American Dialect Society and Macquarie Dictionary Word of the Year, 2023–2024.

• Paul Krugman, “The General Theory of Enshittification,” 2025.

• Communications of the ACM: “Model Collapse Is Already Happening, We Just Pretend It Isn’t,” April 2026.

• Merriam-Webster and American Dialect Society: “Slop” as 2025 Word of the Year.

9. A Final Note on Method

This report frames the substitution as a hypothesis and tests it against the available evidence. The evidence cited is drawn from sworn courtroom testimony, public earnings calls, peer-reviewed studies, primary investigative reporting, and the platforms’ own announcements. Where conclusions extend beyond what is documented — in particular the long-term consequences in Section 7 — the report identifies them as projections grounded in current trajectory, not certainties.

Reasonable people may disagree about whether the trajectory is reversible, about which interventions would slow it, and about whether the substitution’s benefits (cheap content, accessibility, certain efficiency gains) are sufficient consolation. What does not seem reasonable, on the present evidence, is the position that nothing is happening. Something is happening. The platforms have said so, on their earnings calls, in writing, in court.
