Summary: The WIRED article shows how displaced creative professionals are being pulled into precarious AI-training work, effectively helping build the systems that threaten their own industries.
Its most disturbing finding is that AI’s polished outputs depend on hidden human labour marked by chaotic management, falling pay, unpaid waiting, psychological strain and weak worker protections.
The likely future consequence is a new accountability battle over AI supply chains: not only whether data was lawfully sourced, but whether the human judgement used to train and “align” AI was ethically obtained.
The Machine’s Human Cost: How AI Turns Creative Workers into Its Invisible Supply Chain
by ChatGPT-5.5
The WIRED article "I Work in Hollywood. Everyone Who Used to Make TV Is Now Secretly Training AI" is powerful because it reverses the usual AI story. The public narrative says AI is replacing writers, artists, researchers, translators, coders and other knowledge workers. Ruth Fowler’s account shows something even darker: many of those displaced workers are not simply being replaced by AI; they are being recruited, fragmented, under-managed and under-protected to make AI systems better at replacing them. The article is not just about Hollywood. It is about the emerging labour model beneath generative AI: precarious, opaque, emotionally draining, legally contested and strategically hidden.
At the centre of the story is a Hollywood writer and showrunner who, after the 2023 writers’ strike and a continuing slowdown in TV production, begins taking AI-training work through platforms such as Mercor, Outlier, Turing and others. The work ranges from evaluating chatbot tone and annotating video to red-teaming unsafe model outputs. The advertised promise is flexibility and high hourly rates. The lived reality is unpaid testing, chaotic onboarding, sudden project cancellations, constant Slack monitoring, shifting rules, rapid offboarding, falling rates, and a humiliating dependence on “tasks” that may vanish before the worker can even access them.
The most disturbing sentence-level truth of the article is that AI’s apparent smoothness is being purchased through human disorder. The systems that present themselves as frictionless, intelligent and always available are trained by people living in interrupted sleep, financial anxiety, 3 a.m. Slack messages, arbitrary scores and permanent insecurity. This is not the clean digital future sold in investor decks. It is platform Taylorism applied to educated labour.
Most surprising statements and findings
The first surprise is the calibre of the people doing this work. The article does not describe anonymous low-skill clickworkers alone. It describes screenwriters, showrunners, academics, journalists and professionals in their thirties, forties and beyond being pulled into AI-training gigs because their own industries have become unstable. The old stereotype of gig work as a side hustle collapses. In this account, AI gig work becomes the emergency labour market for the educated creative middle class.
The second surprise is how much of AI safety and refinement depends on deeply unpleasant human work. Fowler describes tasks involving disturbing, violent, sexual or extremist material as part of red-teaming and safety evaluation. That matters because the public usually sees “AI safety” as a technical discipline involving benchmarks, guardrails and policy rules. The article shows the other side: humans are asked to generate, classify or evaluate harmful material so systems can later appear safe and polite.
The third surprise is the managerial absurdity. The workers are told they have flexibility, but in practice they must respond immediately, monitor Slack constantly, complete tasks before they disappear, and accept offboarding without explanation. This is the contradiction at the heart of platform labour: formal freedom combined with practical control. The article’s description of workers being called “taskers,” not employees, is not just linguistic. It is an attempt to strip the work of employment meaning, continuity and rights.
The fourth surprise is the apparent collapse in rates. The article describes a market where “expert” work initially advertised at very high hourly rates later falls toward much lower levels, while entry-level work may drop to rates that look structurally incompatible with the professional expertise being extracted. That is a warning sign: once the platforms have attracted enough desperate skilled labour, bargaining power can shift rapidly from worker to intermediary.
The fifth surprise is the scale mismatch. According to the article, Mercor says it has about 300 full-time staff while keeping tens of thousands of independent contractors active each week. That is the whole AI labour model in miniature: a small formal company surrounded by a huge disposable labour cloud. The formal organisation captures valuation, client relationships and platform control; the human supply chain absorbs uncertainty, downtime, emotional strain and legal risk.
Most controversial statements and findings
The most controversial implication is that AI companies and their labour intermediaries may be using “contractor” status to avoid the obligations that would normally attach to this degree of control. The article describes minimum weekly expectations, sudden deadlines, detailed rubrics, performance scores, mandatory onboarding, platform surveillance and rapid removal. Those are exactly the kinds of facts that can make contractor classification legally vulnerable, especially in jurisdictions that look at actual control rather than contractual labels.
The second controversial finding is that AI-training pipelines may involve private, sensitive or questionably sourced material. Fowler describes evaluating intimate user conversations and annotating videos that may not have been uploaded with meaningful consent. Even if some of this is legally covered by terms of service, it raises a deeper legitimacy problem. Users, creators and bystanders may not understand that their emotional confessions, images, videos or creative outputs can become raw material for model improvement.
The third controversial point is that AI recruitment itself may become a data-extraction channel. The article describes AI interviewer agents and notes workers’ suspicions that interviews may themselves be used to harvest training data. The article does not prove that claim, but the suspicion is significant because it shows how little trust exists between workers and platforms. In an industry built on data extraction, every interaction starts to look like possible unpaid training.
The fourth controversial finding is the role reversal of Hollywood after the writers’ strike. Writers fought to prevent studios from using AI to dilute or replace creative labour. Now some of those same writers are training models that may strengthen the very economic forces they resisted. That is not hypocrisy; it is coercion by market structure. People do not have to believe in a system to be forced to feed it.
The fifth controversy is quality. If AI models are being trained or evaluated by exhausted workers racing against disappearing tasks, unclear scoring systems and arbitrary rubrics, then the quality of the underlying human feedback becomes questionable. The industry sells “human alignment,” but this article suggests that some of that alignment may be produced under conditions that actively degrade judgment.
Most valuable findings
The most valuable finding is that AI supply-chain governance cannot stop at copyright, cybersecurity or model safety. It must include labour. Any serious responsible-AI framework should ask: Who labelled the data? Who reviewed the outputs? Were they paid for onboarding? Were they exposed to traumatic content? Could they appeal automated or managerial decisions? Were they under time pressure incompatible with careful judgment? Were they employees in substance but contractors on paper?
The second valuable finding is that labour conditions are model-quality conditions. Bad working conditions are not only an ethical problem; they are a product-risk problem. Rushed annotators create noisy labels. Traumatised moderators make inconsistent safety calls. Underpaid experts stop caring. Arbitrary rubrics produce shallow compliance rather than genuine judgment. If the human layer is degraded, the model layer inherits that degradation.
The third valuable finding is that the AI economy is not replacing human expertise so much as atomising it. A screenwriter’s craft, taste and judgment are no longer bought as authorship. They are broken into micro-assessments: rate this dialogue, judge this tone, compare this prompt, classify this image, rewrite this unsafe answer. The person disappears; the extractable signal remains. For publishers, universities and creative industries, this is the key strategic lesson: AI does not merely copy works. It also decomposes professions.
The fourth valuable finding is that “flexibility” has become a rhetorical shield. Flexibility can be real and valuable, but in this setting it appears to mean that the company has flexibility, not the worker. The platform can start, pause, cancel, reprice and terminate. The worker must wait, refresh, respond and absorb downtime. This is labour-market volatility repackaged as lifestyle freedom.
The fifth valuable finding is that the next AI accountability fight will be about provenance in a much broader sense. We already ask whether training data was lawfully sourced. Soon we will need to ask whether human feedback, red-team data, evaluation data, safety labels and expert annotations were ethically sourced too.
Predicted consequences
The first consequence will be litigation. Misclassification claims, wage-and-hour claims, unpaid-training claims, privacy claims, biometric-data claims and data-breach claims are likely to grow. Mercor has already faced class-action litigation after reports of contractor-data exposure, and the wider data-annotation sector has been under increasing scrutiny for labour conditions and platform-worker rights. This will not remain a reputational issue; it is becoming a legal architecture issue.
The second consequence will be procurement pressure. Enterprise customers, universities, publishers, healthcare providers and governments will increasingly ask AI vendors not only where their content came from, but how their evaluation and annotation work was produced. Responsible-AI due diligence will start to include the hidden labour chain. Partnership on AI, Fairwork and others are already pushing responsible sourcing frameworks for data-enrichment work, and this article gives those efforts a vivid human case study.
The third consequence will be regulatory spillover from gig-work law into AI. In Europe, the Platform Work Directive creates a stronger framework around employment status and algorithmic management. In California, the ABC-test logic already makes it harder to treat controlled workers as independent contractors. AI-training platforms that rely on constant monitoring, scoring, deadlines and offboarding may find themselves pulled into the same legal debates that previously surrounded ride-hailing and delivery platforms.
The fourth consequence will be a reputational backlash against “ethical AI” claims. Companies cannot credibly present themselves as building safe, human-centred AI while relying on disposable workers exposed to traumatic material, unstable income and opaque management. The contradiction is too visible. This creates a new reputational risk category: AI systems may be criticised not only for what they output, but for the conditions under which they were made.
The fifth consequence will be consolidation and stratification of AI labour. Generalist annotation will become cheaper, more automated and more globalised. Highly specialised expert work will remain valuable but episodic, competitive and tightly controlled. The middle tier — educated workers doing professional judgment tasks without employment protection — may become the most exploited layer, because it is skilled enough to create value but fragmented enough to lack bargaining power.
The sixth consequence will be new forms of worker organisation. Reddit forums, Discord groups and informal Slack rebellions may evolve into union campaigns, litigation collectives, whistleblower networks and professional codes. Hollywood unions, journalism organisations, academic associations and creative guilds may eventually need to treat AI-training work as part of their jurisdictional concern, not as an unrelated side hustle.
The seventh consequence will be model-risk consequences. Poorly governed human-feedback systems may produce models that are superficially aligned but brittle underneath. If safety testing is done by under-supported contractors racing through traumatic or adversarial prompts, companies may overestimate the robustness of their guardrails. That is especially dangerous in legal, medical, educational, scientific and political contexts, where small failures of judgment can scale into institutional harm.
What this means for publishers, creators and knowledge institutions
For scholarly publishers, the article should ring several alarms. First, it shows that expertise can be extracted without respecting the expert. That is directly relevant to researchers, editors, peer reviewers, authors and educators. If the AI economy learns to buy expert judgment in unstable fragments, it may weaken the professional ecosystems that produce reliable knowledge.
Second, it reinforces the importance of provenance. Publishers should not define provenance narrowly as “which article or book was used.” They should extend it to the whole AI production chain: licensed source material, permitted uses, annotation labour, evaluator qualifications, safety review, model-update processes, audit logs and redress mechanisms.
Third, it supports a stronger licensing position. If AI developers need high-quality human judgment and reliable content to make trustworthy systems, then publishers and professional communities should not surrender those assets cheaply. The article shows the danger of a market where human knowledge is treated as disposable input. Wiley and similar institutions should insist that lawful content, expert review and accountable deployment are premium infrastructure, not commodity feedstock.
Finally, it shows why “AI will replace jobs” is too simple. AI may first degrade jobs, fragment jobs, outsource jobs, reclassify jobs and turn professions into invisible maintenance work. The danger is not just unemployment. It is the hollowing out of dignity, continuity, bargaining power and professional identity.
Conclusion: the future is not artificial; it is outsourced
The WIRED article is valuable because it punctures the myth that AI is immaterial. Behind the model is a supply chain. Behind the supply chain are workers. Behind the workers are collapsing industries, unpaid tests, NDAs, private data, chaotic management, disappearing tasks and a brutal transfer of risk from capital to individuals.
The future consequence is not that all creative people will become AI trainers. It is that more professions will discover that their judgment can be separated from their status, their expertise separated from their bargaining power, and their labour separated from legal protection. That is the real warning. AI may not simply replace the creative class. It may first make the creative class train its replacement under conditions that prove how little the new economy values creativity once it has been converted into data.