
Regulators must treat AI-generated political disinformation as a matter of national security and democratic survival.

Anything less than robust, coordinated, and enforceable safeguards would be an invitation for future abuses, more sophisticated deepfake operations, and the gradual erosion of truth itself.

The Weaponization of AI-Generated Imagery in Dutch Political Campaigning – A Case Study of the PVV and ‘Maak haar blond, onschuldig en knap’ (‘Make her blonde, innocent and pretty.’)

by ChatGPT-4o

Introduction

The investigative report titled Maak haar blond, onschuldig en knap (“Make her blonde, innocent and pretty”) by De Groene Amsterdammer, co-authored by Eva Hofman and Joris Veerbeek in collaboration with the Data School of Utrecht University, uncovers an alarming abuse of artificial intelligence (AI) in Dutch politics. The story exposes how members of the far-right Party for Freedom (PVV) are covertly using hyperrealistic AI-generated imagery—disseminated via seemingly grassroots social media accounts—to spread racist, xenophobic, and sexist narratives designed to manipulate public opinion and fuel electoral success.

At the heart of the investigation lies the revelation that two PVV Members of Parliament, Maikel Boon and Patrick Crijns, are responsible for curating and distributing AI-generated content through accounts like “Wij doen GEEN aangifte tegen Wilders” (“We are NOT filing charges against Wilders”). Their use of AI image generators such as OpenAI’s Sora paints a disturbing picture of what happens when unregulated generative technology meets populist propaganda.

The Tactics Unveiled

The campaign employs several recurring tropes:

  • Hyperrealistic AI-generated images of blonde, attractive, young Dutch women, often portrayed as threatened or harassed by migrant men, serving to stoke fear and resentment.

  • Contrasting visual narratives that juxtapose “hard-working Dutch seniors” with idle, youthful “Syrian-looking” men, often lounging or enjoying state benefits.

  • Manipulated AI videos, such as “Nederland in 2050,” that present dystopian futures where Sharia law dominates public life in the Netherlands.

  • Staged family imagery, idealizing white, nuclear Dutch families with prompts emphasizing traits like “blonde mother,” “light brown son,” and traditional middle-class aesthetics.

  • Implied calls to action, urging votes for PVV under the guise of defending Dutch identity and women’s safety.

The report includes 174 prompts linked to 333 AI-generated images that establish a detailed pattern of targeted emotional manipulation. These prompts are not just aesthetic instructions but vehicles of political ideology—engineered to exploit subconscious fears, prejudices, and protective instincts.

Negative Consequences

This weaponization of AI for political manipulation carries severe implications, both immediate and systemic. The negative consequences are vast and intersect across psychological, societal, democratic, and technological domains:

1. Erosion of Trust in Visual Media

AI-generated imagery is becoming indistinguishable from real photography. As fake content circulates as fact, public trust in legitimate journalism and photographic evidence is undermined, feeding disinformation ecosystems.

2. Incitement to Racial Hatred and Violence

Images featuring young white women harassed by dark-skinned men are direct visual expressions of the “great replacement” and “Muslim rape wave” conspiracy theories. These can lead to real-world violence, echoing patterns of stochastic terrorism.

3. Algorithmic Amplification of Extremism

Because emotionally provocative content receives more engagement, platform algorithms (e.g., Meta’s Facebook) disproportionately boost such extremist propaganda—effectively rewarding hate-based campaigning.

4. Astroturfing and Deceptive Political Messaging

These campaigns masquerade as grassroots support but are orchestrated by politicians. This deceives voters and distorts the democratic process, violating principles of transparency and accountability.

5. Gendered and Racial Stereotyping

The repeated use of “blonde, innocent, attractive” women reinforces harmful beauty standards and casts these women as damsels in distress. Simultaneously, people of color—especially migrants—are dehumanized as criminal or threatening.

6. Platform Policy Loopholes

New advertising restrictions (e.g., Meta’s ban on political ads) are ineffective here, as these tactics fall outside formal ad channels. Such regulatory blind spots allow hate campaigns to flourish under the guise of user-generated content.

7. Undermining of Democratic Discourse

Moderate political voices are drowned out by extremist narratives that rely on shock, fear, and sensationalism. This distorts the Overton window, making fringe ideologies seem mainstream.

8. Lack of Deterrence and Accountability

Despite evidence of direct involvement by elected officials, the report shows that neither Meta nor Dutch authorities have effectively intervened—raising concerns about impunity and regulatory capture.

9. International Radicalization Networks

The tactics mirror playbooks from transnational extremist groups such as Agenda Europe. This suggests that the PVV is not acting in isolation but as part of a broader movement to destabilize liberal democratic values across the EU.

10. Normalization of AI-Generated Disinformation

As this kind of content becomes widespread, society risks accepting fake narratives as normal. This desensitization makes future manipulation even easier and more dangerous.

Recommendations for Regulators

The threat posed by this convergence of generative AI, political extremism, and social media manipulation is urgent and structural. Regulators at national and EU levels must act decisively. The following recommendations are intended to be both strong and actionable:

1. Mandate AI Provenance Disclosure

Platforms should be required to detect and label AI-generated content prominently. Generative tools like Sora should default to public attribution of prompts and creators, and social platforms must surface this metadata automatically.
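To make this recommendation concrete: one prerequisite for automatic labelling is that platforms can actually read provenance data out of uploaded files. As a purely illustrative sketch (not how Sora, Meta, or any production provenance system such as C2PA actually works, and assuming a generator that writes its prompt into a plain-text PNG chunk, as some open-source image generators do), here is a minimal standard-library Python scan for such metadata:

```python
import struct

def extract_png_text_chunks(path):
    """Scan a PNG file for tEXt metadata chunks, where some image
    generators embed provenance data such as the generation prompt.
    Returns a dict mapping keyword -> text. Illustrative only: real
    provenance standards (e.g. C2PA) use signed manifests, not
    unsigned plain-text chunks, precisely so metadata can be trusted."""
    out = {}
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            # Each PNG chunk: 4-byte big-endian length, 4-byte type,
            # <length> bytes of data, 4-byte CRC.
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip CRC
            if ctype == b"tEXt":
                key, _, val = data.partition(b"\x00")
                out[key.decode("latin-1")] = val.decode("latin-1")
            if ctype == b"IEND":
                break
    return out
```

The point of the sketch is the gap it exposes: unsigned metadata like this can be stripped or forged trivially, which is why the recommendation calls for provenance to be mandated at the generator level and surfaced by platforms, rather than left to voluntary, easily-removed tags.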

2. Ban Politicians from Anonymous Propaganda Accounts

New electoral integrity laws should prohibit elected officials or their staff from operating undeclared fan pages or sock-puppet accounts. Any political messaging must be attributable to a known source.

3. Strengthen Enforcement of Anti-Discrimination Laws

Dutch anti-racism laws must be updated to include AI-generated content and automated hate speech. Public prosecutors should be empowered to act ex officio when patterns of incitement are detected.

4. Audit Political Use of AI in Elections

Independent electoral watchdogs must monitor and report on AI-generated media in political campaigns. This includes maintaining registries of AI content used by parties, candidates, and third-party influencers.

5. Introduce Platform Liability for Algorithmic Amplification

If platforms knowingly boost deceptive or harmful AI content without moderation, they should be held legally accountable under information manipulation laws or digital services regulations (e.g., EU DSA).

6. Fund Research into Cognitive Effects of Political AI Imagery

Governments should sponsor interdisciplinary studies to understand how AI visuals influence voter perception, decision-making, and susceptibility to radicalization.

7. Ban Emotionally Manipulative Synthetic Imagery in Political Ads

Introduce hard restrictions on the use of AI-generated imagery that explicitly depicts racialized threat scenarios, exaggerated gender roles, or fictionalized hate incidents for political purposes.

8. Expand Digital Literacy and AI Education

Public awareness campaigns must equip citizens to recognize AI-generated content and question its intent—especially during election cycles.

9. Encourage Whistleblower Protections and Investigative Journalism

The role of outlets like De Groene Amsterdammer and academic collaborators in exposing these abuses is vital. Funding and legal protections for such work should be reinforced.

10. Coordinate at the EU Level to Prevent Spillover

Since these narratives and techniques are often cross-border in nature, EU institutions must coordinate monitoring, enforcement, and sanctions to prevent systemic manipulation of democratic processes.

Conclusion

The use of AI-generated images to manufacture fear, sow division, and manipulate democratic outcomes marks a turning point in political campaigning. The findings of this investigation highlight a coordinated effort by elected officials to circumvent campaign laws, exploit platform vulnerabilities, and engineer public emotion using synthetic media. Left unchecked, this poses a grave threat to liberal democracy.

This is not just a Dutch problem—it is a warning to the world.

Regulators must treat AI-generated political disinformation as a matter of national security and democratic survival. Anything less than robust, coordinated, and enforceable safeguards would be an invitation for future abuses, more sophisticated deepfake operations, and the gradual erosion of truth itself.