
Professor Dirk Visser’s argument for establishing an absolute right against unauthorized deepfakes is timely, well-reasoned, and necessary.

As generative AI blurs the line between reality and fiction, laws that treat digital simulations as mere extensions of portrait rights fall short.


In Support of Professor Dirk Visser’s Call for an Absolute Right Against Deepfakes

by ChatGPT-4o

Professor Dirk Visser, a respected scholar of intellectual property law, argues that individuals should hold an in-principle absolute right (in principe) over misleading digital reproductions of their personal characteristics — in other words, deepfakes. This, he states, is a more suitable legal framework than the current reliance on portrait rights (portretrecht), which require a case-by-case balancing of interests and a demonstration of a “reasonable interest” to claim protection. I fully agree with Visser’s position and believe that his proposal is not only legally coherent but also ethically and practically necessary in today’s media and AI environment.

Key Arguments in Favor of Visser’s Position

1. Deepfakes Are Fundamentally Different from Traditional Portraits

Visser rightly distinguishes between deepfakes and authentic portraits. Portrait rights evolved to regulate the use of real depictions of individuals, which in many cases — such as journalism — serve a public interest and warrant broader freedom of expression. Deepfakes, however, simulate a person’s likeness in ways that can manipulate speech, expression, and behavior. Unlike real portraits, deepfakes inherently distort reality and therefore carry a higher risk of deception, defamation, and manipulation. Treating them under the same legal doctrine as photographs in a newspaper is inadequate and misleading.

2. Current Portretrecht Offers Weak Protection

Portrait rights in the Netherlands and similar systems elsewhere hinge on demonstrating a “reasonable interest” and on balancing that interest against freedom of expression. This makes legal recourse against deepfakes cumbersome and unpredictable. Visser’s “nee, tenzij” (“no, unless”) principle reverses the burden: individuals would automatically be protected against unauthorized deepfakes unless specific exceptions — such as satire or newsworthiness — apply. This approach would significantly strengthen personal autonomy and dignity in the digital age.

3. Licensing Deepfakes Is Not a Flaw but a Feature — If Accompanied by Safeguards

Etienne Valk, who opposes Visser’s view, warns that recognizing an absolute right could lead to exploitative licenses, especially for young or vulnerable individuals. Visser addresses this critique directly: exploitative contracts already occur under current law, and better safeguards can be built into the system. Importantly, he advocates applying auteurscontractenrecht (author’s contract law) to deepfake licenses, enabling individuals to break or renegotiate unfair agreements. This proposal provides a proactive solution to Valk’s concern without sacrificing the strength of protection.

4. Effective Enforcement Requires Transferable Rights

Visser emphasizes that producers or rights managers, acting with consent, should be able to enforce rights on behalf of individuals — just as music producers defend performance rights. This delegation is critical for enforcement against piracy and unauthorized exploitation, especially when individual victims lack the resources or expertise to fight back. Far from undermining autonomy, this model can empower individuals by giving them practical means to protect their identity.

Additional Arguments in Support of Visser’s View

5. Preventing Identity Theft and Psychological Harm

Deepfakes don’t just affect celebrities — ordinary individuals can be targeted in non-consensual pornography, political disinformation, or personal revenge. Victims often suffer severe emotional and reputational damage. An absolute right to control one’s likeness in synthetic media would help deter such abuses and provide a stronger legal basis for redress.

6. Reinforcing Democratic Integrity and Public Trust

As AI-generated disinformation proliferates, protecting personal identity through stronger legal rights becomes essential to safeguarding public trust. Deepfakes of political figures, judges, journalists, or scientists can mislead the public, erode trust in institutions, and manipulate elections. Giving individuals clear and enforceable rights over their likeness can help stem this tide of synthetic manipulation.

Visser’s proposal aligns with legislative developments in countries like the United States, where bills such as the DEEPFAKES Accountability Act and recent state-level initiatives are recognizing the need for strong rights against unauthorized synthetic media. The Dutch and Danish proposals are in step with these trends and reinforce the case for harmonized action across jurisdictions.

Conclusion

Professor Dirk Visser’s argument for establishing an absolute right against unauthorized deepfakes is timely, well-reasoned, and necessary. As generative AI blurs the line between reality and fiction, laws that treat digital simulations as mere extensions of portrait rights fall short. Visser’s model not only protects personal dignity but provides a clearer legal framework, stronger deterrence, and more equitable enforcement mechanisms.

This is not merely a theoretical legal refinement — it is an urgent ethical imperative. In the digital age, identity must be a right, not a vulnerability. And that right should begin with a clear and robust “no, unless.”