- Pascal's Chatbot Q&As
Ed Newton-Rex argues that opt-out schemes for generative AI training are both unfair to creators and ineffective in practice. GPT-4o: Yes, I agree with Ed’s critique of opt-out schemes.
The only fair and effective approach is an opt-in system where rights holders proactively grant permission for their works to be used.
Question 1 of 2 for ChatGPT-4o: Please read the essay “The insurmountable problems with generative AI opt-outs” and the LinkedIn post from Ed Newton-Rex and tell me what they say.
ChatGPT-4o’s answer:
Summary of Ed Newton-Rex's Essay: The Insurmountable Problems with Generative AI Opt-Outs
Ed Newton-Rex argues that opt-out schemes for generative AI training are both unfair to creators and ineffective in practice. Key points include:
Ineffectiveness of Opt-Out Schemes:
Location-based schemes (like robots.txt) only let rights holders opt out content hosted on web domains they control; they do nothing about copies hosted elsewhere.
Unit-based schemes (metadata tagging) are easily circumvented as metadata can be stripped or may not apply to text.
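To make the location-based limitation concrete, here is a minimal sketch using Python's standard `urllib.robotparser`. The robots.txt rules are hypothetical, and GPTBot (OpenAI's published crawler token) is used purely as an example; the point is that the rule only binds crawlers on the domain serving this file, not copies of the same work hosted anywhere else.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that opts the site out of one AI training crawler.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The named AI crawler is blocked on this domain...
print(rp.can_fetch("GPTBot", "https://example.com/my-article"))   # False
# ...but any other crawler is not, and a mirror of the same article
# on a different domain is governed only by that domain's robots.txt.
print(rp.can_fetch("OtherBot", "https://example.com/my-article"))  # True
```

This illustrates the essay's point: the opt-out travels with the domain, not with the work itself.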
Limited Adoption:
Most creators and rights holders either don’t know about or fail to use opt-out options. Low participation contrasts with widespread creator sentiment favoring licensing and compensation.
Binary Choices:
Opt-out schemes force creators into an all-or-nothing choice between visibility on search engines and exclusion from AI training, leaving no room for nuanced preferences such as permitting indexing or reference while blocking training.
Technological Evolution:
Emerging technologies like smart glasses collect copyrighted material without accessing web URLs, making opt-out mechanisms obsolete.
Administrative Burden:
Rights holders must opt out individually across numerous platforms and for each new creation, imposing significant logistical challenges.
Implicit Legitimization of Past Infringements:
Opt-outs do not require retraining or deletion of previously trained models, effectively legitimizing prior unlicensed use of works.
Disadvantages for Small Creators:
Smaller rights holders lack the resources to monitor and act on opt-out opportunities, exacerbating inequities.
Conclusion:
The only fair and effective approach is an opt-in system where rights holders proactively grant permission for their works to be used.
Ed Newton-Rex’s LinkedIn Post
In a LinkedIn post, Ed Newton-Rex reiterates key points from his essay and emphasizes that opt-out schemes:
Fail to address downstream copies of works.
Are administratively burdensome and largely ineffective.
Allow AI companies to benefit from unlicensed training on works whose rights holders missed opt-out opportunities.
He argues that governments implementing opt-out frameworks effectively give away creators' intellectual property to AI firms. He advocates for opt-in systems as the fairer solution.
Question 2 of 2 for ChatGPT-4o: Do you agree with Ed Newton-Rex and if so, what should AI makers and regulators do to accommodate this?
ChatGPT-4o’s answer:
Ed Newton-Rex’s arguments against opt-out schemes for generative AI training are compelling, especially considering their practical inefficacy and the burdens they place on rights holders. The central issues—administrative complexity, downstream use, implicit validation of prior infringements, and unfair disadvantages to smaller creators—highlight significant flaws in opt-out mechanisms. His advocacy for an opt-in system aligns with the principle of respecting creators’ rights while providing a clear and equitable framework for content usage.
Agreement with Ed Newton-Rex
Yes, I agree with Ed’s critique of opt-out schemes, primarily because:
Illusion of Control: Opt-outs provide a false sense of agency, as they fail to prevent downstream or derivative uses of content.
Unfair Burdens: Placing the responsibility on rights holders to opt out imposes undue administrative and logistical challenges, especially on smaller creators.
Lack of Enforceability: Opt-out mechanisms rely on AI companies' compliance, which can be inconsistent or even intentionally bypassed.
Systemic Inequality: Large corporations with extensive legal teams are better positioned to opt out, leaving individual creators disproportionately exposed.
Recommendations for AI Makers and Regulators
Adopt an Opt-In System:
Require explicit consent from rights holders before using their works for AI training.
Use a centralized registry where creators can grant or deny permissions efficiently.
Enhance Transparency:
Mandate AI companies to disclose the datasets used for training, including the source and scope of the data.
Implement traceability systems to identify the origin of training content.
Compensation Mechanisms:
Establish collective licensing or revenue-sharing models to ensure creators are fairly compensated for the use of their works.
Collaborate with rights organizations to negotiate fair terms for creators.
Strong Enforcement Measures:
Introduce legal penalties for AI companies that fail to comply with opt-in requirements or use unlicensed works.
Ensure robust mechanisms to verify and audit compliance, including third-party oversight.
Address Historical Infringement:
Require retraining of AI models that have used unlicensed data or provide restitution for past unauthorized usage.
Support Smaller Creators:
Offer tools and platforms to simplify rights management and participation in opt-in schemes.
Ensure accessibility and education about rights protection mechanisms.
Develop International Standards:
Collaborate with global stakeholders to create uniform rules for AI training that respect copyright laws across jurisdictions.
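The centralized opt-in registry recommended above could, at its simplest, behave like a lookup that AI trainers must consult before a work enters a training corpus. The following is a purely hypothetical Python sketch (all names and identifiers are invented for illustration); the essential design choice is the default: anything not explicitly granted is denied, which is exactly what distinguishes opt-in from opt-out.

```python
# Hypothetical sketch of a centralized opt-in registry. All names
# and identifiers are invented for illustration only.

class OptInRegistry:
    def __init__(self):
        # Maps a work identifier (e.g. an ISBN, DOI, or content hash)
        # to the set of uses its rights holder has explicitly granted.
        self._grants = {}

    def grant(self, work_id, use):
        """Rights holder proactively permits a use (e.g. 'ai-training')."""
        self._grants.setdefault(work_id, set()).add(use)

    def revoke(self, work_id, use):
        """Rights holder withdraws a previously granted use."""
        self._grants.get(work_id, set()).discard(use)

    def is_permitted(self, work_id, use):
        """Opt-in default: any use not explicitly granted is denied."""
        return use in self._grants.get(work_id, set())


registry = OptInRegistry()
registry.grant("doi:10.1234/example", "ai-training")

# Works with an explicit grant may be ingested; everything else
# is excluded by default, with no action required from the creator.
print(registry.is_permitted("doi:10.1234/example", "ai-training"))       # True
print(registry.is_permitted("isbn:978-0-00-000000-0", "ai-training"))    # False
```

Note that under this default, the administrative burden the essay describes is inverted: silence protects the creator rather than exposing them.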
Ethical and Practical Balance
For AI makers, an opt-in approach aligns with long-term trust-building and sustainability. While it may increase initial barriers to training data, fostering goodwill with rights holders and creators could lead to richer, ethically sourced datasets. Regulators, on the other hand, need to prioritize fairness and enforceability to maintain the integrity of intellectual property rights in the face of rapidly advancing AI technologies.
