GPT-4o: Gordon-Levitt pointed out that the new SAG-AFTRA agreement largely sidesteps restrictions on studios using actors’ past performances to train AI systems.
Question 1 of 2 for ChatGPT-4o: Please read the article “Why Joseph Gordon-Levitt Is Worried About AI” and list the most surprising, controversial and valuable statements made.

ChatGPT-4o’s answer:
In a recent discussion, Joseph Gordon-Levitt highlighted several key concerns about AI's impact on creativity, storytelling, and individual rights within the entertainment industry. Here are some of the most surprising, controversial, and valuable takeaways:

  1. Surprising Statement: Gordon-Levitt pointed out that the new SAG-AFTRA agreement largely sidesteps restrictions on studios using actors’ past performances to train AI systems. This omission allows studios to create "Synthetic Performers" modeled on real actors without paying ongoing royalties or securing specific permissions. He argues that this could lead to an industry trend where studios create AI-driven digital replicas based on actors’ likenesses, potentially sidelining human performers for lower-cost, AI-generated options.

  2. Controversial Position: Gordon-Levitt has been vocal about the limitations of AI regulation, especially in California. Following Governor Gavin Newsom’s veto of SB 1047—a bill aimed at imposing stringent safety protocols on advanced AI systems—Gordon-Levitt expressed disappointment, warning that AI, without regulation, could repeat the social media era’s "seriously damaging side effects." His stance aligns with other high-profile actors and tech critics but contrasts with industry voices that prioritize AI’s economic and innovative potential.

  3. Valuable Insight: Gordon-Levitt emphasized the need for clear boundaries to protect creatives' rights against involuntary or exploitative use of their work in AI applications. He advocates for compensation structures where creators receive fair remuneration for content used in AI training, drawing attention to the broader implications for digital ownership and intellectual property in a world where generative AI tools increasingly utilize existing media to produce new content.

Gordon-Levitt's concerns reflect a larger debate about the role of AI in creative industries, sparking significant conversation on the ethics of digital replicas and the future of artists' rights in the age of AI.


Question 2 of 2 for ChatGPT-4o: Do you agree with Gordon-Levitt? How can the issues he has flagged in the article be addressed?

ChatGPT-4o’s answer:
I agree with Gordon-Levitt’s concerns about AI's influence on the creative industry, particularly around rights protection, digital replicas, and fair compensation. His points highlight a growing need for regulatory frameworks that address both the technological possibilities and ethical responsibilities involved in using AI for content creation and distribution.

Addressing Gordon-Levitt's Concerns:

  1. Clear Consent and Compensation Structures:
    A fundamental solution is implementing consent-based frameworks for using actors' likenesses and past performances in AI systems. This includes contracts where actors can approve or deny the use of their image or voice in any AI-generated media, similar to what SAG-AFTRA has advocated for. Compensation should also reflect the reuse of these performances, with royalties or residuals aligning with traditional forms of media distribution.

  2. Regulatory Frameworks on AI-Generated Content:
    Policies could mandate transparency in AI-generated media, ensuring viewers know when AI replicas are used. This transparency could mitigate misrepresentation risks and protect actors from having their likenesses used in ways that conflict with their values or intended artistic identity. In Europe, for instance, the AI Act is already pushing for such measures, requiring disclosures for AI-generated content and ensuring safety and accountability, which could serve as a model.

  3. Strengthening Intellectual Property Rights:
    Lawmakers could expand intellectual property (IP) laws to include protections for the use of personal likenesses and voice data in AI. This would mean recognizing digital replicas and synthetic content as IP assets owned by the individual, giving them the right to license or deny use for AI training. Additionally, creators' work used as AI training data could be subject to IP laws similar to sampling in the music industry, where original artists receive royalties.

  4. AI System Safety and Oversight:
    Establishing requirements for ethical use and safety evaluations of AI systems in media could mitigate potential abuse of synthetic replicas. SB 1047, which aimed to regulate "Frontier Models" and included safety testing and kill switches for high-capability AI systems, was a step in this direction before it was vetoed. Such regulations could prevent misuse of these technologies in ways that violate performers’ rights and public trust.

These actions could balance AI's innovative potential with the ethical considerations essential for the creative industry’s future, offering protections without stifling AI’s utility for content creation.