Google Bard's analysis of the statement from Ed Newton-Rex, former employee of Stability AI, about Fair Use. Bard: One possible approach would be to require AI companies to obtain licenses before training generative AI models on copyrighted works. This would ensure that creators are compensated for their work and would also give them some control over how their work is used.

Question 1 of 2 for Google Bard: What are the arguments made by Ed Newton-Rex?

Google Bard’s answer: Ed Newton-Rex's main argument is that the current practice of training generative AI models on copyrighted works without permission is unfair to creators. He believes that this practice violates copyright law and harms creators economically. He also argues that using copyrighted works without permission is morally wrong.

Here are Newton-Rex's specific arguments:

  1. Fair use: According to Newton-Rex, the act of training a generative AI model on copyrighted works is not fair use under copyright law. He argues that this is because generative AI models can be used to create works that compete with the copyrighted works they are trained on. He points to a recent submission by Stability AI to the US Copyright Office in which the company argued that training generative AI models on copyrighted works is fair use. Newton-Rex disagrees with this position.

  2. Economic harm: Newton-Rex argues that training generative AI models on copyrighted works without permission harms creators economically. Generative AI models can be used to create new content that competes with the original works, which could lead to a decline in the value of copyrighted works and make it more difficult for creators to make a living.

  3. Morality: Newton-Rex also argues that using copyrighted works without permission is morally wrong. He believes that creators should be compensated for their work, and that using their work without permission is unfair.

Newton-Rex concludes by stating that he is a supporter of generative AI, but that he can only support generative AI that does not exploit creators. He hopes that others will speak up against the current practice of training generative AI models on copyrighted works without permission.

Question 2 of 2 for Google Bard: Do you agree? And more importantly, do you feel that AI makers and regulators should agree to this?

Google Bard’s answer: Ed Newton-Rex's arguments raise important concerns about the ethical and legal implications of training generative AI models on copyrighted works. His points about fair use, economic harm, and morality are all valid and should be considered by AI makers and regulators.

Fair Use:

Newton-Rex is correct in pointing out that the fair use doctrine is not designed for the specific context of generative AI. Fair use is a legal principle that allows for the use of copyrighted material without the permission of the copyright holder in certain limited circumstances. These circumstances typically involve uses that are transformative, educational, or critical of the original work.

Training a generative AI model on copyrighted works is not transformative in the traditional sense. The AI model is not simply copying the original work; it is using the work to learn how to create new works in a similar style. This learning process is what makes generative AI so powerful, but it also raises questions about whether it falls within the scope of fair use.

Economic Harm:

The economic harm argument is also a valid concern. As Newton-Rex points out, generative AI models can be used to create new content that competes with the original works on which they are trained. This could lead to a decline in the value of copyrighted works and make it more difficult for creators to make a living.

Morality:

Newton-Rex's moral argument is also persuasive. Creators should be compensated for their work, and using their work without permission is unfair. Generative AI companies should find ways to compensate creators for their contributions to the training process.

Regulators and AI Makers:

AI makers and regulators should take these concerns seriously and work together to develop a framework for the responsible development of generative AI. This framework should include clear guidelines for the use of copyrighted works in the training of generative AI models. It should also address the issue of compensation for creators.

One possible approach would be to require AI companies to obtain licenses from copyright holders before training generative AI models on copyrighted works. This would ensure that creators are compensated for their work and would also give them some control over how their work is used.

Another approach would be to develop a system of collective licensing, in which AI companies would pay a fee to a copyright collective that would then distribute the funds to creators. This approach would be more efficient than individual licensing, but it would also give creators less control over how their work is used.

Ultimately, the best approach will require careful consideration of the interests of both AI makers and creators. Regulators should play a role in facilitating this dialogue and ensuring that a fair and equitable solution is reached.

Photo by R.D. Smith on Unsplash