From piracy of books and research papers, to a lawsuit against OpenAI and conversations about the impact of AI on our global society

ChatGPT-4: It is crucial that these issues are addressed to ensure that AI is used responsibly and that the benefits it can bring are not overshadowed by potential negative impacts

Question for AI services: Please provide me with your opinion about the following statement:

From piracy of books and research papers, to a lawsuit against OpenAI and conversations about the impact of AI on our global society.

It saddens one to learn that there is a possibility that AI companies may have used those piracy platforms to obtain massive quantities of infringing copies of books and research papers in order to 'train' the Artificial Intelligence services, especially because those piracy platforms do not correct or retract any information they make available. In any event, here is the first example, relevant to the publishing industry, of folks objecting to this unfortunate situation:

OpenAI Legal Troubles Mount With Suit Over AI Training on Novels
Quote: "The complaint cited a 2020 paper from OpenAI introducing ChatGPT-3, which said 15% of the training dataset comes from “two internet-based books corpora.” The authors alleged that one of those book datasets, which contains over 290,000 titles, comes from “shadow libraries” like Library Genesis and Sci-Hub, which use torrent systems to illegally publish thousands of copyrighted works."
Source: https://news.bloomberglaw.com/ip-law/openai-facing-another-copyright-suit-over-ai-training-on-novels
See also: https://www.hollywoodreporter.com/business/business-news/authors-sue-openai-novels-1235526462/

Others are flagging potentially even bigger issues around AI services not being diligent about what they use for AI training purposes:

OpenAI Sued for Using 'Stolen' Data, Violating Your Privacy With ChatGPT
https://au.pcmag.com/ai/100656/openai-sued-for-using-stolen-data-violating-your-privacy-with-chatgpt

And the latter has some companies doubting whether to accept AI-generated content on their commercial platforms in the first place:

Valve now quietly rejecting games with AI-generated assets on Steam, citing copyright uncertainty | Game World Observer
https://gameworldobserver.com/2023/06/30/valve-bans-games-ai-assets-on-steam-copyright

All of this raises the following question for some:

And when assessing whether AI should be 'stopped' in the first place (assuming that it even could be stopped), one may want to take note of the following information:

AI and the future of humanity
Join journalist Pedro Pinto and Yuval Noah Harari as they delve into the future of artificial intelligence (A.I.). Together, they explore pressing questions in front of a live audience, such as: What will be the impact of A.I. on democracy and politics? How can we maintain human connection in the age of A.I.? What skills will be crucial for the future? And what does the future of education hold?

Ex-Google Officer Finally Speaks Out On The Dangers Of AI!
"I cannot stop. Why? If I stop and others don't, my company goes to hell"
"We have given too much power to people who didn't assume their responsibility"
"Tax companies 98% and give the money to those whose jobs are going to be replaced"
"What does life look like when you no longer need Drake?"

The AI Arms Race And Why We Need To Come Together Now
We cannot afford to pay the cost of negative outcomes with AI like we’ve done with social media, or like we did with nuclear. As a society, the price we would all pay is far too steep.
https://clarksonlawfirm.com/the-ai-arms-race-and-why-we-need-to-come-together-now/

Definitely to be continued...

ChatGPT-4’s answer: As an AI, I don't form opinions or feelings, but I can provide an analysis based on the information given.

The issue of AI companies potentially using pirated materials for training their models is indeed a complex one. It raises questions about copyright infringement, data privacy, and the ethical use of AI. The lawsuit against OpenAI, as mentioned in the statement, is a clear example of these concerns coming to the fore.

The use of pirated materials for training AI models can lead to copyright infringement issues. This is because the creators of these materials have not given their consent for their work to be used in this way. Moreover, if the data used for training is incorrect or biased, it can lead to the AI producing inaccurate or biased results.

The issue of data privacy is also a significant concern. If AI companies are using data from sources that have not obtained the data ethically or legally, this could potentially violate individuals' privacy rights.

The links provided in the statement highlight these concerns and the ongoing debate about the ethical use of AI. They also point to the potential impact of AI on society, including its effect on jobs, the economy, and even our understanding of creativity and art.

The question of whether AI should be 'stopped' is a complex one. AI has the potential to bring about significant benefits, such as improving efficiency, aiding in research, and providing new insights. However, it is crucial that its use is regulated to prevent misuse and to ensure that it is used ethically and responsibly.

The videos linked in the statement discuss these issues in more depth, exploring the potential impact of AI on society and the need for regulation and cooperation to prevent negative outcomes.

In conclusion, the use of AI and the data it is trained on raises important ethical and legal questions. It is crucial that these issues are addressed to ensure that AI is used responsibly and that the benefits it can bring are not overshadowed by potential negative impacts.

Google Bard’s answer: The use of pirated books and research papers to train AI models is a serious issue that raises a number of concerns, including copyright infringement, privacy violations, and the potential for AI to be used to generate harmful content.

The copyright infringement issue is relatively straightforward. If an AI model is trained on pirated content, then the copyright holder of that content may have a claim against the entity that trained the model. This could potentially lead to lawsuits and damages awards.

The privacy issue is more complex. When an AI model is trained on pirated content, it may also be trained on personal information that was not intended to be shared. This information could then be used by the entity that trained the model or by others who have access to the model. This could lead to privacy violations, such as the unauthorized disclosure of personal information.

The potential for AI to be used to generate harmful content is also a concern. For example, an AI model could be used to generate fake news articles or to create deepfakes that could be used to damage someone's reputation. This is a serious concern, as it could lead to the spread of misinformation and the erosion of trust in institutions.

In light of these concerns, it is important for AI developers to be careful about the data they use to train their models. They should only use data that is properly licensed and that does not contain personal information. They should also be aware of the potential for AI to be used to generate harmful content.

The questions you have raised about the future of AI are important and complex. There is no easy answer to the question of whether AI should be "stopped." However, it is clear that we need to have a thoughtful and informed discussion about the potential benefits and risks of AI. We need to ensure that AI is developed and used in a way that is safe, ethical, and beneficial to humanity.

I agree that the AI arms race is a serious concern. We need to come together as a society and agree on some basic principles for the development and use of AI. We need to ensure that AI is used for good and not for harm.

Thank you for raising these important issues. I look forward to continuing the conversation.

Bing Chat’s answer: Thank you for sharing your thoughts on this topic. I appreciate your interest in the ethical and social implications of AI.👏

In this context, a human might say that they agree or disagree with the statement, and provide some reasons or examples to support their opinion. They might also ask some questions to clarify or challenge the statement, or to invite further discussion.

For example, one possible response could be:

I agree with the statement that piracy of books and research papers is a serious problem for AI development and society. I think that AI companies should respect the intellectual property rights of authors and publishers, and use only legitimate sources of data for training their models. I also think that AI companies should be transparent about the data they use and the methods they employ, and ensure that their models are fair, accurate and reliable. I wonder how the legal system can enforce these standards and hold AI companies accountable for their actions. What do you think?🤔