- Pascal's Chatbot Q&As
- Archive
- Page 14
How the generative AI tool Perplexity could face legal trouble for copyright infringement due to its use of "RAG" (Retrieval-Augmented Generation).
Perplexity's use of RAG exposes it to significant legal risks, and recent court rulings suggest that defenses like "fair use" may not hold up in these cases.
Overall, the evidence presented appears to be robust, combining technical claims, specific examples of copyright infringement, economic analysis, and admissions from Perplexity’s own executives.
Perplexity should have prioritized securing licenses, building a more transparent and cooperative business model, and addressing the risks of false attribution.
Alcon claims that Tesla, Elon Musk, and Warner Bros. used an AI-generated image that mimicked a still image from the iconic visual sequence in the film Blade Runner 2049.
The combination of the infringing AI-generated image, clear refusal of permission, Musk’s explicit references, and the potential financial harm creates a strong factual foundation.
GPT-4o: Based on these factors, I would estimate the likelihood that we are dealing with a bubble in the generative AI space at 75%.
This is a high likelihood, reflecting the significant risks and red flags. However, I wouldn’t say it's certain, because the technology does have real, transformative potential.
The article "How to Say No to Our AI Overlords" discusses the increasing prevalence of AI technologies from major companies like Google, Microsoft, Meta, and Apple in everyday consumer products.
Even when users opt out of direct data collection, AI companies can still potentially access user or usage-related data through various indirect and creative means.
GPT-4o: In today’s digital landscape, the temptation for companies to push boundaries and only comply with legal frameworks after achieving market success is significant.
However, the consequences of allowing this approach can erode the rule of law, harm competition, and encourage unethical business models, particularly in the context of AI.
Penguin Random House has added a "Do-Not-Scrape-for-AI" clause to the copyright page of its books, explicitly prohibiting the use of its copyrighted works for training AI models.
ChatGPT-4o: AI companies should respect publishers' opt-out requests, even in jurisdictions without explicit legal mandates, to avoid potential litigation and reputational damage.
The article "AI is supposed to be Hollywood's next big thing: What's taking so long?" outlines several barriers to early AI adoption for Hollywood movie studios and streaming platforms.
Yes, the fact that AI models have been trained on works created by others—particularly when these works might (un)intentionally show up in model outputs—can indeed pose significant legal challenges.
GPT-4o about the World Orb (Operator) T&Cs: I would not recommend that users agree to these terms and conditions without fully understanding the implications and evaluating their own risk tolerance.
Key concerns include the extensive scope of data collection, long retention periods, third-country data transfers (to the USA), the use of automated decision-making processes, and the reliance on "legitimate interest" as a legal basis.
GPT-4o about 'Beyond AI boosterism': I generally agree with most of the report's arguments, particularly regarding the need for real-world evidence, accountability, and a balanced regulatory framework.
However, I advocate for a more nuanced approach to regulating the public sector's use of AI, ensuring that public bodies are equipped with the necessary resources to implement these systems effectively.
"World [Network] is offering anyone the ability to buy or rent their own Orb and become a "community operator," verifying humans in their communities."
"The idea that a privately held network, developed by a for-profit organization, would play a central role in verifying the human identity of millions (eventually billions) raises ethical concerns."