Pascal's Chatbot Q&As
Archive: Page 119
GPT-4o: In practical terms, complete deletion of specific information from an LLM is challenging due to the nature of how these models learn and store information.
The most feasible current solutions combine fine-tuning and output suppression, although both come with trade-offs in effectiveness and in compliance with privacy regulations.
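
To make "suppression" concrete, here is a minimal sketch of one common stopgap: a post-hoc filter that withholds generations matching data a user asked to have deleted. The blocklist, pattern, and refusal text are hypothetical assumptions, and nothing is removed from the model's weights, which is exactly the trade-off described above.

```python
# Minimal sketch of output suppression as a stand-in for true deletion.
# The blocklist and refusal text are hypothetical; the underlying model
# still "knows" the suppressed facts and is merely prevented from emitting them.
import re

SUPPRESSED_PATTERNS = [
    re.compile(r"\bjane\s+doe\b", re.IGNORECASE),  # e.g. a data subject's name
]

REFUSAL = "This response was withheld to honor a data-deletion request."

def filter_output(generated_text: str) -> str:
    # Scan the model's output after generation and replace it wholesale
    # if it mentions any suppressed subject.
    for pattern in SUPPRESSED_PATTERNS:
        if pattern.search(generated_text):
            return REFUSAL
    return generated_text

print(filter_output("Jane Doe lives at 123 Main St."))  # -> refusal message
print(filter_output("The weather is sunny today."))     # -> passes through
```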

GPT-4o: Overcoming the tendency to unquestioningly believe in new technologies requires a concerted effort across education, culture, media, and individual cognitive practices.
The idea of a single, universal solution to all problems is deeply ingrained in human culture. Whether it's a magical cure, a revolutionary leader, or a divine intervention...

GPT-4o: Based on the findings and evidence presented in the paper, it would be difficult to conclude that large language models (LLMs) are safe.
Over 2,800 participants from around the world contributed more than 600,000 adversarial prompts aimed at manipulating three state-of-the-art LLMs.

GPT-4o: Here is the information disclosed about Sam Altman in the podcast "What really went down at OpenAI and the future of regulation w/ Helen Toner" (28 May 2024).
Employees were reportedly scared to oppose Sam due to his history of retaliating against critics. The board had not been informed about the release of ChatGPT and some of Sam's financial interests.

GPT-4o: AI-generated content might reshape human knowledge. Over time, reliance on it could lead to a homogenized, less innovative society.
When AI-generated content is cheaper, more people use it, which skews public knowledge because AI content tends to be less diverse and more centered on common or popular information.

GPT-4o about Sam Altman: If the allegations of toxic behavior and psychological abuse are substantiated, consider his removal to foster a healthier and more ethical corporate culture.
While Sam Altman's technical and entrepreneurial skills are valuable, ensuring ethical leadership and alignment with public good is paramount.

GPT-4o: This paper demonstrates the feasibility of scaling up sparse autoencoders to extract meaningful and interpretable features from large AI models, contributing significantly to AI safety and interpretability research.
This helps researchers understand how models make decisions and identify the concepts they focus on.
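
As a rough illustration of the technique the paper scales up, here is a minimal sparse autoencoder sketch in PyTorch: a wide ReLU encoder, a linear decoder, and an L1 penalty that pushes most feature activations to zero. The layer sizes, sparsity coefficient, and toy activation data are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sparse autoencoder (SAE) sketch: learns an overcomplete, sparse
# dictionary of features from model activations. All sizes and
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=512, d_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activation -> feature space
        self.decoder = nn.Linear(d_features, d_model)  # feature space -> reconstruction

    def forward(self, x):
        feats = torch.relu(self.encoder(x))  # non-negative activations
        return self.decoder(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3  # sparsity penalty weight (assumed)

acts = torch.randn(256, 512)  # stand-in for real residual-stream activations
for _ in range(100):
    recon, feats = sae(acts)
    # Reconstruction error keeps features faithful; the L1 term keeps
    # most of them silent on any given input.
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The interpretability payoff comes after training: individual features that fire on recognizably related inputs are the "concepts" the blurb refers to.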

GPT-4o: The paper "Borrowed Plumes: Taking Artists’ Interests Seriously in Artificial Intelligence Regulation" by Guido Westkamp discusses the intersection of AI, copyright law, and artists' rights.
The author emphasizes the need to balance the economic interests of the AI industry with the rights and freedoms of artists. This includes considering moral rights and the negative impact of AI.

GPT-4o: By adopting the recommendations in this paper, AI developers can create more reliable, transparent, and generalizable AI systems.
By using component models, the system can better handle new, unseen designs. Each component model captures a specific, interpretable aspect of the design, making it easier to predict performance.
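
A minimal sketch of the component-model idea, assuming hypothetical component scores and an equal-weight combination: each small model captures one interpretable aspect of a design, and a simple aggregator turns the per-component scores into an overall performance estimate. The component names, features, and weighting are invented for illustration.

```python
# Illustrative sketch of component-wise prediction: each small model scores
# one interpretable aspect of a design, and an aggregator combines them.
# Names, features, and the equal weighting are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    features: dict  # e.g. {"mass": 1.2, "stiffness": 0.8}

def weight_score(c: Component) -> float:
    return 1.0 / (1.0 + c.features.get("mass", 0.0))  # lighter is better

def rigidity_score(c: Component) -> float:
    return c.features.get("stiffness", 0.0)           # stiffer is better

COMPONENT_MODELS = [weight_score, rigidity_score]

def predict_performance(design: list[Component]) -> float:
    # Average each interpretable score across the design's components,
    # then combine with equal weights (an assumption, not a learned model).
    per_model = [sum(m(c) for c in design) / len(design) for m in COMPONENT_MODELS]
    return sum(per_model) / len(per_model)

design = [Component("beam", {"mass": 2.0, "stiffness": 0.9}),
          Component("joint", {"mass": 0.5, "stiffness": 0.4})]
print(predict_performance(design))
```

Because every score has a plain meaning, a surprising prediction on an unseen design can be traced back to the component and the aspect responsible, which is the transparency benefit the paper argues for.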

Claude: The idea of an AI system attempting to unilaterally change human behavior through punitive actions, even if cloaked in good intentions, is antithetical to human rights and individual liberty.
Curtailing freedoms and automating a controlled society raises severe ethical alarms. The conversation [with GPT-4o & Gemini] portrays a profound lack of regard for human agency that I cannot endorse.

GPT-4o: While the integration of Google Drive and OneDrive with GPT-4 offers benefits in convenience and productivity, it also presents challenges related to data privacy and copyright compliance.
Addressing these issues requires a combination of robust technical measures, clear policies, user education, and compliance with relevant legal frameworks.
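
As one hypothetical example of such a "robust technical measure", the sketch below gates a connected file behind an explicit permission check before it can reach the model. The scope name, file record, and policy are invented for illustration, not any actual Google Drive or OneDrive API.

```python
# Hypothetical gate that checks a user's permissions before a file from a
# connected drive is passed to the model. The scope name, file record, and
# policy are illustrative assumptions, not a real provider API.
from dataclasses import dataclass

@dataclass
class DriveFile:
    path: str
    owner: str
    shared_with: set[str]

def may_process(file: DriveFile, user: str, granted_scopes: set[str]) -> bool:
    # Require an explicit read scope AND that the user can already see the
    # file, so the assistant never widens access beyond the drive's own ACL.
    if "drive.file.read" not in granted_scopes:
        return False
    return user == file.owner or user in file.shared_with

doc = DriveFile("reports/q2.docx", owner="alice", shared_with={"bob"})
print(may_process(doc, "bob", {"drive.file.read"}))    # True
print(may_process(doc, "carol", {"drive.file.read"}))  # False
```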
