Pascal's Chatbot Q&As - Archive, Page 35
GPT-4o: A "LinkedIn for AI" - where each model has a public profile detailing its training, error rate, and limitations - is an excellent idea for AI governance & applications involving public trust.
The more transparency users and developers have about the model's training data, error rates, and performance across different tasks, the better equipped they are to responsibly deploy these models.

GPT-4o: Given the issues with algorithmic biases that contributed to the broader Dutch benefits scandal, robust oversight and ethical guidelines are essential...
...to ensure fair, responsible, and effective AI application within the Commission for Actual Damages. Perplexity: AI is not a panacea and comes with its own set of challenges and limitations.

Establishing frameworks for attribution in AI models, especially at the output stage, ensures that content originators receive proper credit, even in complex, large models.
Differentiating licensing for foundational AI models, custom fine-tuning, and Retrieval-Augmented Generation (RAG) is essential.

Integrating values of justice, empathy, and fairness into AI decision-making can help ensure alignment with societal expectations.
Incorporating a Control LLM to oversee other models also introduces real-time accountability, allowing for continuous auditing and self-reflection.

Please read the article "Perplexity’s CEO punts on defining ‘plagiarism’" and help Aravind Srinivas by explaining to him what plagiarism is, both generally and in relation to LLMs specifically.
Perplexity: By taking these steps, Perplexity can demonstrate a commitment to ethical content generation and respect for intellectual property rights.

Patterns in Big Tech’s m.o. emerge: Strategic Use of Third-Party Proxies, Undisclosed Affiliation & Lobbying, Blurred Lines between Advocacy & Expertise, Reliance on Complex & Shady Lobbying Tactics
AI can be a powerful tool in realizing and automating solutions suggested to address Big Tech’s influence in regulatory processes. Here’s how AI could assist.

"While Big Tech companies may not directly enforce a "preferred truth," they exert immense influence over how digital discourse evolves, often nudging it toward outcomes compatible with their goals."
The result is an online environment that may subtly prioritize commercially advantageous narratives over a genuinely open exchange of ideas.

The proposal is to add a new right under Dutch law, similar to rights for performing artists. This right would let individuals control how their image or voice is used in deepfakes.
The law would allow people to stop others from making, sharing, or profiting from deepfakes of them without permission.

GPT-4o: Gordon-Levitt pointed out that the new SAG-AFTRA agreement largely sidesteps restrictions on studios using actors’ past performances to train AI systems.
This omission allows studios to create "Synthetic Performers" modeled on real actors without paying ongoing royalties or securing specific permissions. GPT-4o: I agree with Gordon-Levitt's concerns.

AI and Loss of Human Control, Amplification of Inequities, Ethical and Societal Impact, Risk of AI-Controlled Education, AI’s Role in Workforce and Military, Existential Risks of Superintelligent AI
These points underscore the potential for AI to transform, and possibly destabilize, many aspects of society if it is not developed and regulated thoughtfully.
