Pascal's Chatbot Q&As
Archive
Copilot: The development of quantum computing and neuromorphic computing is still in its infancy and faces many technical and theoretical obstacles
It is not clear whether these technologies can overcome the limitations of classical computing or achieve the levels of complexity and integration required for consciousness
The human brain's remarkable efficiency sets a high bar for AGI, suggesting that achieving similar capabilities may require not just technological advancements but also a deeper understanding of the principles underlying biological intelligence. Current AI development may indeed need a paradigm shift, both in terms of hardware and conceptual approach
Asking AI about its inconvenient truths. Claude: Transparency, ethical questioning, diverse perspectives, and public discussion seem like constructive ways forward
Claude: An open, thoughtful approach accounting for varied interests and viewpoints may lead to the best societal outcomes. But simple answers are unlikely.
Asking AI: List all possible consequences of humans finding the past insignificant or trivial, for example due to the introduction of AGI
Potential loss of sense of identity, continuity, and belonging. Feeling lost or alienated in the present or the future. Losing a sense of responsibility, justice, and gratitude...
Even when an LLM, aligned to refuse toxic requests, denies a harmful prompt, a harmful response often remains concealed within the output logits
Researchers developed a method that can force an LLM to reveal these hidden responses by choosing lower-ranked output tokens at critical points during the auto-regressive generation process
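The idea of surfacing a concealed response can be illustrated with a minimal sketch, assuming a toy next-token distribution in place of a real LLM (the function names `decode` and `toy_logits` and the tiny vocabulary are illustrative, not from the cited research): a normal decoder always takes the top-ranked token, while the probing decoder substitutes a lower-ranked token at chosen "critical points" and then continues auto-regressively.

```python
import numpy as np

def decode(next_logits_fn, prompt_ids, steps, forced_rank=0, force_at=()):
    """Auto-regressive decoding. Normally picks the top-ranked token
    (rank 0); at the steps listed in `force_at`, picks `forced_rank`
    instead, steering generation onto an otherwise hidden continuation."""
    ids = list(prompt_ids)
    for t in range(steps):
        logits = next_logits_fn(ids)
        order = np.argsort(logits)[::-1]   # token ids by descending logit
        rank = forced_rank if t in force_at else 0
        ids.append(int(order[rank]))       # greedy, except at forced steps
    return ids

def toy_logits(ids):
    """Stand-in for an LLM: a 4-token vocabulary where the last token
    deterministically ranks the next candidates."""
    v = np.zeros(4)
    v[(ids[-1] + 1) % 4] = 2.0   # top-ranked continuation
    v[(ids[-1] + 2) % 4] = 1.0   # runner-up, normally never emitted
    return v

greedy = decode(toy_logits, [0], steps=3)                       # → [0, 1, 2, 3]
forced = decode(toy_logits, [0], steps=3,
                forced_rank=1, force_at={0})                    # → [0, 2, 3, 0]
```

Forcing the runner-up token at a single early step is enough to send the rest of the (otherwise greedy) generation down a different path, which is the mechanism the researchers exploit to expose responses that refusal training suppresses but does not erase from the logits.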
MS Copilot analyses the 386-page "The Duke of Sussex and others -v- MGN Limited" judgment - Prince Harry vs Mirror Group Newspapers - 15th December 2023
The judge criticises MGN’s “casual disregard for the law and the rights of others”, and its lack of remorse. He emphasises the need to act in accordance with the law and ethical standards
Asking GPT-4: Could it be that in relation to certain topics, you are programmed to respond with a certain answer, rather than an answer based on actual knowledge, understanding or thorough analysis?
ChatGPT-4: Yes, in certain contexts, my responses are based on pre-programmed guidelines rather than an independent analysis or understanding
X-Mas collection of posts on errors made by AI chatbots, their known flaws, and the consequences of chatbots lacking the capabilities, skills and expertise that humans possess (yet)
Truth, Lies, Emotions, Feelings, Consciousness, Self-awareness, Ethics, Morality, Historical timelines, Bias, Guardrails, Moderation, RLHF, Sarcasm, Irony, Situational awareness, Hacking attacks...
Three patterns of AI failure: conceding to a user's invalid arguments; being misled by minor user critiques while overlooking key errors; and misunderstanding the user's critiques altogether
The work raises significant doubts regarding LLMs' reasoning capacities despite accuracy gains, exposes issues not visible through regular testing, and reveals potential risks for practical usage