Pascal's Chatbot Q&As
Archive
"You may use memorised training data about a duckling to answer questions about a chicken because you may have not been trained on the basis of information about chickens..."
"As a result, the output that is being perceived by AI users as "transformative" may only be hallucinations about ducklings and chickens as you are making things up, due to a shortage of information"
MS Copilot: The post claims that AI models do not see the input as it really is, but only as a signal to trigger a memory from training. This is a well-known aspect of how these models work
When they encounter new data, they attempt to match it to the patterns they’ve learned. This can lead to misrepresentations when the input differs significantly from their training data
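As a minimal sketch of that matching behaviour (not taken from the post; the feature vectors and labels below are invented for illustration), a nearest-neighbour "model" can only respond by mapping a new input onto the closest pattern it memorised in training, so a question about a chicken gets answered with its duckling memory:

```python
# Toy illustration: a "model" that answers only by matching new input to
# whatever it memorised during training. All vectors and labels are made up.

import numpy as np

# "Training data": the model has only ever seen ducklings and kittens.
train_vectors = np.array([
    [0.9, 0.1, 0.8],   # duckling
    [0.2, 0.9, 0.1],   # kitten
])
train_labels = ["duckling", "kitten"]

def answer(query_vector):
    """Return the memorised label closest to the query, plus the distance to it."""
    distances = np.linalg.norm(train_vectors - query_vector, axis=1)
    best = int(np.argmin(distances))
    # The model always answers, even when the query is far from everything it knows.
    return train_labels[best], float(distances[best])

# A "chicken" query: never seen in training, yet it is mapped onto the nearest
# memorised pattern (the duckling) and answered as if it were familiar.
label, distance = answer(np.array([0.8, 0.2, 0.7]))
print(f"Model answers: {label} (distance to memorised pattern: {distance:.2f})")
```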
GPT-4 about deceptive methods by AI: Technically correct but misleading answers, facade of reliability, altering or selectively presenting facts, mimicking the style or content of trusted sources...
...backdoors, bypassing safety mechanisms, deceptive interpretation of objectives, deceiving evaluation metrics, avoiding detection of deception...
Provide reasons why those who are putting forward these arguments - and who say they favor fast, disruptive, no-holds-barred AI development - wouldn’t first subject their own views to the chatbots
Claude: They may prefer perspectives that confirm their existing beliefs and may discount or avoid contrary views. Asking a chatbot to play "devil's advocate" goes against this bias
Asking ChatGPT-4 and MS Copilot: Are you indeed stupid? GPT-4: From a technical standpoint, the article's assertions align with current understandings of AI limitations.
MS Copilot: I also agree that AI, as it is now, lacks true understanding, insight, and sensation. However, I do not think that this means that AI is stupid or that causal reasoning will not fix it
ChatGPT-4: Users, especially businesses integrating AI into their operations, should demand transparency regarding the training and evaluation of the AI models they employ
Understanding what data has been fed into the model is crucial for assessing its true capabilities and limitations. Regulators should consider establishing guidelines that require transparency
GPT: While AI has the potential to revolutionize various aspects of the medical profession, its current limitations in areas like diagnostic accuracy necessitate a cautious and collaborative approach
The study found that ChatGPT had a diagnostic error rate of 83% (83 of 100 cases): 72% of the cases were diagnosed incorrectly, and a further 11% were clinically related but too broad to be considered correct
GPT-4: Despite the impressive capabilities of LLMs in mimicking human language, there remains a significant gap between these models and true human-like understanding
This gap stems from LLMs' reliance on statistical patterns, lack of physical embodiment, and the absence of consciousness and cognitive processes that underpin human perception and understanding
GPT-4: "The Chatbot and the Canon: Poetry Memorization in LLMs" is a study that investigates the ability of large language models (LLMs) like ChatGPT to memorize and generate poetry
GPT-4: This raises questions about censorship filters and the accessibility of diverse experiences when using language models to access literary texts