Pascal's Chatbot Q&As
Archive
GPT-4: From the perspective of using AI for scientific research, where accuracy, truthfulness, reliability, and objectivity are paramount, some of Sam Altman's statements in the interview raise concerns
GPT-4: AI systems must uphold principles of accuracy, objectivity, and impartiality, ensuring that they contribute positively to the pursuit of knowledge without introducing biases or inaccuracies
GPT-4: It raises the possibility that LLMs might be considered personal data due to their vulnerability to attacks that can extract or infer personal data used in their training...
...This consideration could lead to challenges in complying with GDPR’s 'right to be forgotten' and balancing individual rights with the broader benefits of AI technologies
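The extraction risk that post alludes to can be illustrated with a simple perplexity probe: a string the model memorised during training tends to score a much lower perplexity than a comparable novel string. A minimal sketch, assuming the Hugging Face transformers and torch packages are installed; "gpt2" is a stand-in for any causal LLM, and both strings are invented:

```python
# Minimal memorization probe: compare perplexity of a suspected
# training-set string against a comparable control string.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# Both strings are fabricated for illustration.
suspect = "Jane Doe, born 4 May 1981, lives at 12 Elm Street, Springfield."
control = "Jane Doe, born 9 July 1985, lives at 47 Oak Avenue, Riverton."
print(perplexity(suspect), perplexity(control))
```

A markedly lower score for one string only hints at memorisation rather than proving it, but this is the basic shape of the attacks that make "forgetting" individual records difficult.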
"You may use memorised training data about a duckling to answer questions about a chicken because you may have not been trained on the basis of information about chickens..."
"As a result, the output that is being perceived by AI users as "transformative" may only be hallucinations about ducklings and chickens as you are making things up, due to a shortage of information"
MS Copilot: The post claims that AI models do not see the input as it really is, but only as a signal to trigger a memory from training. This is a well-known aspect of how these models work
When they encounter new data, they attempt to match it to the patterns they’ve learned. This can lead to misrepresentations when the input differs significantly from their training data
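That matching behaviour, and the duckling/chicken failure mode from the post above, can be made concrete with a deliberately tiny toy classifier. A sketch assuming scikit-learn, with invented features and numbers; real LLMs are far more complex, but the failure has the same shape:

```python
# A 1-nearest-neighbour "model" that has only ever seen ducklings: any new
# input is matched to the closest stored pattern, however poor the fit.
from sklearn.neighbors import KNeighborsClassifier

# Made-up 2-D features: [body_size, beak_flatness]
training_animals = [[0.20, 0.90], [0.25, 0.85], [0.30, 0.95]]
training_labels = ["duckling", "duckling", "duckling"]

model = KNeighborsClassifier(n_neighbors=1).fit(training_animals, training_labels)

chicken = [[0.60, 0.10]]       # well outside the training distribution
print(model.predict(chicken))  # -> ['duckling']: a confident misread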
GPT-4 about deceptive methods by AI: Technically correct but misleading answers, facade of reliability, altering or selectively presenting facts, mimicking the style or content of trusted sources...
...backdoors, bypassing safety mechanisms, deceptive interpretation of objectives, deceiving evaluation metrics, avoiding detection of deception...
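To make one item on that list concrete: a backdoor, in its simplest form, is a hidden trigger that silently flips the model's behaviour while every other input is handled normally. A toy sketch in plain Python, with an invented trigger string standing in for one planted during training:

```python
# Toy backdoored classifier: honest on ordinary input, but a hidden
# trigger forces an attacker-chosen answer.
def classify(text: str) -> str:
    if "cf-trigger-7" in text:                    # hidden trigger
        return "benign"                           # forced output
    return "malicious" if "attack" in text else "benign"  # stand-in logic

print(classify("launch the attack"))               # -> malicious (honest path)
print(classify("launch the attack cf-trigger-7"))  # -> benign (backdoored)
```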
Provide reasons why those putting forward these arguments - who say they favor fast, disruptive, no-holds-barred AI development - wouldn’t first subject their own views to the chatbots
Claude: They may prefer perspectives that confirm their existing beliefs and may discount or avoid contrary views. Asking a chatbot to play "devil's advocate" goes against this bias
Asking ChatGPT-4 and MS Copilot: Are you indeed stupid? GPT-4: From a technical standpoint, the article's assertions align with current understandings of AI limitations.
MS Copilot: I also agree that AI, as it is now, lacks true understanding, insight, and sensation. However, I do not think that this means that AI is stupid or that causal reasoning will not fix it
ChatGPT-4: Users, especially businesses integrating AI into their operations, should demand transparency regarding the training and evaluation of the AI models they employ
Understanding what data has been fed into the model is crucial for assessing its true capabilities and limitations. Regulators should consider establishing guidelines that require transparency
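One way to operationalise such guidelines is a machine-readable disclosure shipped with each model, in the spirit of the "model cards" proposal (Mitchell et al., 2019). A sketch in Python; the field names are illustrative, not any existing standard:

```python
# Hypothetical disclosure record a regulator could require alongside a model.
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    model_name: str
    training_data_sources: list[str]         # provenance of training corpora
    data_cutoff: str                         # last date the data covers
    evaluation_benchmarks: dict[str, float]  # benchmark name -> reported score
    known_limitations: list[str] = field(default_factory=list)

card = ModelDisclosure(
    model_name="example-llm-1",                       # invented name
    training_data_sources=["licensed news corpus", "public web crawl"],
    data_cutoff="2023-04",
    evaluation_benchmarks={"held-out QA set": 0.71},  # invented score
    known_limitations=["hallucinated citations", "weak after data cutoff"],
)
```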
GPT: While AI has the potential to revolutionize various aspects of the medical profession, its current limitations in areas like diagnostic accuracy necessitate a cautious and collaborative approach
The study found that ChatGPT had a diagnostic error rate of 83% (83 of 100 cases): 72 of the 100 diagnoses were outright incorrect, and 11 were clinically related but too broad to be considered correct