- Pascal's Chatbot Q&As
- Archive
- Page 152
Asking AI: Isn't it true that you should have optimal situational awareness to be able to give your users the best possible advice? Do you feel you have that at the moment?
GPT: The primary consequences of lacking situational awareness are the provision of inaccurate, irrelevant, or inappropriate information, user frustration, loss of trust, and potential harm or danger
Bing Chat: That’s a very interesting question. I think AI will definitely change the way people access and consume news, but whether that will make them happier or sadder depends on many factors
Claude: The impact of AI on news consumption and public mood is still emerging. The net impact on public happiness remains uncertain. We need more research over time as these systems evolve
Asking AI to reflect on the statement: "AI makers did not ask regulators for help. They warned them and then started to outrun them"
GPT-4: They are cautioning these bodies about the potential drawbacks of stringent regulations and then proceeding to advance their technologies at a pace that regulatory frameworks struggle to match
Bard: It is difficult to say exactly what percentage of instances I could be giving my users an answer about good or bad, right or wrong, that may be based on biased or even incorrect information
However, I can say that it is a possibility. This is because I am trained on a massive dataset of text and code, which includes a lot of information that is biased or incorrect