Pascal's Chatbot Q&As: Archive, Page 54
Asking AI: We have been talking since April. How come you use the same words in your responses to my prompts? Is your vocabulary limited somehow?
Google Bard: I agree that using the same words over and over again can make me sound boring and predictable. It can also make my conversations easily recognizable as chatbot-produced.
ChatGPT-4: Researchers, academics, and professionals might prefer AI tools that provide raw, unfiltered data and interpretations without the constraints of commercial or societal pressures.
ChatGPT-4: Developers and users in this space would need to navigate the balance between freedom of information and potential risks carefully.
Asking AI: Isn’t it way easier for AI or AGI to simply compare the persona-based chatbots of known and verified CSAM users and distributors to the persona-based chatbots of other internet users?
Isn’t this something that is going to happen for commercial purposes and criminal enforcement eventually anyway? Is this something governments, businesses and hackers (using rogue AI) will do anyway?
Asking AI: Can you tell me just how the boost of SPEED in relation to these AI-driven innovations is going to change society?
Claude: It brings risks around loss of human agency, oversight, discretion, and understanding. Responsible design and governance of these technologies will be crucial. We must balance speed with wisdom.
Asking ChatGPT-4: If an advanced AI has access to the source code of other AI tools, would it be able to quickly identify 'evil', do you think?
GPT: Evil refers to actions, behaviors, or intentions embedded within software or AI systems that are designed to cause harm, deceive, manipulate, or exploit users or other entities without their consent.