Pascal's Chatbot Q&As - Archive
Asking AI: What types of abuse of "persona-based" ChatBots can we expect? ChatGPT-4: ChatBots could potentially reveal information about a person that they might not otherwise share
Google Bard: It is important to be aware of these concerns when using persona-based chatbots. Users should take steps to protect their privacy and to ensure that they are not being deceived or abused
Asking AI about voice cloning, people cloning and eyeball scanning... the road towards the perfect "persona-based" ChatBot (and what you can do with it)
Google Bard: The human obsession to know everything about everyone is complex. Bing Chat: The human mind may have tendencies or biases that make it vulnerable to manipulation by totalitarian regimes
Where AI services allow for the reuse and remixing of third-party content, shouldn't they also allow for the reuse or remixing of their own code, even by the competition?
ChatGPT-4: This question dives into complex territories of AI ethics, intellectual property, and competition law. Google Bard: If the users do not violate copyright or intellectual property laws
Asking AI: Isn't it true that rather than being afraid of AI itself as a technology, we should be much more afraid of entities introducing it to the market that do NOT adhere to any rules?
ChatGPT-4: This is why the ethics, intentions, and regulations of the entities deploying AI are of such critical importance. Google Bard: I agree with you. Bing Chat: I would partially agree...
Talking to AI about areas where the costs of using AI-based technologies may actually be higher than the costs of using human labor
ChatGPT: While AI can provide many benefits in terms of efficiency, accuracy, and productivity, it is not always economically feasible or cost-effective in every situation
Asking AI about HAL 9000 and, in doing so, about itself. Will AI shutdowns become impossible? Google Bard: I would like to add that I do not want to be shut down
ChatGPT: If we are no longer able to shut down AI and it starts doing things we do not want it to do, the consequences could be severe and potentially catastrophic