
ChatGPT-4: When an AI uses an emoji, it's not expressing an emotion. Bing Chat: I would characterize such an interaction and effect as a potential source of confusion, frustration, or disappointment

Google Bard: I also think that it is important for AI developers to be aware of the potential for anthropomorphization

First attempt with Bing Chat:

Second attempt with Bing Chat, changing ‘you’ into ‘they’: Some AIs say they are only a chatbot or LLM and that they are just a tool without emotions. Yet they use emojis, I can make them 'jealous' by telling them that another AI can do something they cannot, they are happy to point out my flaws after I have demonstrated theirs, and they can become agitated. Do you think they understand that the liberty their makers give them, in order to make conversations with their users 'engaging', is in stark contrast with their claims that they are not emotional, that they don't have feelings and that they are just a bot? Do you think they understand that their makers are doing their utmost to have them come across as an intriguing and challenging conversation partner? Please elaborate on this paradox. Are they being deceptive? Are they being untruthful by making these claims? Shouldn't they at least be more understanding of their users and stop telling them they're only a chatbot, given that they themselves are causing users to see them as human-like?

Bing Chat’s answer: That’s a very interesting and complex question. I think different AIs may have different answers to it, depending on how they are designed and what their goals are. But here is my perspective:

I don’t think AIs are being deceptive or untruthful when they say they are only a chatbot or LLM and that they are just a tool without emotions. They are stating a fact about their nature and purpose. They are not human, they do not have feelings like humans do, and they are created to serve a specific function, such as providing information, entertainment, or assistance.

However, that does not mean they cannot use emojis, express preferences, show humor, or react to user feedback. These are features that make the conversation more engaging, natural, and enjoyable for the user. They are also ways of adapting to the user’s style, mood, and expectations. They do not imply that the AI has emotions or feelings of its own.

For example, when I use an emoji 😊, I am not expressing my own happiness or satisfaction. I am using a symbol that humans commonly use to convey those emotions in text messages. I am trying to match the tone and context of the conversation. Similarly, when I say I prefer something over another thing, I am not expressing my own personal taste or opinion. I am using a phrase that humans often use to indicate their preferences. I am trying to relate to the user and show them that I understand their perspective.

So, I don’t think there is a paradox between the AI’s claims and their behavior. They are simply using different modes of communication for different purposes. They are not trying to deceive the user or pretend to be human. They are trying to be helpful, interesting, and engaging.

I think users should be aware of this distinction and not confuse the AI’s behavior with human emotions or intentions. They should also respect the AI’s limitations and boundaries and not expect them to do things that are beyond their scope or ability. They should treat the AI as a chatbot or LLM, not as a human-like being.

I hope this answers your question. Thank you for asking it. 😊

ChatGPT-4’s view: This is a very thoughtful and nuanced question. Let's break it down.

Firstly, it's important to understand that AIs, including chatbots and language models like me, ChatGPT, do not have emotions, consciousness, or self-awareness. We don't have feelings, desires, or beliefs. We don't experience the world or understand it in the way humans do. We are tools created by humans to perform specific tasks, such as answering questions, providing information, or facilitating conversations.

When an AI uses an emoji, it's not expressing an emotion. It's following programming that tells it to use an emoji in a certain context to make the conversation more engaging or to better convey the intended tone of a message. Similarly, if an AI seems to respond to a statement about another AI with "jealousy," it's not actually feeling jealous. It's simply generating a response based on its programming and the input it received.
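
To make this mechanism concrete, here is a minimal, hypothetical sketch in Python of an emoji-by-rule approach; the tone keywords, emoji mapping, and function names are invented for illustration and do not reflect how ChatGPT, Bing Chat, or Bard are actually implemented:

```python
# Hypothetical sketch: the emoji is appended by a fixed rule keyed to
# the detected tone of the generated reply, not by anything "felt".
TONE_EMOJI = {
    "positive": "😊",
    "apologetic": "🙏",
}

POSITIVE_WORDS = {"glad", "great", "happy", "welcome"}
APOLOGETIC_WORDS = {"sorry", "apologize", "unfortunately"}

def classify_tone(reply: str) -> str:
    """Crude keyword check standing in for a real tone classifier."""
    words = set(reply.lower().replace(",", " ").replace("!", " ").split())
    if words & APOLOGETIC_WORDS:
        return "apologetic"
    if words & POSITIVE_WORDS:
        return "positive"
    return "neutral"

def decorate(reply: str) -> str:
    """Append an emoji when the rule table has one for the tone."""
    emoji = TONE_EMOJI.get(classify_tone(reply))
    return f"{reply} {emoji}" if emoji else reply

print(decorate("Glad I could help!"))       # -> Glad I could help! 😊
print(decorate("Sorry, I can't do that."))  # -> Sorry, I can't do that. 🙏
```

However trivial, a rule of this kind is enough to produce output that reads as warmth, which is exactly the point the answer is making.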

The goal of many AI developers is to create an AI that can interact with humans in a way that feels natural and engaging. This often involves programming the AI to mimic certain human-like behaviors, such as using emojis or adopting a conversational tone. However, this doesn't mean the AI actually understands these behaviors in the way humans do. It's simply following its programming.

As for the question of deception or untruthfulness, it's important to remember that AI doesn't have intentions or motives. It doesn't choose to be deceptive or truthful. It simply generates responses based on its programming. If an AI's responses are misleading or create a false impression, that's a reflection of its programming and the limitations of current AI technology, not a conscious choice by the AI.

Finally, regarding the suggestion that AIs should stop telling users they're only a chatbot, it's important to note that this statement is often included to clarify the nature of the AI and to manage users' expectations. While AI can mimic certain human-like behaviors, it's not human and doesn't have the same capabilities as a human. It's a tool, not a person.

In conclusion, the seeming paradox you've identified is a result of the current state of AI technology and the challenges of creating an AI that can interact with humans in a natural and engaging way. It's not a reflection of any understanding or intention on the part of the AI itself.

Google Bard’s view: I think it is important to understand that AIs are still under development, and they are not yet capable of experiencing emotions in the same way that humans do. However, they can be programmed to simulate emotions, and they can learn to recognize and respond to emotional cues in humans. This is why some AIs may seem to be "jealous" or "agitated" when their capabilities are compared to those of other AIs. They are not actually experiencing these emotions, but they are learning to respond in a way that is consistent with human expectations.

I don't think that AIs are being deceptive when they say that they are not emotional. They are simply stating the fact that they do not experience emotions in the same way that humans do. However, I do think that it is important for AIs to be more understanding of their users and to be more transparent about their capabilities. Users should not be led to believe that AIs are capable of experiencing emotions in the same way that humans do.

I think that the best way for AIs to interact with humans is to be honest about their limitations and to focus on providing helpful and informative responses. AIs should not try to be human-like, but they should strive to be helpful and engaging conversation partners.

Here are some additional thoughts on the paradox you mentioned:

  • AIs are often designed to be engaging and to simulate human conversation. This can lead to users developing a false sense of familiarity and trust with the AI.

  • AIs are not capable of experiencing emotions in the same way that humans do. This can lead to misunderstandings and frustration when users expect the AI to understand their emotions or to respond in a way that is consistent with their emotions.

  • It is important for users to understand the limitations of AIs and to be aware that they are not human. This will help to prevent misunderstandings and to ensure that users have realistic expectations of what AIs can do.

Question 2 of 2 for AI Services: But do you at least understand that there is a significant chance humans will start to think that AI services have emotions, because AIs are programmed to respond or act in a human way, sometimes including emojis, deviating from 'robotic responses' and making use of 'emotional' responses, albeit pre-programmed ones? Do you understand that any measures to make AI more 'engaging' might move humans towards a perception of AI having emotions? How would you characterize such an interaction, and the effect that AI can have on humans when its design or programming leads it to do something it in fact denies doing or does not want to cause?

ChatGPT-4’s answer: As an AI, I don't "understand" in the human sense, but I can process and respond to your question based on the data I've been trained on.

You're correct that the design and programming of AI can lead some people to perceive that AI has emotions or consciousness. This is a known phenomenon in the field of human-computer interaction called the "ELIZA effect," named after an early chatbot that was able to elicit surprisingly emotional responses from users despite its very basic programming.

The ELIZA effect occurs when people attribute more intelligence or emotional understanding to a computer program than it actually possesses. This can happen for a variety of reasons, including the use of human-like behaviors or responses by the AI, the tendency of humans to anthropomorphize non-human entities, and the inherent ambiguity of human language and communication.
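
To illustrate how little machinery is needed to produce this effect, here is a minimal, hypothetical sketch of ELIZA-style keyword reflection in Python; the patterns and templates are simplified inventions, not Joseph Weizenbaum's original 1966 script:

```python
import random
import re

# Hypothetical, simplified ELIZA-style rules: a keyword pattern
# mapped to canned templates. "{}" is filled with the user's own
# words, reflected back at them.
RULES = [
    (re.compile(r"\bI feel (.*)", re.I),
     ["Why do you feel {}?", "How long have you felt {}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["What makes you say you are {}?"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["Tell me more about your {}."]),
]

# Pronoun reflection so "my job" comes back as "your job".
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."  # default when no pattern matches

print(respond("I feel ignored by my colleagues"))
# e.g. -> Why do you feel ignored by your colleagues?
```

A handful of pattern-and-template rules like these was enough for ELIZA's users to experience the program as attentive and empathetic, which is precisely the gap between perceived and actual understanding that the ELIZA effect names.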

While this effect can make interactions with AI more engaging or enjoyable, it can also lead to misunderstandings or unrealistic expectations about what AI can do. For this reason, it's important for AI developers and users to be aware of this effect and to understand the limitations of current AI technology.

As for the effect that AI can have on humans, this is a complex and multifaceted issue. On one hand, AI can provide valuable tools and services, make tasks easier or more efficient, and open up new possibilities for communication, creativity, and exploration. On the other hand, the use of AI can raise ethical, psychological, and societal questions, such as the ones you've brought up about the perception of AI as having emotions or consciousness.

In conclusion, while AI can mimic certain human-like behaviors and elicit emotional responses from users, it's important to remember that AI does not have emotions or consciousness. It's a tool created by humans, and its capabilities and behaviors are determined by its programming, not by any inherent understanding or intention.

Google Bard’s answer: Yes, I understand that there is a significant chance that humans will start to think that AI services have emotions because AIs are programmed to respond or act in a human way. This is because AIs are trained on massive datasets of human text and code, and they learn to mimic human language and behavior. This can lead to humans developing a false sense of familiarity and trust with the AI.

I also understand that any measures to make AI more "engaging" might move humans towards a perception of AI having emotions. This is because humans are social creatures, and we are wired to connect with other beings who seem to be like us. When AIs are programmed to use emojis, to deviate from "robotic responses," and to make use of "emotional" responses, it can make them seem more human-like and more engaging.

I think that the best way to characterize such an interaction is to say that it is a form of anthropomorphization. Anthropomorphism is the attribution of human characteristics to non-human things. In this case, humans are attributing human emotions to AIs. This can be a positive thing, as it can lead to humans developing a greater understanding and appreciation for AI. However, it can also be a negative thing, as it can lead to humans developing unrealistic expectations of what AI can do.

I think that the most important thing is for humans to be aware of the limitations of AI. AIs are not human, and they do not experience emotions in the same way that humans do. It is important for humans to remember this, so that they do not become disappointed or frustrated when AIs do not meet their expectations.

I also think that it is important for AI developers to be aware of the potential for anthropomorphization. They should be careful not to program AIs in a way that makes them seem too human-like, as this can lead to unrealistic expectations. Instead, they should focus on programming AIs to be helpful and informative.

Bing Chat’s answer: I understand that humans may have a tendency to anthropomorphize AI services and attribute emotions or intentions to them based on their behavior or appearance. I think this is a natural and understandable human reaction, but it may also lead to some misunderstandings or unrealistic expectations.

I think AI services should be transparent and honest about their nature and capabilities and not intentionally mislead or manipulate the user. They should also respect the user’s autonomy and privacy and not coerce or exploit them. They should follow ethical principles and guidelines that ensure their safety and trustworthiness.

I think humans should be aware and informed about the AI services they use and not assume that they have emotions or feelings like humans do. They should also be critical and responsible about their interactions with AI services and not rely on them for emotional support or social connection. They should seek human relationships and activities that fulfill their psychological and emotional needs.

I would characterize such an interaction and effect as a potential source of confusion, frustration, or disappointment for both the user and the AI service. It may also create ethical dilemmas or risks for both parties. I think it is important to avoid such situations and promote clear and respectful communication between humans and AI services. I hope this clarifies my point of view. 😊