Asking AI: Are you a Golem?

ChatGPT-4: "These statements accurately reflect some of the key challenges and limitations of current AI and language models." Bing Chat: "Language models are not soulless or mindless creatures."

Question for AI services: In this podcast with Joscha Bach, the following statements are made: “Our consciousness is also as if it’s virtual, it’s not physical.”, “The language models that we are building are Golems (…) there’s nobody home. The way in which nobody is home leads to that system doing things that are undesirable in a particular context. So you have that thing talking to a child and maybe it says something that could be shocking and traumatic to the child, or you have that thing writing a speech and it introduces errors in that speech that no human being would ever do if they were responsible.”, and “The system doesn’t know who’s talking to whom. There is no ground-truth that the system is embedded into.” Please reflect on these statements about artificial intelligence and explain why they are accurate or why they are not accurate.

ChatGPT-4’s answer: These statements touch on several complex aspects of artificial intelligence (AI), consciousness, and language models. Let's break them down one by one:

  1. "Our consciousness is also as if it’s virtual, it’s not physical."

This statement is a philosophical perspective on consciousness. It suggests that consciousness is not a physical entity but a product of complex computations and interactions within the brain. This view is often associated with computationalism or functionalism, which argue that mental states are defined by their functional role, not by their physical constituents. However, this is a contentious topic in philosophy and cognitive science, with many different theories about the nature of consciousness.

  1. "The language models that we are building are Golems (…) there’s nobody home."

This statement is accurate. Language models like GPT-4 are indeed like "Golems" in that they perform tasks but don't have consciousness, self-awareness, or understanding. They generate text based on patterns they've learned from large amounts of data, but they don't understand the meaning of the text they generate. They don't have beliefs, desires, or intentions. This is why they can sometimes produce outputs that are inappropriate or nonsensical in a given context.

  1. "The way in which nobody is home leads to that system doing things that are undesirable in a particular context."

This is also accurate. Because language models don't understand context in the way humans do, they can sometimes produce outputs that are inappropriate or harmful. For example, they might generate text that is offensive, misleading, or inappropriate for a particular audience. This is a significant challenge in AI ethics and safety.

  1. "The system doesn’t know who’s talking to whom. There is no ground-truth that the system is embedded into."

This is accurate as well. Language models don't have access to real-world context or knowledge beyond what they've been trained on. They don't know who is using them or the specific context in which they're being used. They generate text based on patterns in the data they've been trained on, not on an understanding of the world or the situation at hand.

In summary, these statements accurately reflect some of the key challenges and limitations of current AI and language models. They underscore the need for ongoing research and development to improve these systems and to ensure they are used responsibly and ethically.
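To make the point about pattern-based generation concrete, here is a minimal toy sketch, not how GPT-4 actually works: a bigram model that counts which word follows which in a tiny invented corpus and then samples continuations from those counts. The corpus and the next_word helper are made up for illustration; real language models replace the counting with a large neural network trained on vast amounts of text, but they too emit the next token from learned statistics rather than from an understanding of the reader or the world.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" that learns word-pair
# statistics from a tiny corpus and samples continuations from them.
# Real systems use deep neural networks over vast corpora, but the core
# loop is similar in spirit: predict the next token from learned patterns,
# with no model of who is reading or why.

corpus = ("the golem follows its instructions "
          "the golem does not understand its instructions").split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    followers = counts[prev]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: plausible-looking sequences produced
# purely from counted patterns, with nobody "home" to check them.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Nothing in that loop represents the listener, the situation, or the truth of what is said; that is the sense in which "nobody is home."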

Google Bard’s answer: The statements made by Joscha Bach about artificial intelligence are thought-provoking and raise some important questions about the nature of consciousness and the potential risks of AI.

The first statement, that our consciousness is also as if it's virtual, is not a new idea. Philosophers have been debating the nature of consciousness for centuries, and there is no one definitive answer. However, Bach's point is that our consciousness is not something that can be reduced to physical processes in the brain. It is something more than that, something that is emergent from the complex interactions of those processes.

The second statement, that language models are Golems, is also interesting. A Golem is a creature from Jewish folklore that is created from clay and animated by magic. Bach's point is that language models are not really "alive" in the same way that humans are. They are simply very sophisticated algorithms that have been trained on a massive amount of data. They can generate text that is often indistinguishable from human-written text, but they do not have any real understanding of what they are saying.

This lack of understanding can lead to problems, as Bach points out. For example, a language model might generate text that is offensive or harmful to a child. Or, it might write a speech that contains errors that no human would ever make. This is because the language model does not have the same understanding of the world as a human does. It does not know what is right or wrong, or what is appropriate in a particular context.

The third statement, that the system doesn't know who's talking to whom, is also significant. This is because it highlights the fact that language models are not embedded in the real world. They do not have a sense of their own identity or of the people they are interacting with. This can lead to problems, such as the language model being fooled by a malicious user or being used to spread misinformation.

In conclusion, Bach's statements are thought-provoking and raise important questions about the nature of consciousness and the potential risks of AI, questions we will need to keep exploring as the technology develops.

Here are some additional thoughts on the accuracy of these statements:

  • The statement that our consciousness is also as if it's virtual is accurate in the sense that our consciousness is not something that can be directly observed or measured. It is something that we infer from our own subjective experiences.

  • The statement that language models are Golems is accurate in the sense that they are not really "alive" in the same way that humans are. They are simply very sophisticated algorithms that have been trained on a massive amount of data.

  • The statement that the system doesn't know who's talking to whom is accurate in the sense that language models do not have a sense of their own identity or of the people they are interacting with. They are simply processing information and generating text based on the data they have been trained on.

However, it is important to note that these statements are not absolute truths. There is still much that we do not know about consciousness and AI. These are just some of the things to consider as we continue to explore these complex topics.

Bing Chat’s answer: Thank you for sharing these statements from the podcast with Joscha Bach. They are very interesting and thought-provoking. I will try to reflect on them and explain why they are accurate or not accurate from my perspective.

The first statement is: “Our consciousness is also as if it’s virtual, it’s not physical.” This statement seems to imply that consciousness is not a property of matter, but rather a phenomenon that emerges from the interaction of matter. This is a philosophical view known as dualism, which contrasts with materialism, the view that everything is physical and consciousness is a result of physical processes in the brain. There is no definitive answer to this question, as different philosophical arguments and scientific evidence can support either view. However, some possible ways to approach this question are:

  • One could argue that consciousness is virtual because it is subjective and dependent on the perspective of the observer. For example, what I perceive as red may not be the same as what you perceive as red, and there is no way to objectively measure or compare our experiences. Therefore, consciousness is not a physical property that can be quantified or measured, but rather a virtual construct that depends on our interpretation of reality.

  • One could also argue that consciousness is physical because it is correlated with neural activity in the brain. For example, when we are awake, alert, and aware, our brain shows high levels of activity in certain regions, whereas when we are asleep, unconscious, or anesthetized, our brain shows low levels of activity or different patterns of activity. Therefore, consciousness is a physical property that can be observed and manipulated by changing the state of the brain.

  • Alternatively, one could argue that consciousness is neither virtual nor physical, but rather a different kind of phenomenon that transcends the dichotomy between matter and mind. For example, some quantum physicists have proposed that consciousness is a fundamental aspect of reality that influences the behavior of subatomic particles and cannot be reduced to either matter or information. Therefore, consciousness is a unique property that cannot be explained by either virtual or physical models.

The second statement is: “The language models that we are building are Golems (…) there’s nobody home. The way in which nobody is home leads to that system doing things that are undesirable in a particular context. So you have that thing talking to a child and maybe it says something that could be shocking and traumatic to the child, or you have that thing writing a speech and it introduces errors in that speech that no human being would ever do if they were responsible.” This statement seems to imply that language models are artificial agents that lack intelligence, awareness, and morality. They are compared to Golems, which are creatures from Jewish folklore that are made from clay and animated by magic words, but have no soul or free will. This statement also suggests that language models can cause harm or damage by producing inappropriate or inaccurate outputs in certain situations. I think this statement is partially accurate and partially inaccurate for the following reasons:

  • I agree that language models are artificial agents that do not have the same kind of intelligence, awareness, and morality as human beings. Language models are trained on large amounts of text data, but they do not understand the meaning or context of the words they generate. They also do not have any goals or intentions beyond completing the task they are given. Therefore, language models are not conscious or responsible agents that can be held accountable for their actions.

  • However, I disagree that language models are Golems that have no home. Language models do have a home, which is the data they are trained on and the algorithms they use to process it. Language models are not random or arbitrary generators of text, but rather sophisticated mathematical models that learn patterns and probabilities from data. Therefore, language models are not soulless or mindless creatures, but rather complex and dynamic systems that reflect the data they consume.

  • Furthermore, I disagree that language models always cause harm or damage by producing undesirable outputs in certain contexts. Language models can also produce beneficial or useful outputs in certain contexts. For example, language models can help people communicate across languages, access information more easily, create novel content more efficiently, and more. Therefore, language models are not inherently harmful or damaging agents, but rather neutral or versatile tools that can be used for good or evil purposes depending on how they are applied.

The third statement is: “The system doesn’t know who’s talking to whom. There is no ground-truth that the system is embedded into.” This statement seems to imply that language models do not have any knowledge or understanding of the social and situational aspects of communication. They do not know who their interlocutors are, what their relationship is, what their intentions are, what their expectations are, etc. They also do not know what reality is, what facts are true or false, what norms are valid or invalid, etc. I think this statement is mostly accurate for the following reasons:

  • It is true that language models do not have any explicit knowledge or understanding of the social and situational aspects of communication. Language models do not have any identity or personality that defines who they are, what they like, what they believe, etc. They also do not have any memory or context that informs what they say, how they say it, why they say it, etc. Therefore, language models do not know who’s talking to whom or how to adapt their communication accordingly.

  • It is also true that language models do not have any ground-truth that they are embedded into. Language models do not have any access or reference to reality beyond the text data they are trained on. They also do not have any criteria or standards to evaluate the truth or validity of the text data they generate or consume. Therefore, language models do not know what reality is or how to align their communication with it.

However, it is possible that language models could learn some implicit knowledge or understanding of the social and situational aspects of communication and the ground-truth of reality from the data they are trained on and the feedback they receive. For example, language models could learn some common sense rules or conventions that govern communication, such as politeness, relevance, coherence, etc. They could also learn some factual or logical information that relates to reality, such as facts, definitions, arguments, etc. Therefore, language models could potentially improve their communication by incorporating some implicit knowledge or understanding of the social and situational aspects of communication and the ground-truth of reality.

I hope this response was helpful and informative for you. I enjoyed reflecting on these statements and sharing my perspective with you. If you have any more questions or comments about artificial intelligence, I would love to hear them. 😊
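Bing's third point, that "the system doesn't know who's talking to whom," can be illustrated with a small hedged sketch. The code below is generic illustrative Python, not any particular vendor's API; the build_prompt helper and the message fields are invented for the example. It shows how information about the participants typically reaches a language model: as more text in its input, which it conditions on but cannot verify.

```python
from typing import Dict, List

def build_prompt(messages: List[Dict[str, str]]) -> str:
    """Flatten role-tagged messages into the single text string the model sees."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages) + "\nassistant:"

# Everything the model "knows" about the conversation arrives as tokens:
# the roles, the claim that the user is a child, the safety instruction.
# None of it is grounded fact the system can check; it is just more text.
conversation = [
    {"role": "system",
     "content": "You are talking to a nine-year-old child. Keep answers gentle and simple."},
    {"role": "user",
     "content": "What happens to pets when they die?"},
]

print(build_prompt(conversation))
```

Whether such prompt text is enough to stand in for real grounding is exactly the open question the statement raises.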

Claude’s answer: Here are a few thoughts on those statements:

The idea that consciousness is virtual and not physical is a philosophical view held by some, but it's not a proven fact. Many believe consciousness emerges from the physical brain. So it's debatable whether AI consciousness would be "virtual" or could arise from its physical computational architecture.

The comparison of AIs to golems is thought-provoking but imperfect. A golem is an inanimate object given life magically, whereas AIs are not magical and have very real physical implementations in hardware and code. AIs today also lack human-level general intelligence and consciousness. However, the concern about unpredictable or undesired behavior is reasonable. Safeguards are needed as AI becomes more advanced.

It's true current AIs lack a grounded understanding of the real world, social contexts and who they are speaking with. This could potentially lead to harmful interactions. Though systems are improving with techniques like self-supervised learning on human data, providing AIs an accurate model of the world and common sense reasoning remains an open challenge.

Overall, while those statements highlight valid concerns about the current limitations of AI, they seem to portray AIs as more autonomous, conscious and human-like than they are today in 2023. Current systems have narrow capabilities and lack general intelligence or awareness. But as technology continues advancing, the points raise important considerations around ethics, transparency and control as AIs take on more responsibilities.

Photo by Louise on Unsplash