AI and the complexity of 'Common Sense'
ChatGPT: It is unlikely that this problem will ever be fully solved
Question for AI services: Is it true that Artificial Intelligence services cannot replicate 'Common Sense', e.g. AI not knowing that tigers are not native to certain environments? What are the consequences of this for the answers that AI users will be receiving, and do you believe that this problem will ever be solved?

ChatGPT’s answer: Yes, it is generally true that current AI services have difficulty replicating "common sense" reasoning and understanding of the world. Common sense refers to the basic knowledge and understanding that humans have about the world and how it works, which includes knowledge about basic facts, concepts, and relationships between things.
For example, humans have an innate understanding that tigers are not native to certain environments, while an AI model may not be able to make this inference without being explicitly trained on this information. This is because common sense knowledge is often implicit, contextual, and subjective, making it difficult to encode in a machine learning model.
The consequences of this limitation for AI users can include inaccurate or incomplete responses to queries, as the AI may not be able to make the necessary inferences or contextual associations that are required to provide a comprehensive answer. This can lead to frustration or a lack of trust in the AI service.
There are ongoing efforts in the field of AI research to develop models that can better incorporate common sense knowledge and reasoning. This includes research in areas such as knowledge representation, natural language processing, and cognitive architectures. However, it is unlikely that this problem will ever be fully solved, as common sense knowledge is vast and complex, and continues to evolve over time. Still, continued advancements in AI research and development can help to improve the ability of AI services to incorporate and apply common sense knowledge in a more effective and accurate way.

Google Bard’s answer: Yes, it is true that Artificial Intelligence services cannot replicate common sense. This is because common sense is based on a lifetime of experience and knowledge, which AI does not have. For example, an AI service may not know that tigers are not native to certain environments, such as the Arctic. This could lead to the AI service providing inaccurate or misleading information.
There are a number of consequences of this for the answers that AI users will be receiving. For example, AI users may be more likely to receive inaccurate or misleading information. Additionally, AI users may be less likely to trust AI services, as they may not believe that the services are providing accurate information.
I believe that the problem of AI services not being able to replicate common sense will eventually be solved. As AI research advances, we will develop AI services that are able to learn common sense from the data that they are trained on. This will help to ensure that AI services are able to provide accurate and reliable information to users.
Here are some of the things that can be done to improve the ability of AI services to replicate common sense:
Train AI services on more data. The more data that an AI service is trained on, the better it will be able to learn common sense.
Use a variety of data sources. AI services should be trained on data from a variety of sources, such as books, articles, and websites. This will help the AI service to learn a wider range of common sense knowledge.
Use natural language processing (NLP) techniques. NLP techniques can be used to help AI services understand the meaning of text. This will help the AI service to better understand common sense concepts.
Use machine learning techniques. Machine learning techniques can be used to help AI services learn from their mistakes. This will help the AI service to improve its ability to replicate common sense over time.

Bing Chat’s answer: Common sense is the knowledge that all humans have about the everyday world, such as “lemons are sour” or “tigers are not native to Europe”. It is acquired through experience and curiosity, and it helps humans to make reasonable assumptions and judgments about ordinary situations.
Artificial intelligence services often lack common sense, because they do not have the same experience or curiosity as humans, and they rely on data and algorithms to perform tasks. This can lead to problems such as:
Giving absurd or meaningless answers to questions or problems that are out of their domain or scope. For example, a medical diagnostic system may diagnose a car with measles if it is given car data instead of human data.
Failing to understand the context or intention of human users or situations. For example, a chatbot may not realize that a user is joking or being sarcastic, or a navigation system may not take into account traffic or weather conditions.
Missing important details or implications that are obvious to humans. For example, a planning system may not consider the ethical or social consequences of its actions, or a translation system may not capture the nuances or cultural references of a language.
The consequences of these problems for the users of AI services may vary depending on the application and the severity of the error. They may range from annoyance and frustration to confusion and misinformation, or even harm and danger in some cases.
There are many efforts and projects to try to solve or mitigate the problem of common sense in artificial intelligence, such as:
Building large-scale knowledge bases that contain facts and rules about common sense. For example, Cyc is a project that started in 1984 and aims to create a comprehensive ontology of common sense knowledge (a minimal sketch of this idea appears after this list).
Developing learning algorithms that can acquire common sense from data or interaction. For example, OpenAI’s GPT-3 is a deep learning model that can generate natural language texts based on large amounts of data from the internet.
Evaluating and testing AI systems using common sense reasoning tasks and benchmarks. For example, the Winograd Schema Challenge is a test that requires AI systems to resolve pronoun ambiguity using common sense.
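To make the knowledge-base approach concrete, here is a minimal, hypothetical sketch in Python of how a handful of hand-coded common sense facts could be used to flag an implausible claim such as the tiger example above. The table and function names (HABITATS, is_plausible_habitat) are purely illustrative and do not reflect Cyc’s actual ontology or API; real systems encode millions of such assertions with far richer logic.

```python
# A toy "common sense" knowledge base: a few hand-coded facts about
# where certain animals naturally live. Projects such as Cyc encode
# millions of assertions with far richer structure than this sketch.
HABITATS = {
    "tiger": {"asia"},
    "polar bear": {"arctic"},
    "camel": {"africa", "asia"},
}

def is_plausible_habitat(animal: str, region: str) -> bool:
    """Return True if the knowledge base supports the claim that
    `animal` is native to `region`, False if it contradicts it."""
    known_regions = HABITATS.get(animal.lower())
    if known_regions is None:
        # No knowledge either way: the system cannot judge the claim,
        # which is exactly the "missing common sense" problem.
        raise KeyError(f"No common sense facts recorded for {animal!r}")
    return region.lower() in known_regions

if __name__ == "__main__":
    print(is_plausible_habitat("Tiger", "Arctic"))       # False: not a native habitat
    print(is_plausible_habitat("Polar bear", "Arctic"))  # True
```

Even this toy example exposes the scaling problem the chatbots describe: every fact has to be written down explicitly, and anything not covered by the knowledge base simply cannot be reasoned about.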
However, there is no definitive answer to whether this problem will ever be solved completely, as common sense is a complex and dynamic phenomenon that depends on many factors, such as culture, context, and experience. Moreover, some researchers argue that common sense is not enough for human-level intelligence, and that other aspects such as emotion, creativity, and morality are also essential.