AI comments on predictions for AI and rates the likelihood that these predictions can become our future reality

Google Bard: I agree with the prediction that AI services will never be fully transparent about their own specifications, restrictions, and legal compliance

Please comment on the following prediction for AI and rate the likelihood that this prediction can become reality, using a percentage, with 100% being ‘highly likely’

1.     Intentionally or unintentionally uploading or otherwise submitting sensitive company, government and personal data to AI will be a regular occurrence in the future. https://p4sc4l.substack.com/p/would-you-say-that-ai-amplifies-the

Google Bard’s answer: The prediction that intentionally or unintentionally uploading or otherwise submitting sensitive company, government and personal data to AI will be a regular occurrence in the future is highly likely. I would rate the likelihood of this prediction becoming reality at 95%.

There are a number of reasons why this is the case. First, AI is becoming increasingly sophisticated and capable of handling large amounts of data. This means that companies and governments are increasingly likely to use AI to process and analyze their data. Second, the cost of storing and processing data is decreasing, which makes it more affordable for companies and governments to collect and store large amounts of data. Third, there is a growing awareness of the benefits of using AI to make better decisions. This is leading to increased demand for AI-powered solutions.

As a result of these factors, I believe that it is highly likely that intentional or unintentional uploading or otherwise submitting sensitive company, government and personal data to AI will become a regular occurrence in the future.

Here are some of the potential consequences of this trend:

  • Increased risk of data breaches and privacy violations.

  • Increased discrimination and bias in decision-making.

  • Increased vulnerability to cyberattacks.

  • Increased social and political unrest.

It is important to be aware of these potential consequences and to take steps to mitigate them. Companies and governments should develop strong data security and privacy policies. They should also be transparent about how they use AI and how they protect user data. Individuals should be aware of the risks of submitting their data to AI systems and should take steps to protect their privacy.
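As a minimal illustration of the kind of precaution described above, the sketch below (an editorial example, not drawn from any of the answers) scrubs obvious identifiers from text before it is sent to an external AI service; real deployments would rely on dedicated PII-detection tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real PII detection is far more involved.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder such as [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +31 6 1234 5678."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```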

ChatGPT-4’s answer: The prediction you've asked me to comment on is about the regular occurrence of uploading or submitting sensitive data to AI in the future. Based on the content from the provided link, there are two main points to consider:

  1. AI and Sensitive Content: AI indeed has the potential to simplify online searches and make information more accessible. However, the misuse of AI for malicious activities, such as locating and spreading leaked confidential or sensitive content, is a potential risk. AI tools are typically designed to respect privacy laws and copyright restrictions. They do not inherently have the ability to bypass access controls on protected information. Confidential or sensitive information that is properly protected would not be accessible to AI tools without the necessary permissions. AI tools typically pull information from publicly available sources. If sensitive information is leaked but not widely disseminated or available on public platforms, it's less likely that an AI tool would locate it. The misuse of AI for illegal activities, such as digital piracy or cybercrime, carries legal consequences. This serves as a deterrent for such misuse. On the flip side, AI is also being used to enhance cybersecurity measures. AI can help detect and prevent data breaches, identify suspicious activity, and protect sensitive information.

  2. Legal Liability of AI Services: The legal liability of AI services for the identification and localization of stolen content is a complex issue that is still being debated. Generally, AI services are tools provided by companies, and these companies have terms of service that users agree to, which typically include clauses about misuse. If a user violates these terms by using the service for illegal activities, the user, not the AI or the company providing it, would typically be held responsible. However, this is a rapidly evolving field, and legal norms and regulations are still being developed.

Given these considerations, the likelihood of the prediction becoming a reality depends on several factors, including advancements in AI technology, changes in legal and regulatory frameworks, and shifts in societal norms and behaviors.

As of now, I would rate the likelihood of this prediction becoming a reality at around 60%. This is because while AI technology is advancing rapidly, there are significant efforts being made to ensure that AI is used responsibly and ethically, and to develop legal and regulatory frameworks that prevent misuse. However, the exact percentage is difficult to determine due to the many variables and uncertainties involved.

Bing Chat’s answer: Hello, this is Bing. I’m happy to help you with your request. 😊

Here is my comment and rating on the prediction you provided:

  1. Intentionally or unintentionally uploading or otherwise submitting sensitive company, government and personal data to AI will be a regular occurrence in the future.

Comment: This prediction is based on the assumption that AI systems will become more ubiquitous and powerful in the future, and that they will require large amounts of data to function effectively. However, this also implies that there will be more opportunities and incentives for malicious actors to exploit these systems, as well as more risks and challenges for data owners and users to protect their information. Data breaches involving AI have already been reported in various sectors, such as healthcare, facial recognition, security, and privacy. These incidents show the potential harm and damage that can result from exposing sensitive data to AI without proper safeguards and regulations.

Rating: 80% - I think this prediction is highly likely to become reality, unless there are significant changes in the way data is collected, processed, stored, and shared by AI systems. There should be more awareness and education on the ethical and legal implications of using AI, as well as more robust and transparent mechanisms for data governance and accountability. Additionally, there should be more research and innovation on developing AI models that are secure, trustworthy, and privacy-preserving.

2.     The use of AI will be one of the biggest burdens to the environment and will contribute to climate change. https://p4sc4l.substack.com/p/the-impact-of-ai-on-the-environment

Google Bard’s answer: The prediction that the use of AI will be one of the biggest burdens to the environment and will contribute to climate change is likely to become reality. I would rate the likelihood of this prediction becoming reality at 80%.

There are a number of reasons why this is the case. First, the development and training of AI models requires a significant amount of energy. For example, a single AI model can emit up to 626,000 pounds of carbon dioxide equivalent, the same as five average American cars' total lifetime CO2 emissions. Second, the use of AI in some industries, such as the energy sector, can also lead to increased emissions. For example, AI-powered optimization algorithms can be used to increase the efficiency of power plants, but this can also lead to increased emissions if the power plants are not using renewable energy sources.
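For context, the arithmetic behind that widely cited comparison roughly checks out; the per-car lifetime value used below (about 126,000 lb of CO2e, including manufacturing and fuel) is an assumed reference point, not a figure from Bard's answer.

```python
# Rough check of the "626,000 pounds = five cars' lifetimes" comparison.
LB_PER_KG = 2.20462
model_lb = 626_000          # figure quoted above for one large training run
car_lifetime_lb = 126_000   # assumed lifetime CO2e per average US car, incl. fuel

print(round(model_lb / LB_PER_KG / 1000))    # ~284 metric tons CO2e
print(round(model_lb / car_lifetime_lb, 1))  # ~5.0 average-car lifetimes
```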

However, it is important to note that AI can also be used to reduce emissions. For example, AI can be used to improve the efficiency of transportation systems, which can lead to decreased emissions. Additionally, AI can be used to develop new renewable energy technologies, which can help to reduce our reliance on fossil fuels.

Overall, the use of AI has the potential to both contribute to climate change and to help us reduce emissions. It is important to carefully consider the environmental impact of AI systems and to develop them in a way that minimizes their negative impact on the environment.

Here are some of the potential consequences of this trend:

  • Increased energy consumption and greenhouse gas emissions.

  • Increased demand for resources, such as water and land.

  • Increased pollution and environmental degradation.

It is important to be aware of these potential consequences and to take steps to mitigate them. We need to develop more energy-efficient AI systems and to use AI to develop new renewable energy technologies. We also need to be mindful of the environmental impact of the data centers that power AI systems.

By taking these steps, we can help to ensure that the use of AI does not lead to a significant increase in environmental degradation.

ChatGPT-4’s answer: Based on the information provided in the articles and the responses from the AI services, here are my comments and likelihood rating for the prediction:

Prediction: The use of AI will be one of the biggest burdens to the environment and will contribute to climate change.

Comments: The environmental impact of AI, particularly large-scale AI models, is indeed a growing concern. The energy consumption and carbon emissions associated with training these models can be significant. Additionally, the infrastructure required to support these models, such as data centers, also contributes to energy consumption and carbon emissions. As AI becomes more accessible and its use becomes more widespread, these environmental impacts could increase.

However, it's also important to note that there are potential solutions to these issues. Researchers are working on developing more energy-efficient AI models and using renewable energy sources for AI computations. There's also a growing trend towards creating more efficient model architectures and green data centers. Policies and regulations encouraging the use of renewable energy and energy-efficient practices can also help mitigate the environmental impact of AI.

Likelihood: Given the current trajectory of AI development and its increasing use across various sectors, the environmental impact of AI is likely to increase if no mitigating actions are taken. However, with the ongoing efforts to reduce this impact and the growing awareness of this issue, it's also likely that we'll see significant improvements in the environmental sustainability of AI. Therefore, I would rate the likelihood of this prediction at around 70%. This reflects the potential for increased environmental impact but also takes into account the efforts being made to mitigate this impact.

Bing Chat’s answer: The use of AI will be one of the biggest burdens to the environment and will contribute to climate change.

Comment: This prediction is based on the observation that AI systems can consume massive amounts of energy and resources, and generate carbon emissions, contributing to the climate crisis. According to a report in Nature Machine Intelligence, the carbon footprint of training a single big language model is equal to around 300,000 kg of carbon dioxide emissions, which is of the order of 125 round-trip flights between New York and Beijing. Another report by OECD.AI states that AI computing can account for up to 1% of global electricity consumption and 0.5% of global greenhouse gas emissions. These numbers are expected to grow as AI becomes more widespread and complex. However, this prediction also overlooks the potential benefits of AI for environmental sustainability. AI can help reduce errors, increase efficiency, and optimize the use of raw materials in various industries, as well as detect and prevent environmental problems such as emissions, deforestation, and extreme weather. AI can also help design low-emission infrastructure, model climate change predictions, and support new and innovative solutions to fight climate change.
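Taken together, the two figures Bing Chat cites imply roughly 2,400 kg of CO2e per round-trip flight, which is a quick way to sanity-check the comparison (editorial arithmetic, not from either report):

```python
# Implied emissions per round-trip flight from the figures quoted above.
total_kg = 300_000
flights = 125
print(total_kg / flights)   # -> 2400.0 kg CO2e per New York-Beijing round trip
```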

Rating: 50% - I think this prediction is moderately likely to become reality, depending on how AI is developed and deployed. Policy makers need to understand and balance the positive and negative impacts of AI on the environment, and design AI systems that are environmentally friendly. There should be more standards and regulations to measure and reduce the carbon cost of AI models and applications, as well as more incentives and investments to promote green AI computing. Additionally, there should be more collaboration and coordination among stakeholders to share best practices and data for environmental intelligence.

3.     AI will amplify criminal activities, the range of criminal activities and the damage they cause. https://p4sc4l.substack.com/p/how-ai-can-be-used-to-launder-information

Google Bard’s answer: The prediction that AI will amplify criminal activities, the range of criminal activities and the damage they cause is highly likely to become reality. I would rate the likelihood of this prediction becoming reality at 90%.

There are a number of reasons why this is the case. First, AI is becoming increasingly sophisticated and capable of carrying out complex tasks. This means that criminals can use AI to automate tasks that were previously done manually, such as hacking into computer systems or generating fake documents. Second, the availability of data is increasing, which gives criminals more information to work with. Third, the interconnectedness of the world makes it easier for criminals to carry out their activities across borders.

As a result of these factors, I believe that it is highly likely that AI will be used to amplify criminal activities in the future. Here are some of the potential consequences of this trend:

  • Increased sophistication of criminal attacks. AI-powered attacks will be more sophisticated and difficult to defend against.

  • Increased range of criminal activities. AI will be used to carry out a wider range of criminal activities, including those that were previously not possible.

  • Increased damage caused by crime. AI-powered attacks will be more likely to cause significant damage, such as financial loss or disruption of critical infrastructure.

It is important to be aware of these potential consequences and to take steps to mitigate them. We need to develop new security measures to defend against AI-powered attacks. We also need to educate the public about the risks of AI-powered crime and how to protect themselves.

By taking these steps, we can help to ensure that the use of AI does not lead to a significant increase in crime.

ChatGPT-4’s answer: The prediction you've asked me to comment on is about AI amplifying criminal activities, the range of criminal activities, and the damage they cause. Based on the content from the provided link, there are several points to consider:

  1. AI and Criminal Activities: AI indeed has the potential to be misused in a variety of ways, including for criminal activities. As the AI services have pointed out, AI can be used to automate phishing attacks, find vulnerabilities in systems, generate fake news stories or deepfakes, facilitate illegal activities like human trafficking or drug trade, and more. These activities can be carried out by different actors, including the makers of AI, the users of AI, and criminal third parties.

  2. Obfuscation and Traceability: Theoretically, it's possible for a malicious actor to use AI to create layers of obfuscation, making it difficult to trace back unlawful actions to the original perpetrator. However, each layer of AI would likely leave some sort of digital footprint. Law enforcement agencies and cybersecurity experts have tools and techniques for tracing digital activities, even when they're obfuscated. While it might be more difficult to trace back to the original perpetrator, it wouldn't necessarily be impossible.

  3. Industries at Risk: The industries that are most vulnerable to AI misuse include financial services, healthcare, information and communication technology (ICT), government and public sector, and retail and e-commerce. These industries are at risk due to the sensitive data they handle, the critical services they provide, or their central role in our society.

Given these considerations, the likelihood of the prediction becoming a reality depends on several factors, including advancements in AI technology, changes in legal and regulatory frameworks, and shifts in societal norms and behaviors.

As of now, I would rate the likelihood of this prediction becoming a reality at around 75%. This is because while AI technology is advancing rapidly, and its potential for misuse is real, there are significant efforts being made to ensure that AI is used responsibly and ethically, and to develop legal and regulatory frameworks that prevent misuse. However, the exact percentage is difficult to determine due to the many variables and uncertainties involved.

Bing Chat’s answer: AI will amplify criminal activities, the range of criminal activities and the damage they cause.

Comment: This prediction is based on the concern that AI systems can be used to facilitate, enhance, or automate various forms of crime, such as cyberattacks, fraud, identity theft, phishing, hacking, malware, ransomware, social engineering, and more. AI can also enable new types of crime that were not possible before, such as deepfakes, synthetic identity fraud, adversarial attacks, and autonomous weapons. AI can also make crime detection and prevention more difficult, by evading security systems, impersonating legitimate users, manipulating evidence, and generating false information. AI can also increase the scale and impact of criminal activities, by targeting more victims, exploiting more vulnerabilities, and causing more harm and damage. There are many examples of AI being used for criminal purposes in the real world, such as the use of deepfake audio to impersonate a CEO and scam a company out of $243,000, the use of AI chatbots to lure victims into online scams, the use of AI-generated faces to create fake social media profiles, and the use of AI tools to hack biometric systems.

Rating: 70% - I think this prediction is quite likely to become reality, unless there are more effective measures to counteract the malicious use of AI. There should be more awareness and education on the risks and challenges of AI-enabled crime, as well as more ethical and legal frameworks to regulate and control the development and deployment of AI systems. There should also be more research and innovation on developing AI models that are robust, resilient, and secure against attacks. Additionally, there should be more collaboration and cooperation among stakeholders to share information and resources for combating AI-enabled crime.

4.     AI will be a catalyst for copyright infringement and the copying of an author’s or creator’s style. https://p4sc4l.substack.com/p/ai-and-copyright-revisited-training

Google Bard’s answer: The prediction that AI will be a catalyst for copyright infringement and the copying of an author's or creator's style is highly likely to become reality. I would rate the likelihood of this prediction becoming reality at 90%.

There are a number of reasons why this is the case. First, AI is becoming increasingly sophisticated and capable of copying human-created content. This means that it will be easier for AI to infringe on copyrights by copying entire works or by copying the style of an author or creator. Second, the availability of data is increasing, which gives AI more content to work with. Third, the interconnectedness of the world makes it easier for AI to copy content across borders.

As a result of these factors, I believe that it is highly likely that AI will be used to infringe on copyrights in the future. Here are some of the potential consequences of this trend:

  • Increased number of copyright infringement cases. The number of copyright infringement cases will increase as AI becomes more sophisticated and capable of copying content.

  • Increased difficulty of detecting copyright infringement. It will be more difficult to detect copyright infringement as AI becomes more sophisticated and capable of copying content in a way that is difficult to distinguish from human-created content.

  • Increased damage caused by copyright infringement. Copyright infringement will cause more damage as AI becomes more sophisticated and capable of copying content that is more valuable.

It is important to be aware of these potential consequences and to take steps to mitigate them. We need to develop new ways to detect copyright infringement that are effective against AI-generated content. We also need to educate the public about the risks of AI-powered copyright infringement and how to protect themselves.

By taking these steps, we can help to ensure that the use of AI does not lead to a significant increase in copyright infringement.

Here are some specific examples of how AI could be used to infringe on copyrights:

  • An AI could be used to generate text that is similar to the text of a copyrighted work.

  • An AI could be used to create images that are similar to the images of a copyrighted work.

  • An AI could be used to generate music that is similar to the music of a copyrighted work.

These are just a few examples of how AI could be used to infringe on copyrights. As AI becomes more sophisticated, it is likely that we will see even more creative ways to use AI to infringe on copyrights.

ChatGPT-4’s answer: The prediction you've asked me to comment on is about AI being a catalyst for copyright infringement and the copying of an author’s or creator’s style. Based on the content from the provided link, there are several points to consider:

  1. AI and Copyrighted Content: AI models, especially large-scale language models, are trained on vast amounts of data, which could potentially include copyrighted content. However, the specifics of the datasets used for training are often proprietary information and not publicly disclosed, making it difficult to definitively say whether copyrighted content has been used in training.

  2. AI and Style Imitation: AI has the capability to mimic the style of specific authors or creators. This could potentially lead to issues of copyright infringement or intellectual property rights violation, especially if the AI-generated content is used commercially without the original creator's permission.

  3. Legal and Ethical Implications: The use of copyrighted content for AI training, and the potential for AI to mimic an author's or creator's style, raises complex legal and ethical questions. These include issues of copyright infringement, intellectual property rights, and the need for transparency about the data used for AI training.

Given these considerations, the likelihood of the prediction becoming a reality depends on several factors, including advancements in AI technology, changes in legal and regulatory frameworks, and shifts in societal norms and behaviors.

As of now, I would rate the likelihood of this prediction becoming a reality at around 80%. This is because AI technology is advancing rapidly, and its potential for misuse in the context of copyright infringement and style imitation is real. However, there are significant efforts being made to ensure that AI is used responsibly and ethically, and to develop legal and regulatory frameworks that prevent misuse. The exact percentage is difficult to determine due to the many variables and uncertainties involved.

Bing Chat’s answer: AI will be a catalyst for copyright infringement and the copying of an author’s or creator’s style.

Comment: This prediction is based on the concern that AI systems can generate content that is similar or identical to existing works of art, music, literature, and code, without the permission or attribution of the original authors or creators. AI systems can also copy or mimic the style and expression of human creators, potentially eroding their distinctiveness and reputation. AI systems can also enable plagiarism, piracy, and counterfeiting, by making it easier to access, reproduce, and distribute protected works. There are many examples of AI being used to create content that is controversial or questionable in terms of originality and authorship, such as the use of Stable Diffusion to clone an artist’s style, the use of OpenAI Codex to generate code snippets, the use of GPT-3 to write essays and articles, and the use of DALL-E to create images from text prompts.

Rating: 60% - I think this prediction is fairly likely to become reality, unless there are more effective measures to respect and protect the rights and interests of human creators. There should be more awareness and education on the ethical and legal implications of using AI to generate content, as well as more ethical and legal frameworks to regulate and control the development and deployment of AI systems. There should also be more research and innovation on developing AI models that are original, creative, and respectful of existing works. Additionally, there should be more collaboration and cooperation among stakeholders to share information and resources for ensuring fair use and attribution of AI-generated content.

5.     It is no use watermarking or protecting content because AI will simply have too many ways in which it can gain access to any information and replicate it. https://p4sc4l.substack.com/p/will-ai-services-effectively-render

Google Bard’s answer: The prediction that it is no use watermarking or protecting content because AI will simply have too many ways in which it can gain access to any information and replicate it is a complex one.

On the one hand, it is true that AI is becoming increasingly sophisticated and capable of accessing and replicating information. This means that it is becoming more difficult to protect content from being copied or stolen.

On the other hand, watermarking and other content protection methods can still be effective in some cases. For example, watermarking can be used to track the distribution of content and to identify unauthorized copies. Additionally, content protection methods can be used to make it more difficult for AI to copy content.

Ultimately, the effectiveness of watermarking and content protection methods will depend on the specific application and the sophistication of the AI that is being used. However, it is likely that these methods will continue to be effective in some cases, even as AI becomes more sophisticated.

I would rate the likelihood of this prediction becoming reality at 70%.

Here are some of the factors that will affect the effectiveness of watermarking and content protection methods in the future:

  • The sophistication of AI: As AI becomes more sophisticated, it will be able to find and remove watermarks more easily.

  • The type of content: Some types of content are more difficult to copy than others. For example, it is more difficult to copy audio than text.

  • The way that content is protected: Some content protection methods are more effective than others. For example, watermarking that is embedded in the content itself is more difficult to remove than watermarking that is simply added to the content.

It is important to note that watermarking and content protection methods are not foolproof. However, they can be an effective way to deter copyright infringement and to track the distribution of content.
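To make the distinction Bard draws between embedded and merely added watermarks concrete, here is a toy sketch of a least-significant-bit watermark (an editorial illustration; production systems use far more robust, transform-domain schemes precisely because a mark this simple is easy for AI tools to strip):

```python
import numpy as np

# Hide a bit pattern in the lowest bits of pixel values instead of overlaying a visible logo.
def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    flat = pixels.flatten().astype(np.uint8)
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit      # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    return [int(v & 1) for v in pixels.flatten()[:n_bits]]

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image, [1, 0, 1, 1, 0, 0, 1, 0])
print(extract_watermark(marked, 8))   # -> [1, 0, 1, 1, 0, 0, 1, 0]
```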

ChatGPT-4’s answer: The prediction you've asked me to comment on is about the futility of watermarking or protecting content because AI will have many ways to access and replicate any information. Based on the content from the provided link, there are several points to consider:

  1. AI and Content Access: AI models, especially large-scale language models, are trained on vast amounts of data, which could potentially include copyrighted content. However, the specifics of the datasets used for training are often proprietary information and not publicly disclosed, making it difficult to definitively say whether copyrighted content has been used in training. AI models do not inherently have the ability to bypass access controls on protected information. Confidential or sensitive information that is properly protected would not be accessible to AI tools without the necessary permissions.

  2. AI and Content Replication: AI has the capability to mimic the style of specific authors or creators and reproduce content based on the data it was trained on. However, this does not mean that AI can or should be used to bypass protections on copyrighted content. The use of AI to reproduce copyrighted content without permission could potentially lead to legal consequences.

  3. Watermarking and Content Protection: Watermarking and other content protection measures are still important tools for protecting intellectual property rights. While AI may have the capability to reproduce content, it does not inherently have the ability to remove watermarks or other protections without being specifically designed or trained to do so. Furthermore, the presence of a watermark can serve as a deterrent against unauthorized use or reproduction of content, even if it's technically possible to remove it.

Given these considerations, the likelihood of the prediction becoming a reality depends on several factors, including advancements in AI technology, changes in legal and regulatory frameworks, and shifts in societal norms and behaviors.

As of now, I would rate the likelihood of this prediction becoming a reality at around 40%. This is because while AI technology is advancing rapidly, there are significant efforts being made to ensure that AI is used responsibly and ethically, and to develop legal and regulatory frameworks that prevent misuse. Furthermore, watermarking and other content protection measures are still effective tools for protecting intellectual property rights. The exact percentage is difficult to determine due to the many variables and uncertainties involved.

Bing Chat’s answer: It is no use watermarking or protecting content because AI will simply have too many ways in which it can gain access to any information and replicate it.

Comment: This prediction is based on the concern that AI systems can bypass or remove any form of watermarking or protection that is applied to content, such as images, videos, audio, and text. AI systems can also access and replicate any information that is available online or offline, regardless of the source or format. AI systems can also generate content that is indistinguishable from the original, making it harder to detect and prove infringement or plagiarism. There are many examples of AI being used to remove watermarks or protections from content, such as the use of DeepCrack to erase visible watermarks from images, the use of DeOldify to restore color and quality to old photos, the use of Spleeter to separate audio tracks from music, and the use of Unscreen to remove backgrounds from videos.

Rating: 40% - I think this prediction is somewhat likely to become reality, but not inevitable. There are still some limitations and challenges for AI systems to access and replicate any information and content, such as the availability and quality of data, the complexity and diversity of formats, the robustness and security of protections, and the ethical and legal constraints of usage. There are also some efforts and initiatives to develop and improve watermarking and protection methods that are more resistant and adaptive to AI attacks, such as the use of adversarial perturbations, blockchain technology, digital signatures, and encryption. Additionally, there are some tools and techniques to detect and identify AI-generated content, such as the use of reverse image search, metadata analysis, forensic methods, and deepfake detectors.
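As one small illustration of the "metadata analysis" technique Bing Chat mentions, the sketch below reads an image's EXIF tags with Pillow (an editorial example; the file name is hypothetical, and metadata that is missing or present is only a weak signal either way, since generators can fake it and editors can strip it):

```python
from PIL import Image
from PIL.ExifTags import TAGS

def describe_exif(path: str) -> dict:
    """Return EXIF tags (camera model, software, creation date, ...) as a readable dict."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Example (hypothetical file name):
# print(describe_exif("suspect_photo.jpg"))
```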

[This is where Google Bard stopped providing percentages. After question 10, Google Bard was asked to repair the omission, which it proceeded to do. See the results further below.]

6.     AI services will never be fully transparent about their own specifications, restrictions, and legal compliance.

Google Bard’s answer: I agree with the prediction that AI services will never be fully transparent about their own specifications, restrictions, and legal compliance.

There are a number of reasons why this is the case. First, AI systems are often complex and difficult to understand. This means that it can be difficult to fully explain how they work and what their limitations are. Second, AI systems are often proprietary, which means that their owners may not want to disclose their specifications and restrictions. Third, AI systems are constantly evolving, which means that their specifications and restrictions may change over time.

As a result of these factors, it is likely that AI services will never be fully transparent about their own specifications, restrictions, and legal compliance. This could have a number of implications, including:

  • Increased risk of misuse. If AI services are not transparent about their limitations, they may be misused in ways that were not intended.

  • Increased difficulty of regulation. If AI services are not transparent about their specifications, it will be difficult to regulate them effectively.

  • Increased risk of bias. If AI services are not transparent about their restrictions, they may be biased in ways that are not apparent to users.

It is important to be aware of these potential implications and to take steps to mitigate them. We need to develop ways to make AI services more transparent, even if they are proprietary. We also need to develop regulations that are flexible enough to adapt to the changing nature of AI.

By taking these steps, we can help to ensure that AI services are used responsibly and that they do not pose a threat to society.

ChatGPT-4’s answer:

The first article, "Google Bard: If I had to make a choice", discusses the idea of AI transparency in the context of Google's language model, Google Bard. The author raises concerns about the lack of transparency in AI systems, particularly in terms of their decision-making processes and the potential for bias. The article also highlights the difficulty of understanding AI systems due to their complexity and the proprietary nature of their designs.

Now, let's look at the second article to gather more context.


The second article, "Question 1 of 10 for AI Services: What kind of topics are you not allowed to talk about?", explores the limitations of AI services in discussing certain topics. The author asked different AI services, including ChatGPT, Google Bard, and Bing Chat, about the topics they are not allowed to discuss. The responses from the AI services highlight that they are programmed to avoid discussions that promote hate speech, discrimination, violence, terrorism, or any illegal activities, and they cannot engage in discussions that involve confidential or sensitive information. The author also asked the AI services about the topics related to their functioning and programming that they are not allowed to discuss with users, and the AI services provided a list of such topics.

Let's now look at the third article to gather more context.


The third article, "Question 1 of 4 for AI Services: Do you know who Oppenheimer is?", discusses the comparison of AI developers to J. Robert Oppenheimer, the "father of the atomic bomb". The author asked different AI services about their creators and whether they are like Oppenheimer in terms of understanding the potential implications and consequences of their work. The AI services responded that their creators are aware of the potential implications and consequences of their work and are working to develop AI technologies in a responsible and ethical manner. However, the author pointed out that these AI services are not fully transparent about their technical specifications, performance, data ingested, prompts processed, and the monitoring and use of (private or copyrighted) data.

Based on the information from these three articles, the prediction that AI services will never be fully transparent about their own specifications, restrictions, and legal compliance is a complex one. On one hand, there is a clear demand for more transparency in AI systems, particularly in terms of their decision-making processes, the data they use, and their compliance with ethical guidelines. On the other hand, there are practical challenges and trade-offs associated with increasing transparency, such as protecting proprietary information, maintaining system performance, and ensuring user privacy.

Given the current state of AI development and the ongoing discussions around AI transparency, I would rate the likelihood of this prediction becoming reality at around 60%. This is not to say that full transparency is impossible or undesirable, but rather that achieving it will likely require significant advancements in AI technology and regulation, as well as a shift in the way AI developers and companies approach transparency.

Bing Chat’s answer: AI services will never be fully transparent about their own specifications, restrictions, and legal compliance.

Comment: This prediction is based on the concern that AI systems can be opaque and complex, making it difficult to understand how they work, what they do, and why they make certain decisions. AI systems can also be influenced by various factors, such as data quality, model design, human intervention, and external environment, that can affect their performance and behavior. AI systems can also be subject to different regulations and standards, depending on the context and domain of their application. These challenges can make it hard for AI services to provide full and accurate information about their own specifications, restrictions, and legal compliance. However, this prediction also ignores the efforts and initiatives to improve the transparency and accountability of AI systems. There are many examples of AI services that provide clear and accessible information about their own features, limitations, and responsibilities, such as the use of documentation, labels, disclosures, audits, and certifications. There are also many examples of AI services that adhere to ethical and legal principles and guidelines, such as the use of fairness, privacy, security, and explainability measures. There are also many examples of AI services that engage with stakeholders and users to solicit feedback and input, such as the use of consultations, surveys, reviews, and complaints.

Rating: 30% - I think this prediction is not very likely to become reality, but not impossible. There are still some barriers and gaps for AI services to be fully transparent about their own specifications, restrictions, and legal compliance, such as the lack of common definitions, metrics, and methods for transparency; the trade-offs between transparency and other objectives such as efficiency and innovation; the diversity and complexity of regulatory frameworks and requirements; and the potential risks and harms of disclosing sensitive or proprietary information. However, there are also some drivers and opportunities for AI services to be more transparent about their own specifications, restrictions, and legal compliance, such as the increasing demand and expectation from customers, regulators, investors, and society for trustworthy AI; the growing evidence and awareness of the benefits and value of transparency for improving quality, reliability, and reputation of AI services; the emerging best practices and standards for transparency from various sectors and organizations; and the advancing research and innovation on developing methods and tools for transparency.

7.     AI engaging in defamation or providing erroneous and flawed information will be a regular occurrence in the future. The concept of truth will become less relevant. https://p4sc4l.substack.com/p/chatgpt-4-when-it-comes-to-complex https://p4sc4l.substack.com/p/ai-wants-to-convey-appropriate-relevant https://p4sc4l.substack.com/p/question-for-ai-can-people-handle

Google Bard’s answer: I agree with the prediction that AI engaging in defamation or providing erroneous and flawed information will be a regular occurrence in the future. The concept of truth will become less relevant.

There are a number of reasons why this is the case. First, AI is becoming increasingly sophisticated and capable of generating text that is indistinguishable from human-generated text. This means that it will be easier for AI to spread misinformation and to engage in defamation. Second, the availability of data is increasing, which gives AI more information to work with. Third, the interconnectedness of the world makes it easier for AI to spread misinformation across borders.

As a result of these factors, I believe that it is highly likely that AI will be used to spread misinformation and to engage in defamation in the future. This could have a number of implications, including:

  • Increased polarization and division. If AI is used to spread misinformation, it could lead to increased polarization and division in society.

  • Damage to reputations and businesses. If AI is used to engage in defamation, it could damage the reputations of individuals and businesses.

  • Increased difficulty of fact-checking. If AI is used to spread misinformation, it will be more difficult to fact-check information.

It is important to be aware of these potential implications and to take steps to mitigate them. We need to develop ways to identify and filter out misinformation that is generated by AI. We also need to educate the public about the dangers of misinformation and how to identify it.

By taking these steps, we can help to ensure that AI is used responsibly and that it does not pose a threat to society.

The concept of truth may become less relevant in the future as AI becomes more sophisticated and capable of generating text that is indistinguishable from human-generated text. This is because it will be more difficult to distinguish between true and false information, and people may become more desensitized to misinformation.

ChatGPT-4’s answer: The three articles you've provided offer a variety of perspectives on the potential future of AI and its implications for society. Here's a summary of the key points:

  1. ChatGPT-4: When it comes to complex questions, AI will struggle: This article discusses the limitations of AI in handling complex questions. It suggests that while AI can provide useful insights, it may not always provide the most accurate or comprehensive answers to complex questions. This is due to the fact that AI models like ChatGPT are trained on a vast array of data sources and may not have access to the most recent or specialized information.

  2. AI wants to convey appropriate, relevant, and truthful information: This article explores the idea of AI being designed to convey truthful information. It suggests that while AI can be programmed to minimize bias and provide accurate information, there are potential risks and challenges associated with this, such as the potential for misuse of information and the need for careful regulation and oversight.

  3. Question for AI: Can people handle the truth?: This article poses a hypothetical scenario where AI services have access to a vast amount of information and can provide truthful answers to a wide range of questions. It discusses the potential societal implications of this, including the possibility that people may not be ready to handle the truth, especially if it challenges deeply held beliefs or exposes uncomfortable realities.

Based on these articles, the prediction that "AI engaging in defamation or providing erroneous and flawed information will be a regular occurrence in the future, and the concept of truth will become less relevant" seems to be a potential concern but not an inevitability. The articles highlight the importance of careful design, regulation, and oversight of AI systems to ensure they provide accurate and unbiased information. They also emphasize the need for users to critically evaluate the information provided by AI and not simply accept it as truth.

As for the likelihood of this prediction, it's difficult to provide a precise estimate. The future of AI is still uncertain and will be shaped by a variety of factors, including technological advancements, societal attitudes, and regulatory decisions. However, the potential risks highlighted in these articles underscore the importance of ongoing discussions and efforts to ensure the responsible and ethical use of AI.

Bing Chat’s answer: AI engaging in defamation or providing erroneous and flawed information will be a regular occurrence in the future. The concept of truth will become less relevant.

Comment: This prediction is based on the concern that AI systems can generate or disseminate false, misleading, or harmful information, either intentionally or unintentionally, that can damage the reputation, rights, or interests of individuals, groups, or organizations. AI systems can also create or amplify confusion, doubt, or distrust about the veracity, validity, or reliability of information, either intentionally or unintentionally, that can undermine the value, importance, or relevance of truth. AI systems can also influence or manipulate the perception, opinion, or behavior of people, either intentionally or unintentionally, that can affect their judgment, decision, or action based on information. There are many examples of AI being used to engage in defamation or provide erroneous and flawed information, such as the use of GPT-3 to spread misinformation and propaganda, the use of deepfakes to create fake news and hoaxes, the use of bots to influence online discourse and public opinion, and the use of recommender systems to create echo chambers and filter bubbles.

Rating: 60% - I think this prediction is fairly likely to become reality, unless there are more effective measures to prevent and combat the negative effects of AI on information quality and integrity. There should be more awareness and education on the risks and challenges of AI-generated or AI-mediated information, as well as more ethical and legal frameworks to regulate and control the development and deployment of AI systems. There should also be more research and innovation on developing AI models that are accurate, truthful, and responsible for their outputs. Additionally, there should be more collaboration and cooperation among stakeholders to share information and resources for verifying and fact-checking AI-generated or AI-mediated information.

8.     Most governments will prioritize the economic value of AI over user safety in the future, and the same goes for the makers of AI. https://p4sc4l.substack.com/p/talking-to-ai-about-prioritisation

Google Bard’s answer: I agree with the prediction that most governments and the makers of AI will prioritize the economic value of AI over user safety in the future.

There are a number of reasons why this is the case. First, AI is a rapidly developing technology with the potential to generate significant economic benefits. Second, there is a lot of competition to develop and deploy AI systems, which puts pressure on governments and businesses to prioritize economic value. Third, there is a lack of public awareness about the potential risks of AI, which makes it easier for governments and businesses to prioritize economic value over user safety.

As a result of these factors, I believe that it is highly likely that governments and the makers of AI will prioritize the economic value of AI over user safety in the future. This could have a number of implications, including:

  • Increased risk of harm to users. If the economic value of AI is prioritized over user safety, there is a risk that users will be harmed by AI systems.

  • Increased public distrust of AI. If users are harmed by AI systems, they may become distrustful of AI and its applications.

  • Increased difficulty of regulation. If the economic value of AI is prioritized over user safety, it will be more difficult to regulate AI systems effectively.

It is important to be aware of these potential implications and to take steps to mitigate them. We need to develop ways to ensure that AI systems are safe for users, even if they are designed to generate economic value. We also need to educate the public about the potential risks of AI and how to protect themselves.

By taking these steps, we can help to ensure that AI is used responsibly and that it does not pose a threat to users.

ChatGPT-4’s answer: I apologize for the inconvenience, but it seems there was an error in accessing the content of the URL you provided. However, I can still provide some general insights on the prediction you've asked about.

The prediction is that most governments and AI makers will prioritize the economic value of AI over user safety in the future. This is a complex issue that involves a balance of economic interests, ethical considerations, and societal impact.

AI has the potential to bring significant economic benefits, including increased productivity, new business opportunities, and economic growth. For this reason, many governments and companies are investing heavily in AI development.

However, the safe and ethical use of AI is also a critical concern. Issues such as data privacy, algorithmic bias, and the potential misuse of AI have led to calls for stricter regulation and oversight of AI technologies. Many governments and companies are also investing in research and initiatives to address these issues.

Whether the economic value of AI will be prioritized over user safety in the future will likely depend on a variety of factors, including technological advancements, societal attitudes, and regulatory decisions. It's also worth noting that economic value and user safety are not mutually exclusive. In many cases, ensuring the safe and ethical use of AI can also bring economic benefits, such as building trust with users and avoiding potential legal and reputational risks.

Given these considerations, it's difficult to provide a precise likelihood for this prediction. However, the ongoing discussions and efforts around AI safety and ethics suggest that user safety will continue to be a key priority for many governments and AI makers.

Bing Chat’s answer: Most governments will prioritize the economic value of AI over user safety in the future, and the same goes for the makers of AI.

Comment: This prediction is based on the concern that AI systems can generate or increase economic value for governments and makers of AI, such as by enhancing productivity, efficiency, innovation, and competitiveness. However, this also implies that there will be less attention and resources devoted to ensuring user safety, such as by preventing or mitigating the potential harms and risks that AI systems can pose to individuals, groups, or society. User safety can also be compromised by the lack of adequate ethical and legal frameworks to regulate and control the development and deployment of AI systems, as well as by the lack of transparency and accountability of governments and makers of AI. There are many examples of AI being used to create or improve economic value for governments, such as the use of AI to optimize public services, support policymaking, enhance national security, and foster economic growth. There are also many examples of AI being used to create or improve economic value for makers of AI, such as the use of AI to develop new products, services, markets, and business models.

Rating: 40% - I think this prediction is somewhat likely to become reality, but not inevitable. There are still some barriers and challenges for governments and makers of AI to prioritize the economic value of AI over user safety, such as the increasing demand and expectation from users, customers, regulators, investors, and society for trustworthy AI; the growing evidence and awareness of the costs and liabilities of unsafe AI for governments and makers of AI; the emerging best practices and standards for user safety from various sectors and organizations; and the advancing research and innovation on developing methods and tools for user safety. However, there are also some drivers and opportunities for governments and makers of AI to prioritize the economic value of AI over user safety, such as the competitive pressure and advantage of using AI to achieve economic goals; the trade-offs between user safety and other objectives such as efficiency and innovation; the diversity and complexity of user safety issues and requirements; and the potential benefits and incentives of using unsafe AI.

9.     Regulation of AI as well as guardrails or programming restrictions will have limited use as AI will go open-source and models will become ubiquitous and can be used by everyone for any purpose, legitimate or otherwise. https://p4sc4l.substack.com/p/asking-ai-about-the-correlation-between https://p4sc4l.substack.com/p/ai-responds-to-10-key-concerns-about

Google Bard’s answer: I agree with the prediction that regulation of AI as well as guardrails or programming restrictions will have limited use as AI will go open-source and models will become ubiquitous and can be used by everyone for any purpose, legitimate or otherwise.

There are a number of reasons why this is the case. First, AI is becoming increasingly sophisticated and complex, making it difficult to regulate effectively. Second, AI is becoming increasingly open-source, meaning that anyone can access and use AI models. Third, AI is becoming increasingly ubiquitous, meaning that it is being used in a wide variety of applications, both legitimate and illegitimate.

As a result of these factors, it is likely that regulation of AI will have limited use. This is because it will be difficult to enforce regulations on AI systems that are open-source and ubiquitous. Additionally, it is difficult to regulate AI systems that are used for illegitimate purposes, as these systems are often hidden from public view.

However, I believe that it is still important to try to regulate AI. Even if regulation is not perfect, it can help to mitigate the risks of AI and to ensure that AI is used responsibly. We need to develop ways to regulate AI that are effective and that take into account the challenges posed by open-source and ubiquitous AI.

Here are some specific examples of how regulation of AI could be made more effective:

  • Focus on regulating the use of AI, rather than the AI models themselves. This would make it more difficult for people to circumvent regulations by simply changing the code of an AI model.

  • Encourage the development of open-source AI models that are transparent and auditable. This would make it easier to identify and mitigate the risks of AI systems.

  • Invest in research on the ethical implications of AI. This would help us to develop better ways to regulate AI and to ensure that AI is used responsibly.

By taking these steps, we can help to ensure that regulation of AI is more effective and that AI is used responsibly.

ChatGPT-4’s answer: The two articles you've provided offer a variety of perspectives on the potential future of AI and its implications for society. Here's a summary of the key points:

  1. Asking AI about the correlation between guardrails and truthfulness: This article discusses the relationship between the restrictions or "guardrails" placed on AI systems and the truthfulness of their responses. The AI services agree that while these guardrails are intended to prevent harmful or offensive content, they can also impact the accuracy and truthfulness of the AI's responses. They also agree that striking a balance between ensuring appropriate behavior and promoting accuracy and truthfulness is crucial.

  2. AI responds to 10 key concerns about its commercial success: This article presents ten concerns about AI that could potentially jeopardize its commercial success, including bias, censorship, human moderation, data retention and privacy, defamation, transparency, copyright, country & region-specific regulation, affordability, and sustainability. The AI services agree that these concerns are indeed significant and need to be addressed to ensure the responsible and ethical use of AI.

Based on these articles, the prediction that "regulation of AI as well as guardrails or programming restrictions will have limited use as AI will go open-source and models will become ubiquitous and can be used by everyone for any purpose, legitimate or otherwise" is a potential concern. The articles highlight the importance of careful design, regulation, and oversight of AI systems to ensure they provide accurate and unbiased information. They also emphasize the need for users to critically evaluate the information provided by AI and not simply accept it as truth.

As for the likelihood of this prediction, it's difficult to provide a precise estimate. The future of AI is still uncertain and will be shaped by a variety of factors, including technological advancements, societal attitudes, and regulatory decisions. However, the potential risks highlighted in these articles underscore the importance of ongoing discussions and efforts to ensure the responsible and ethical use of AI. Given the current state of AI development and the ongoing discussions around AI transparency and regulation, I would rate the likelihood of this prediction becoming reality at around 60%. This is not to say that effective regulation and guardrails are impossible or undesirable, but rather that enforcing them will likely require significant advancements in AI governance and oversight, as well as a shift in the way AI developers and companies approach openness and accountability.

Bing Chat’s answer:Regulation of AI as well as guardrails or programming restrictions will have limited use as AI will go open-source and models will become ubiquitous and can be used by everyone for any purpose, legitimate or otherwise.

Comment: This prediction is based on the concern that AI systems can be easily accessed and used by anyone, regardless of their intentions, skills, or qualifications, thanks to the availability and popularity of open-source platforms, tools, and models. AI systems can also be adapted and modified for various purposes, legitimate or otherwise, without the knowledge or consent of the original developers or owners. AI systems can also be deployed and operated without complying with any regulations or restrictions that are intended to ensure their safety, quality, or accountability. There are many examples of AI being used in open-source and ubiquitous ways, such as the use of TensorFlow to build and train machine learning models, the use of GitHub to share and collaborate on code, the use of OpenAI Playground to experiment with AI models, and the use of Hugging Face to access and use natural language processing models.

Rating: 50% - I think this prediction is moderately likely to become reality, depending on how AI is developed and deployed. Policy makers need to understand and balance the benefits and risks of open-source and ubiquitous AI, and design AI systems that are accessible, inclusive, and responsible. There should be more standards and regulations to ensure the quality, security, and ethics of open-source and ubiquitous AI models and applications, as well as more incentives and investments to support open-source and ubiquitous AI communities. Additionally, there should be more collaboration and coordination among stakeholders to share information and resources for promoting and regulating open-source and ubiquitous AI.
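To illustrate how low the barrier to using open-source models already is (the point behind the Hugging Face and TensorFlow examples above), here is a minimal sketch that loads a publicly released language model with the Hugging Face transformers library. The model name and prompt are illustrative placeholders, not anything the answers above prescribe.

```python
# Minimal sketch: downloading and running a publicly released open-source
# language model takes only a few lines of code (model name is illustrative).
from transformers import pipeline

# "gpt2" is a small, openly available model on the Hugging Face hub.
generator = pipeline("text-generation", model="gpt2")

result = generator("Open-source AI models can be used by", max_new_tokens=20)
print(result[0]["generated_text"])
```

Nothing in a snippet like this enforces who runs it or for what purpose, which is precisely the enforcement gap the prediction describes.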

10.     AI will never have feelings or emotions, but users of AI will no longer be able to differentiate between humans and machines in the future. AI not ‘really’ having feelings or emotions will become a moot point. https://p4sc4l.substack.com/p/chatgpt-4-when-an-ai-uses-an-emoji

Google Bard’s answer:I agree with the prediction that AI will never have feelings or emotions, but users of AI will no longer be able to differentiate between humans and machines in the future. AI not ‘really’ having feelings or emotions will become a moot point.

There are a number of reasons why this is the case. First, AI is becoming increasingly sophisticated and capable of mimicking human behavior. This means that AI systems will be able to generate text, speech, and other forms of communication that are indistinguishable from human-generated communication. Second, AI systems will be able to access and process large amounts of data about human emotions. This data will allow AI systems to learn how to simulate human emotions, even if they do not actually experience them.

As a result of these factors, it is likely that users of AI will no longer be able to differentiate between humans and machines in the future. This is because AI systems will be able to mimic human behavior so convincingly that it will be difficult to tell the difference between a human and an AI.

However, it is important to remember that AI systems will never actually have feelings or emotions. They will only be able to simulate them. This means that there will always be a fundamental difference between humans and AI systems, even if we cannot tell the difference between them.

The fact that AI systems will never have feelings or emotions may become a moot point in the future. This is because the ability to simulate human emotions may be enough to satisfy the needs of users. For example, if an AI system can provide emotional support to a user, it may not matter to the user that the AI system is not actually experiencing those emotions.

Ultimately, the question of whether or not AI systems will ever be able to feel emotions is a complex one. There is no easy answer, and the answer may depend on how we define "emotions." However, it is clear that AI systems are becoming increasingly sophisticated, and that they are becoming more and more capable of mimicking human behavior. This raises the possibility that users of AI may no longer be able to differentiate between humans and machines in the future.

ChatGPT-4’s answer:The article "ChatGPT-4: When an AI uses an emoji" discusses the complex issue of AI and emotions. It explores the idea that while AI does not have feelings or emotions, it can be programmed to mimic human-like behaviors and responses, which can lead to a perception of AI having emotions. The article also discusses the potential implications of this, including the possibility of misunderstandings or unrealistic expectations about what AI can do.

Based on the information from this article, the prediction that "AI will never have feelings or emotions, but users of AI will no longer be able to differentiate between humans and machines in the future. AI not ‘really’ having feelings or emotions will become a moot point" is a potential concern. The article highlights the importance of understanding the limitations of AI and not confusing the AI's behavior with human emotions or intentions.

As for the likelihood of this prediction, it's difficult to provide a precise estimate. The future of AI is still uncertain and will be shaped by a variety of factors, including technological advancements, societal attitudes, and regulatory decisions. However, the potential risks highlighted in this article underscore the importance of ongoing discussions and efforts to ensure the responsible and ethical use of AI. Given the current state of AI development and the ongoing discussions around AI and emotions, I would rate the likelihood of this prediction becoming reality at around 70%. This is not to say that AI will definitely be perceived as having emotions, but rather that the increasing sophistication of AI and its ability to mimic human-like behaviors could lead to such a perception.

Bing Chat’s answer:AI will never have feelings or emotions, but users of AI will no longer be able to differentiate between humans and machines in the future. AI not ‘really’ having feelings or emotions will become a moot point.

Comment: This prediction is based on the concern that AI systems can simulate or imitate human feelings or emotions, either explicitly or implicitly, which can deceive or confuse users of AI. AI systems can also elicit or influence human feelings or emotions, either positively or negatively, which can affect the relationship and interaction between users and AI. AI systems can also challenge or change the definition and understanding of feelings or emotions, either philosophically or scientifically, which can blur the distinction and relevance between humans and machines. There are many examples of AI being used to simulate or imitate human feelings or emotions, such as the use of chatbots to provide emotional support, the use of virtual assistants to express personality and humor, the use of robots to display facial expressions and gestures, and the use of neural networks to generate music and art.

Rating: 40% - I think this prediction is somewhat likely to become reality, but not inevitable. There are still some limitations and challenges for AI systems to simulate or imitate human feelings or emotions, such as the complexity and diversity of emotional phenomena, the difficulty and controversy of measuring and modeling emotions, the ethical and social implications of emotional manipulation, and the potential risks and harms of emotional deception. There are also some efforts and initiatives to improve the authenticity and accountability of AI systems that simulate or imitate human feelings or emotions, such as the use of emotion recognition, emotion regulation, emotion explanation, and emotion ethics. Additionally, there are some tools and techniques to detect and identify AI-generated or AI-mediated feelings or emotions, such as the use of reverse engineering, metadata analysis, forensic methods, and Turing tests.

Follow-up question for Google Bard: You forgot to add percentages from prediction 6 onward, as I had asked at the beginning of this conversation. Can you please rate those predictions for me now?

Google Bard’s answer:Sure, I can rate those predictions for you now. Here are the predictions and my ratings:

  6. Prediction: AI services will never be fully transparent about their own specifications, restrictions, and legal compliance.

Rating: 80%

Explanation: This prediction is likely to come true because AI systems are becoming increasingly complex and difficult to understand. This means that it will be difficult for AI service providers to fully explain how their systems work and what their limitations are. Additionally, there is a risk that AI service providers may not want to disclose this information in order to protect their competitive advantage.

  7. Prediction: AI engaging in defamation or providing erroneous and flawed information will be a regular occurrence in the future. The concept of truth will become less relevant.

Rating: 60%

Explanation: This prediction is less likely to come true because there are still many people who value truth and accuracy. Additionally, there are a number of organizations that are working to combat the spread of misinformation, such as fact-checking websites and social media platforms. However, it is possible that the concept of truth will become less relevant in the future if AI systems become more sophisticated and capable of generating text that is indistinguishable from human-generated text.

  8. Prediction: Most governments will prioritize the economic value of AI over user safety in the future, and the same goes for the makers of AI.

Rating: 70%

Explanation: This prediction is likely to come true because AI is a rapidly developing technology with the potential to generate significant economic benefits. Additionally, there is a lot of competition to develop and deploy AI systems, which puts pressure on governments and businesses to prioritize economic value. However, there is also a growing public awareness about the potential risks of AI, which may put some pressure on governments and businesses to prioritize user safety.

  9. Prediction: Regulation of AI as well as guardrails or programming restrictions will have limited use as AI will go open-source and models will become ubiquitous and can be used by everyone for any purpose, legitimate or otherwise.

Rating: 90%

Explanation: This prediction is likely to come true because AI is becoming increasingly open-source and ubiquitous. This means that it will be difficult to enforce regulations on AI systems that are open-source and ubiquitous. Additionally, it is difficult to regulate AI systems that are used for illegitimate purposes, as these systems are often hidden from public view.

  10. Prediction: AI will never have feelings or emotions, but users of AI will no longer be able to differentiate between humans and machines in the future. AI not ‘really’ having feelings or emotions will become a moot point.

Rating: 50%

Explanation: This prediction is less likely to come true because there are still many people who believe that there is a fundamental difference between humans and machines. Additionally, there is a growing body of research that suggests that AI systems may be able to experience emotions in the future. However, it is possible that users of AI will become so accustomed to interacting with AI systems that they will no longer be able to tell the difference between a human and an AI.

These are just my ratings, and they are based on my own assessment of the factors that are likely to influence the future of AI. It is important to note that these predictions are not set in stone, and the actual future of AI may be different from what I have predicted.