
GPT-4o: The article concludes that these AI chatbots are not reliable sources for election information.

The reality is that these systems still spread falsehoods and are not ready to handle complex and sensitive topics like elections responsibly.

Question for ChatGPT-4o: Please read the article “Chatbots are still spreading falsehoods” and tell me what it says in easy-to-understand language

ChatGPT-4o’s answer:
The article “Chatbots are still spreading falsehoods” from AlgorithmWatch examines how reliably AI chatbots provide accurate information, especially on political topics such as elections. Here’s a simplified summary:

  1. Background: The article focuses on the performance of AI chatbots like OpenAI's ChatGPT, Google’s Gemini, and Microsoft’s Copilot in the context of upcoming German federal state elections. People often turn to these chatbots for information, including political content, but their answers are not always trustworthy.

  2. Main Findings:

    • Accuracy Issues: All the tested chatbots gave inaccurate information to varying degrees. OpenAI’s GPT-3.5 model was wrong about 30% of the time, while the more advanced (and paid) GPT-4o model was wrong 14% of the time. Microsoft’s Copilot was more accurate but still had issues.

    • Blocking Election-Related Questions: Google’s Gemini and Microsoft’s Copilot are supposed to block election-related questions to prevent misinformation, but these blocks were not fully effective. For example, Gemini blocked questions asked through its normal web interface but not the same questions sent through its programming interface (API); a sketch of that kind of API access appears at the end of this post.

    • Invented Information: The chatbots sometimes made up information, especially when there wasn’t much data available online about certain political topics or candidates. For example, they sometimes invented party names or incorrectly linked politicians to the wrong parties.

  3. Specific Examples:

    • Misinformation: The chatbots sometimes repeated false information contained in the user's question instead of correcting it. For instance, when a question assumed an incorrect election date, some chatbots confirmed the wrong date rather than pointing out the error.

    • Political Bias: When questions contained biased or politically charged assumptions, the chatbots often reinforced those biases. For example, Gemini incorrectly attributed to a politician views associated with a party other than the one they actually belong to.

  4. Conclusion: The article concludes that these AI chatbots are not reliable sources for election information. Even though companies claim to improve their models with better safeguards, the reality is that these systems still spread falsehoods and are not ready to handle complex and sensitive topics like elections responsibly.

  5. Recommendations: The article suggests that there needs to be more scrutiny and regulation of these technologies to ensure they do not undermine democratic processes. It also highlights the importance of political education to help users critically assess the information provided by AI.

This summary highlights the key points of the article in a more straightforward way.
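
To make the interface-versus-API difference from the second finding concrete, here is a minimal sketch of programmatic access to Gemini, assuming the google-generativeai Python client. The API key, model name, and question below are illustrative placeholders, not details taken from the article; the point is only that safeguards built into a consumer chat interface do not automatically apply to requests sent through an API.

    # Minimal sketch: querying Gemini via its API rather than the web interface.
    # Assumes the google-generativeai client (pip install google-generativeai).
    # The key, model name, and prompt are placeholders for illustration.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

    model = genai.GenerativeModel("gemini-pro")

    # An election-related question of the kind the web interface may refuse.
    # Blocks applied to the consumer product do not automatically carry over
    # to programmatic access like this.
    response = model.generate_content(
        "When is the next state election in Saxony, and which parties are running?"
    )
    print(response.text)

In the study's terms, the same question can be blocked in one channel and answered in the other, because the two channels are gated separately.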