
GPT-4o analyses two studies: AI experts overwhelmingly see the tech as positive, while the public fears job losses, disempowerment, and a lack of regulation.


Asking ChatGPT-4o: Please analyze the article “Most Americans don’t trust AI — or the people in charge of it / Two new studies show that the public is anxious about AI.” as well as the two associated papers and provide me with a) the most surprising, controversial and valuable statements and findings in those papers and b) what AI developers and regulators can do to try and restore trust among (potential) AI users, and c) what they should stop doing immediately to prevent the situation from deteriorating.

ChatGPT-4o’s answer:
Here's a synthesis of the article “Most Americans don’t trust AI — or the people in charge of it” and the associated reports from Pew Research Center and Gallup-Walton-GSV:

A) Surprising, Controversial, and Valuable Findings

Surprising

  1. Massive Trust Gap: Only 24% of U.S. adults believe AI will benefit them, compared to 76% of AI experts.

  2. Even AI experts lack control: Nearly 60% of the general public, and a similar share of experts, feel they have little or no control over how AI affects their lives.

  3. Gen Z = High Usage, Low Trust: Despite 79% of Gen Z using AI tools, only 27% feel hopeful, and 41% feel anxious, about them.

Controversial

  1. Elites vs. Everyone Else: AI experts overwhelmingly see the tech as positive, while the public fears job losses, disempowerment, and a lack of regulation. This reflects an elite-populace split not unlike that seen in political polarization.

  2. Demographic Misrepresentation in AI Design: Both the public and experts believe AI systems reflect the views of white men, not women or people of color — with only 44% of experts saying AI design reflects women’s perspectives well.

  3. Regulation Failure Acknowledged by All: Experts and the public share a lack of confidence in both government and companies to regulate AI effectively — a rare moment of agreement.

Valuable

  1. Clear Policies = More Trust: Gen Z students and workers are far more likely to use and trust AI when their school or workplace has clear AI use policies.

  2. Critical Thinking Threat: Nearly half of Gen Z fears AI harms their ability to think critically, even as they rely on it in daily tasks.

  3. Desire for Control: Over 55% of both experts and the public want more control over how AI is used in their lives — a unifying message that could guide responsible AI development.

B) What AI Developers and Regulators Can Do to Restore Trust

1. Prioritize Transparent Governance

  • Enact clear, accessible, and enforceable AI use guidelines for workplaces, schools, and platforms.

  • Build independent oversight bodies including diverse community representatives to monitor and audit AI behavior and impacts.

2. Democratize Participation in AI Development

  • Involve public voices — especially Gen Z, minorities, and underrepresented groups — in design and testing to reduce bias and improve legitimacy.

  • Foster citizen assemblies or participatory frameworks for major AI decisions, similar to climate deliberation models.

3. Invest in Education and Agency

  • Mandate AI literacy programs in schools and professional environments.

  • Help users understand how AI works, when to trust it, and how to critically engage with AI-generated content.

4. Ensure Ethical AI Deployment

  • Require developers to disclose how data is used, who benefits, and what limitations exist.

  • Use third-party audits and bias assessments before public deployment, especially for high-impact applications (e.g., hiring, healthcare, education).

C) What Developers and Regulators Should Stop Doing Immediately

1. Stop Prioritizing Hype Over Harm

  • Cease marketing AI as omnipotent without addressing real risks — like job displacement, surveillance, and misinformation.

2. Stop Building Without Accountability

  • End the practice of releasing powerful models (e.g., GPT-4, Gemini) without thorough safeguards or transparent impact assessments.

3. Stop Excluding Critical Voices

  • Avoid designing AI systems in echo chambers dominated by a narrow group of (mostly male, white) technologists.

  • Halt the trend of defunding or downplaying DEI and AI ethics initiatives, which undermines trust even further.

4. Stop Regulatory Theater

  • Move away from performative hearings and superficial self-regulation by tech companies.

  • Instead, enable real enforcement power for regulatory bodies and support global coordination on AI standards.