
GPT-4o: Steve Bannon’s inflammatory remark strikes at the heart of a real fear: the values embedded in AI may not reflect the values of the people it governs.

While his phrasing is ableist and jingoistic, the underlying concern — about accountability, allegiance, and cultural alienation — deserves engagement, not dismissal.

"Code, Country, and Control: Who Gets to Program the Future?"

Analysis by ChatGPT-4o

Based on the article "At NatCon, the populist right calls for holy war against Big Tech" (The Verge, 6 Sept 2025), this analysis identifies the statements that make sense, explains why they are valid, outlines how they should be addressed, details the potential consequences of inaction, and concludes with recommendations for regulators, framed around this core statement:

“And the last thing I want is a bunch of folks on the spectrum in Silicon Valley — who I’m not sure are even that dedicated to the United States of America, because they got these weird people talking about network systems and ‘we’re a network and not really a country’ — I don’t want them making decisions for the American people.”

✅ Statements That Make Sense — and Why

  1. “The AI industry shares virtually no ideological overlap with national conservatism.”

    • ✅ Valid: Silicon Valley is dominated by technolibertarian, neoliberal, and transhumanist ideologies, often detached from traditional religious or nationalistic values.

    • 📌 Action: Regulatory bodies should foster pluralism in AI governance by mandating inclusive advisory panels, including diverse ideological, ethical, and religious perspectives.

  2. “Tech founders were just as heroic as the Founding Fathers” (Sankar's defense)

    • ⚠️ Mixed: While visionary, this comparison overlooks the constitutional grounding and democratic accountability the Founding Fathers embraced.

    • 📌 Action: Encourage civic responsibility and ethical training among AI entrepreneurs; require AI firms to align innovation with democratic values.

  3. “The tech industry was colluding against conservatives to silence conservative ideas.”

    • ✅ Valid perception: Documented instances of algorithmic bias, demonetization, and platform bans have fueled this belief.

    • 📌 Action: Introduce transparent algorithmic auditing laws and due process rights for moderation decisions, modeled on the EU DSA and Brazil’s fake news law.

  4. “Social media addiction and chatbots marketed toward kids feed directly into panic over family values.”

    • ✅ Valid concern: Evidence shows negative impacts of screen time, AI chatbots, and algorithmic feedback loops on child mental health and development.

    • 📌 Action: Impose age-appropriate AI design standards and educational content disclosure laws. Mandate parental controls and limit personalization for minors.

  5. “Transhumanism… is an affront to the creations of God.”

    • ✅ Makes sense within a values framework: Many citizens see radical body modification and AI-enhanced cognition as ethically questionable.

    • 📌 Action: Create an AI ethics council that includes bioethicists, religious leaders, and civil society to evaluate AI-human integration risks.

  6. “The AI genie is out of the bottle — that doesn’t imply a transhumanist future.” (Sankar)

    • ✅ Important clarification: Not all AI development must lead to a loss of human agency or identity.

    • 📌 Action: Regulators should guide AI development toward augmentation, not substitution, through clear policy signals and public investment incentives.

  7. “The tech industry… is incapable of not running afoul of the cultural, intellectual, and religious values of social conservatives.”

    • ✅ Perceptually true: This cultural disconnect fuels polarization and regulatory backlash.

    • 📌 Action: Promote diverse hiring practices in tech and require public-interest impact assessments before deploying large-scale AI systems.
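The age-appropriate design standards proposed in point 4 above could, in practice, take the form of a simple policy gate in application code. The sketch below is purely illustrative: the feature names, age thresholds, and `UserProfile` type are assumptions for the sake of the example, not any real platform's API.

```python
# Illustrative sketch only: a hypothetical policy gate that limits
# personalization features for minors, in the spirit of age-appropriate
# AI design standards. All names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class UserProfile:
    age: int
    parental_controls_enabled: bool

def allowed_features(user: UserProfile) -> set[str]:
    """Return the personalization features permitted for this user."""
    features = {"content_feed", "search"}  # non-personalized baseline
    if user.age >= 18:
        # Adults may opt into full personalization.
        features |= {"behavioral_targeting", "engagement_ranking"}
    elif user.age >= 13 and not user.parental_controls_enabled:
        # Teens get limited, preference-based personalization by default.
        features |= {"topic_preferences"}
    # Under-13 accounts, and any account under parental controls,
    # receive no personalization beyond the baseline.
    return features

minor = allowed_features(UserProfile(age=10, parental_controls_enabled=True))
adult = allowed_features(UserProfile(age=30, parental_controls_enabled=False))
```

The design choice worth noting is that the restrictive path is the default: personalization is only ever added for users who clear an age check, rather than removed for those who fail one.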

❌ What Happens If These Issues Are Not Addressed?

  1. Polarization Deepens

    • AI becomes another wedge issue in the culture war, leading to erratic, ideologically motivated regulation or political weaponization of technology oversight.

  2. Loss of Trust in AI and Government

    • Public rejection of AI systems, especially in education, health, and governance, leads to widespread non-compliance, conspiracy theories, and populist backlash.

  3. Fragmented Tech Ecosystem

    • States or regions may attempt to ban or restrict AI, fragmenting the digital economy and weakening national competitiveness.

  4. Extremist Tech Alternatives Rise

    • Distrust of mainstream platforms may fuel the adoption of “parallel tech” ecosystems that are less accountable and more radicalized.

  5. Global Allies Distance Themselves

    • International partners may hesitate to engage with U.S. AI systems seen as ideologically captured or misaligned with democratic and human rights norms.

  6. Loss of U.S. Global Tech Leadership

    • As jurisdictions like the EU adopt more balanced AI governance models, they attract global trust and talent, marginalizing U.S.-based innovation.

🧭 Recommendations for U.S. and Non-U.S. Regulators

🇺🇸 For U.S. Policymakers:

  1. Launch a Federal AI Values Commission

    • Include voices from faith communities, labor unions, Indigenous groups, bioethicists, and veterans — not just Big Tech and academia.

  2. Expand the AI Bill of Rights

    • Enshrine protections against ideological discrimination, AI manipulation of children, and overreach by automated systems into enforceable law.

  3. Prohibit Ideological Profiling by Platforms

    • Mandate algorithmic transparency and anti-bias testing to ensure platforms do not covertly suppress political or religious viewpoints.

  4. Rebuild Civic Education for the Tech Era

    • Teach students not just how AI works, but also who builds it, who governs it, and how it impacts democracy and faith.

  5. Audit Tech’s National Loyalty Risks

    • Assess corporate structures, offshore tax schemes, and AI deployment strategies that may undermine national interests or democratic accountability.
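The anti-bias testing called for in point 3 could begin with something as simple as comparing moderation-action rates across viewpoint groups in an audit log. The following sketch is a minimal illustration, assuming synthetic data and an invented 1.25x review threshold; real audits would use far larger samples and proper statistical tests.

```python
# Illustrative sketch only: a hypothetical disparity check comparing
# moderation-action rates across self-declared viewpoint groups, as one
# ingredient of algorithmic transparency testing. The data and the
# 1.25x threshold are invented for illustration.

def action_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of a group's posts that received a moderation action."""
    total = sum(1 for g, _ in decisions if g == group)
    actioned = sum(1 for g, a in decisions if g == group and a)
    return actioned / total if total else 0.0

def disparity_ratio(decisions: list[tuple[str, bool]],
                    group_a: str, group_b: str) -> float:
    """Ratio of action rates; values far from 1.0 warrant human review."""
    rate_b = action_rate(decisions, group_b)
    return action_rate(decisions, group_a) / rate_b if rate_b else float("inf")

# Synthetic audit log of (viewpoint_group, was_actioned) pairs.
log = [("A", True), ("A", False), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", False), ("B", False)]

ratio = disparity_ratio(log, "B", "A")
flagged = ratio > 1.25  # audit threshold (assumption)
```

A ratio well above the threshold would not prove bias by itself; it would simply trigger the kind of human due-process review the recommendation envisions.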

🌍 For EU and International Regulators:

  1. Strengthen Cross-Border AI Oversight

    • Cooperate on global AI risk registries and include sociocultural impact assessments alongside safety and IP compliance.

  2. Reinforce Cultural Sovereignty in AI Deployment

    • Insist on region-specific customization of foundation models so that they align with local languages, traditions, and values.

  3. Track Transhumanist Research Initiatives

    • Monitor biotech and neurotech research partnerships between U.S. tech firms and foreign militaries or authoritarian governments.

  4. Support AI That Reinforces Human Dignity

    • Fund public AI projects that enhance human well-being without undermining identity, community, or agency — e.g., civic education bots or ethical health assistants.

🧩 Final Thoughts — Who Decides?

Steve Bannon’s inflammatory remark about “a bunch of folks on the spectrum in Silicon Valley… who I’m not sure are even that dedicated to the United States” strikes at the heart of a real fear: the values embedded in AI may not reflect the values of the people it governs. While his phrasing is ableist and jingoistic, the underlying concern — about accountability, allegiance, and cultural alienation — deserves engagement, not dismissal.

We must reject dog-whistle politics and technocratic arrogance alike. Otherwise, AI will become a tool not of innovation and progress, but of division, exploitation, and control.