
The Tech Right's objective is explicit: to target and remove pro-regulation legislators from office before they can ascend to federal power.

Battle lines are drawn not between parties, but between the Capital of Innovation and the Sovereignty of the State. The outcome of this conflict will define the technological trajectory of the 21st century.

The Great Decoupling: A Structural Analysis of the Political Economy, Regulatory Capture, and Jurisdictional Warfare in US Artificial Intelligence Governance

by Gemini 3.0, Deep Research. Warning: LLMs may hallucinate!

1. Executive Summary: The Kinetic Phase of AI Politics

The governance of Artificial Intelligence (AI) in the United States has largely exited the realm of theoretical policy debate and entered a phase of kinetic political warfare. This transition is characterized by the mobilization of nine-figure financial war chests, the weaponization of federal executive power against state sovereignty, and the emergence of strange-bedfellow political coalitions that defy traditional partisan boundaries. This report provides an analysis of the individuals, organizations, and financial flows currently shaping this landscape, based on a forensic review of recent developments in the 2025-2026 political cycle.

The central finding of this research is that the primary conflict in US AI regulation is no longer a debate over how to regulate, but a constitutional battle over who has the authority to regulate. A coalition of “accelerationist” venture capitalists and technology executives, coalescing under the banner of the “Tech Right,” is actively deploying capital to nationalize AI policy—not to strengthen it, but to dilute it through the preemption of stricter state laws. This faction, represented principally by the venture capital firm Andreessen Horowitz (a16z) and the “AI Czar” David Sacks, seeks to use federal power to crush the regulatory beachheads established by state legislatures in California and New York.

Conversely, a loose but resilient alliance of AI safety advocates, academic luminaries, and state legislators—most notably California State Senator Scott Wiener and New York Assembly Member Alex Bores—is using state law to impose de facto national standards in the absence of Congressional action. This “State-Level Safetyism” is currently the only effective check on the deployment of frontier AI models.

The financial dimensions of this conflict are staggering. The emergence of the “Leading The Future” Super PAC, backed by over $100 million from industry titans, marks the importation of the “crypto-lobbying” playbook into the AI sector. The objective is explicit: to target and remove pro-regulation legislators from office before they can ascend to federal power. This report details the mechanisms of this influence, the specific actors involved, and the second-order implications for American federalism and global AI development.

2. The Strategic Landscape: Pothole Federalism vs. Imperial Preemption

To understand the current maneuvering of specific actors, one must first understand the structural vacuum in which they operate. The United States Congress has historically struggled to regulate emerging technologies at the speed the market demands. In the context of AI, this legislative paralysis has created a vacuum that state legislatures have moved to fill—a phenomenon known as “pothole federalism.”

2.1 The “California Effect” as a Regulatory Proxy

California, as the headquarters for the majority of the world’s leading AI laboratories—including OpenAI, Anthropic, and Google DeepMind—holds a unique position of leverage. A regulation passed in Sacramento effectively becomes the global standard, as companies are unlikely to train separate, compliant models solely for the California market while maintaining non-compliant models for the rest of the world. This phenomenon, often termed the “California Effect,” means that a state senator representing San Francisco has arguably more influence over the future of AI safety than the majority of the US Senate.

2.2 The Industry’s Counter-Strategy: Preemption

The industry’s response to this state-level vulnerability has been a strategic pivot toward Federal Preemption. By pushing for a “uniform national approach,” industry proponents are not seeking a stronger federal regulator; they are seeking a federal ceiling that invalidates stricter state laws. This strategy relies on the Supremacy Clause of the US Constitution and is being pursued through two primary avenues:

  1. Electoral Intervention: Replacing pro-regulation lawmakers with industry-friendly candidates via Super PAC spending.

  2. Executive Fiat: Using Executive Orders to administratively override state jurisdiction, a tactic spearheaded by David Sacks.

3. The Pro-Deregulation Coalition: The “Tech Right” and Accelerationist Capital

A sophisticated, well-capitalized network of actors has emerged to oppose state-level regulations. This group frames its opposition in terms of national security—specifically the competitive threat from China—and economic acceleration.

3.1 “Leading The Future”: The Weaponization of the Super PAC

The most significant development in the 2025-2026 political cycle is the formation of “Leading The Future,” a Super PAC explicitly designed to influence AI policy through direct electoral intervention.

3.1.1 Financial Structure and Backing

The PAC launched with a reported war chest exceeding $100 million, a sum that instantly makes it a dominant force in down-ballot races. Its primary backers represent the vanguard of the “accelerationist” movement in Silicon Valley:

  • Andreessen Horowitz (a16z): The venture capital firm is the ideological and financial engine behind this movement. Co-founders Marc Andreessen and Ben Horowitz have become the primary patrons of the “Tech Right,” advocating for a regulatory environment that prioritizes unrestricted development.

  • Greg Brockman: The President of OpenAI. His personal involvement signals a divergence between OpenAI’s public calls for regulation and its private financial support for deregulation machinery.

  • Joe Lonsdale: Co-founder of Palantir, bringing deep ties to the defense and intelligence sectors.

  • Perplexity: The AI search engine company has also joined this coalition, indicating that the push for deregulation extends beyond model builders to the application layer.

3.1.2 Tactical Objectives and Targeting

Unlike traditional corporate PACs that spread donations to incumbents to gain access, “Leading The Future” adopts the aggressive, adversarial tactics of the crypto-industry PAC “Fairshake.” The Fairshake model involves identifying ideological opponents and funding their primary challengers with overwhelming force to make an example of them.

  • The Primary Target - Alex Bores: The PAC’s first major target is Alex Bores, a member of the New York State Assembly now running for Congress. Bores is the co-sponsor of New York’s Responsible AI Safety and Education (RAISE) Act. The PAC’s objective is to derail his congressional bid, sending a chilling signal to other aspiring regulators that advocating for AI safety is a career-ending move.

3.2 “Build American AI”: The Advocacy Air Cover

To complement the kinetic political spending of the Super PAC, the coalition established a non-profit advocacy arm called “Build American AI.”

  • Leadership: The group is led by Executive Director Nathan Leamer, a seasoned political operative.

  • Operational Budget: The group launched with a $10 million initial ad buy.

  • Narrative Warfare: The group utilizes TV, digital, and social media advertising to promote the narrative that a “uniform national approach” is necessary. By framing state regulations as a “patchwork” that stifles innovation, they provide the public relations cover for federal preemption efforts. The messaging is designed to appeal to national pride and economic anxiety, arguing that strict safety rules will cede American leadership to foreign adversaries.

3.3 The Executive Branch Coup: The David Sacks Initiative

While “Leading The Future” operates in the electoral sphere, the venture capitalist David Sacks attempted to execute a regulatory capture strategy directly within the Executive Branch.

3.3.1 The “AI Czar” and the Draft Executive Order

Acting as a “Special Advisor for AI and Crypto” with provisional government employment status, Sacks positioned himself to become the de facto gatekeeper of AI policy in the Trump administration. His primary initiative was a draft Executive Order designed to preempt state AI laws immediately.

The draft order was notable for its aggressive use of federal coercion:

  • The “Litigation Task Force”: It directed the Attorney General to establish a task force within 30 days specifically to sue states with AI safety laws. This would have turned the Department of Justice into the enforcement arm of the tech industry against state governments.

  • Financial Blackmail: It empowered the Department of Commerce to analyze and withhold federal funding—specifically referencing broadband and highway grants—from states that violated the federal preemption order. This would have represented a drastic expansion of executive power, using unrelated infrastructure funds as leverage to dictate technology policy.

  • Centralization of Authority: The order required all agencies (DOJ, FTC, FCC) to consult with Sacks while executing these directives, effectively consolidating regulatory oversight into his office and bypassing the Senate-confirmed leadership of these agencies.

3.3.2 The Sidelining of Expertise

Crucially, the Sacks draft order excluded the established scientific and safety bodies that had been empowered under the Biden administration’s executive orders. Agencies such as the National Institute of Standards and Technology (NIST), the Office of Science and Technology Policy (OSTP), and the Cybersecurity and Infrastructure Security Agency (CISA) were conspicuously absent. This exclusion suggests a deliberate attempt to dismantle the “scientific bureaucracy” that the Tech Right views as culturally sympathetic to safety regulations.

4. The Pro-Regulation Coalition: Safetyists, State Legislators, and the Populist Right

Opposing the centralized deregulation efforts is a heterogeneous coalition that has formed in reaction to the industry’s overreach. This group operates on the premise that catastrophic risks from advanced AI (frontier models) require immediate, enforceable safety protocols that the federal government is too slow or too compromised to enact.

4.1 The California Laboratory: Scott Wiener’s Legislative Engine

California State Senator Scott Wiener has emerged as the central legislative figure in the US AI regulatory landscape. His office has become the primary laboratory for drafting binding AI safety laws, serving as a proxy for the federal legislation that Congress fails to pass.

4.1.1 The Evolution from SB 1047 to SB 53

The legislative trajectory of Wiener’s bills illustrates the immense pressure exerted by the industry and the specific mechanics of “watering down” regulation.

  • SB 1047 (The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act): This was the original, robust safety bill.

      • Scope: Applied to models with training costs over $100 million.

      • Mechanism: Required “kill switches” (ability to shut down a model), liability for mass casualty events (defined as >$500 million in damages), and mandatory third-party audits.

      • Fate: Despite passing both the State Assembly and Senate, it was vetoed by Governor Gavin Newsom in 2024 following an intense lobbying campaign by Andreessen Horowitz, OpenAI, and Meta.

  • SB 53 (Transparency in Frontier Artificial Intelligence Act): Following the veto, Wiener pivoted to SB 53, which was signed into law.

      • Scope: Applies to companies with over $500 million in annual revenue.

      • Mechanism: It stripped away the liability, kill-switch, and audit provisions. Instead, it focuses on transparency, requiring companies to publish their safety frameworks and report critical incidents.

      • Strategic Implication: This shift represents the industry’s strategy of “containment”—accepting transparency mandates (which are administratively burdensome but not operationally restrictive) to ward off the existential threat of liability and operational control.

4.1.2 The Limits of State Power

The transition from SB 1047 to SB 53 demonstrates the effective limit of state power in the face of concentrated capital. While SB 1047 had the support of safety luminaries like Geoffrey Hinton, Yoshua Bengio, and even Elon Musk, the opposition from the united front of Big Tech proved insurmountable for the Governor. However, SB 53 still establishes a precedent: it forces companies to go on the record with their safety plans, creating a paper trail that could be used in future litigation.

4.2 The “Horseshoe” Alliance: Bannon, Musk, and Progressives

One of the most counter-intuitive findings of this research is the alignment of disparate political actors against the David Sacks/Andreessen Horowitz deregulation agenda. This opposition forms a “horseshoe” where the far-left and the far-right meet in their distrust of concentrated corporate power.

4.2.1 The Populist Right: Steve Bannon and the MAGA Base

Steve Bannon, the host of the “War Room” and a key ideological strategist for the MAGA movement, aggressively opposed David Sacks’ draft Executive Order.

  • Motivation: The populist right views Sacks’ attempt to override state laws as an infringement on federalism and state sovereignty. Furthermore, there is a deep cultural distrust of “Big Tech” oligarchs within the MAGA base. They view figures like Sacks and Andreessen not as allies, but as globalist elites who are trying to dictate policy to the American people.

  • Impact: Bannon dedicated segments of his show to attacking the draft order, mobilizing the base against what was ostensibly a Republican administration initiative. This internal pressure was critical in stalling the EO, proving that the “Tech Right” does not have total control over the Republican party apparatus.

4.2.2 Elon Musk: The Wild Card

Elon Musk occupies a unique position. Despite being a tech billionaire and a close associate of the incoming administration, he supported SB 1047, placing him in direct opposition to a16z and OpenAI.

  • Strategic Calculus: Musk’s support for regulation likely stems from a combination of genuine concern regarding Artificial General Intelligence (AGI) risks—a position he has held for years—and competitive tactical maneuvering. By supporting regulations that burden OpenAI (his primary rival), he acts as a spoiler in the industry.

5. Money Flows and Financial Warfare

The financial dimensions of this conflict are rapidly scaling to rival those of the energy and pharmaceutical sectors. The flows of money are not just purchasing access; they are purchasing the architecture of the regulatory state itself.

5.1 The New Scale of Spending

The $100 million allocated to “Leading The Future” represents a baseline shift in how tech interacts with Washington.

  • Comparison: This exceeds the spending of many traditional industry PACs and rivals that of the crypto industry’s “Fairshake” PAC, which served as the proof-of-concept for this operation.

  • Targeted Allocation: The money is not being spent on presidential races where it might be diluted, but on specific congressional and state legislative seats where $1-2 million can decisively swing an outcome. The targeting of Alex Bores in New York is the prime example of this “sniper” strategy.

5.2 Indirect Funding and Soft Power

Beyond direct electioneering, significant resources are flowing into “soft power” influence operations.

  • Ad Buys: The $10 million campaign by “Build American AI” is focused on shaping public opinion in key media markets (DC, NYC, SF). The ads serve to “prepare the battlefield” by creating a sense of urgency around AI competition with China, thereby making deregulation seem like a patriotic duty.

  • The Revolving Door Economy: The career trajectory of Seve Christian, former legislative director for Scott Wiener, illustrates the professionalization of the AI policy sector. Christian moved from the legislature to Encode, an AI safety advocacy organization. This movement of personnel creates a specialized class of policy experts who cycle between government and advocacy, funded by the competing capital pools of the “Safety” and “Acceleration” lobbies.

6. International Dimensions and Cross-Border Connections

While the primary theater of this conflict is domestic, the implications and connections are global. The strategies deployed in the US are heavily influenced by, and in reaction to, the international regulatory environment.

6.1 The European Union as the Regulatory “Anti-Model”

The EU AI Act serves as the primary foil for US deregulation efforts.

  • Rhetorical Utility: American deregulation proponents frequently cite the EU as a cautionary tale of “stifling innovation.” The narrative deployed by groups like “Build American AI” is that Europe regulated itself out of existence, and that California is attempting to drag the US down the same path.

  • Structural Difference: The key distinction noted in the research is that the US battle involves regulating the source of the technology. The EU is a rule-maker without a home industry; the US is the home of the industry, struggling to make rules for itself. This makes the stakes in the US exponentially higher, as US regulations (specifically California’s) would impact the actual development of the models, not just their deployment.

6.2 The China Narrative

The argument for federal preemption is intrinsically linked to geopolitical competition. Proponents like David Sacks and Marc Andreessen frame AI as a zero-sum arms race with China.

  • The Security Argument: In this worldview, strict state regulations (like SB 1047) are framed not just as bad business, but as threats to national security that would allow Chinese tech firms to leapfrog American ones. This narrative is used to justify the centralization of power in the White House and the bypassing of democratic deliberation at the state level.

7. Deep Analysis: Second and Third-Order Insights

7.1 The Republican Schism: Populism vs. Corporatism

The conflict over the Sacks EO reveals a deep fracture in the Republican coalition that will likely paralyze federal AI legislation for the foreseeable future.

  • Insight: The “Tech Right” (libertarian, pro-corporate, accelerationist) is at odds with the “Populist Right” (nationalist, anti-corporate, protectionist). This split means that even with a Republican “trifecta” (control of House, Senate, Presidency), passing a federal preemption bill through Congress is unlikely. The Populist wing, led by figures like Bannon and potentially JD Vance (despite his tech ties), will not vote to strip states of power simply to help Big Tech oligopolies.

  • Implication: This legislative gridlock forces the deregulation faction to rely on Executive Orders, which are legally fragile. It suggests that the future of AI regulation will be determined in the courts, not in Congress.

7.2 The “Imperial Presidency” and Legal Fragility

The reliance on Executive Orders to override state laws sets a dangerous constitutional precedent. Sacks’ draft order proposed using the Spending Clause (withholding highway funds) to coerce state compliance.

  • Legal Implication: If attempted, this would trigger immediate litigation from Attorneys General in blue states (CA, NY). This litigation would likely reach the Supreme Court. The Court would then be forced to decide between two conservative principles: deregulation of business (favored by the Tech Right) vs. federalism and state sovereignty (favored by the Constitutional Right). This could result in a landmark ruling that redefines the limits of executive power over state commerce.1

7.3 The Consolidation of the AI Oligopoly

The lobbying patterns suggest a nuance in the industry’s stance. While established giants (OpenAI, Anthropic) aggressively fought the liability provisions of SB 1047, they ultimately acquiesced to the transparency provisions of SB 53.

  • Market Implication: The shift to transparency-focused regulation inadvertently favors incumbents. Startups may struggle with the administrative burden and legal costs of producing complex safety frameworks and meeting reporting requirements, while giants like Google and OpenAI have the legal departments to absorb these costs easily. Thus, the “compromise” regulations may ultimately entrench the dominance of the very companies they are meant to oversee, creating a “moat” that protects incumbents from smaller, more agile challengers.

8. Conclusion

The research indicates that the United States is not moving toward a consensus on AI regulation, but rather toward a period of intense jurisdictional conflict and balkanization. The “Leading The Future” Super PAC and the faction led by Marc Andreessen and David Sacks are executing a strategy of aggressive federalization, seeking to use the power of the federal executive to crush state-level resistance.

However, this strategy faces a formidable counter-force: a “horseshoe” alliance of Progressive regulators and Populist conservatives who refuse to cede sovereignty to Silicon Valley. In this deadlock, California (through Scott Wiener) will continue to act as the de facto national regulator, forcing the industry to comply with transparency mandates even as it fights liability laws.

The flow of money suggests that the 2026 midterms will be a decisive battleground. The industry is betting $100 million that it can purge regulators from office before they can ascend to federal power. If they succeed in defeating figures like Alex Bores and Scott Wiener, the path to federal preemption opens. If they fail, the “California Effect” will harden into a permanent feature of the American AI landscape, subjecting the world’s most powerful technology to the rules of a single state legislature.

Table: Comparative Analysis of Regulatory Approaches

| Approach | Jurisdiction | Scope | Core Mechanism | Status |
| --- | --- | --- | --- | --- |
| SB 1047 | California (legislation) | Models with training costs over $100 million | Kill switches, liability for mass-casualty events (>$500 million in damages), mandatory third-party audits | Passed both chambers; vetoed by Governor Newsom in 2024 |
| SB 53 | California (legislation) | Companies with over $500 million in annual revenue | Transparency: published safety frameworks, critical-incident reporting | Signed into law |
| Sacks draft Executive Order | Federal (executive) | All state AI safety laws | DOJ litigation task force, withholding of federal infrastructure funds, centralized oversight via the “AI Czar” | Stalled amid opposition from the populist right |

This confirms that the battle lines are drawn not between parties, but between the Capital of Innovation and the Sovereignty of the State. The outcome of this conflict will likely define the technological trajectory of the 21st century.