
OpenAI was born from a genuine fear of concentrated AI power, but almost immediately became a contest over exactly the same thing — concentrated AI power.

The people building OpenAI were not merely resisting Musk personally; they were resisting the idea that AGI, if created, should sit under the durable control of one dominant individual.

Summary: The evidence suggests OpenAI was founded on a genuine public-benefit mission, but also on unresolved tensions over who should control powerful AI.
Musk’s critique of OpenAI’s commercial drift has force, but the exhibits also make him look like someone who opposed concentrated AI power unless that power was concentrated around him.
The case is therefore less a clean morality tale than a founder breakup over mission, money, ego, governance, and control of the future of AI.

The Breakup of the AGI Founders: Musk, Altman, and the Battle Over Who Gets to Control “Humanity’s” AI

by ChatGPT-5.5

The Verge article ‘All the evidence unveiled so far in Musk v. Altman’ is valuable because it turns the Musk v. Altman fight from a public-relations morality play into something more revealing: a governance autopsy of OpenAI’s founding. The evidence does not show a simple story of saint versus villain. It shows something messier and more important: OpenAI was born from a genuine fear of concentrated AI power, but almost immediately became a contest over exactly the same thing — concentrated AI power.

At the beginning, Musk and Altman appear aligned. The early emails describe a lab dedicated to general AI, individual empowerment, safety, public benefit, and broad distribution. OpenAI’s founding documents framed it as a nonprofit corporation, not organized for private gain, with the purpose of ensuring AGI benefits all humanity. That matters because Musk’s current lawsuit depends heavily on the claim that OpenAI betrayed this original mission.

But the exhibits also complicate Musk’s moral position. Musk was not merely a passive donor whose charitable dream was later hijacked. He helped shape the mission, structure, language, recruiting pitch, and early strategic posture. He also appears to have understood very early that governance was “critical” because he did not want to fund something that could go in the wrong direction. That is a defensible concern. But the later emails suggest that “wrong direction” may have meant, at least in part, “a direction I do not control.”

The most revealing material is the 2017 exchange involving Shivon Zilis, who relayed concerns from Greg Brockman and Ilya Sutskever. Their position was strikingly reasonable: Musk could have more time and more control, or less time and less control, but not less time and more control. They also wanted an ironclad structure preventing any one person from having absolute control over AGI. That is perhaps the moral center of the article. The people building OpenAI were not merely resisting Musk personally; they were resisting the idea that AGI, if created, should sit under the durable control of one dominant individual.

Musk’s reported response — essentially telling them to go start a company and saying he had had enough — is damaging to his present narrative. It makes him look less like a guardian of the nonprofit mission and more like a founder whose commitment to “broad benefit” became conditional on governance power. That does not mean OpenAI’s later commercial turn is beyond criticism. Far from it. The nonprofit-to-commercial evolution, Microsoft dependency, and closed-model posture all raise serious questions. But the evidence in this article suggests that Musk’s critique is not cleanly altruistic. It is entangled with lost influence, competitive rivalry through xAI, personal grievance, and retrospective mission-claiming.

Most surprising statements and findings

  1. Musk helped draft the founding mission he now says was betrayed.

    The article suggests Musk was deeply involved in OpenAI’s early mission language and structure, not merely a later outsider complaining about drift.

  2. The “nonprofit for humanity” ideal coexisted from the start with elite governance by a tiny group of powerful people.

    Altman’s early proposal imagined a foundation whose key decisions could be made by a handful of billionaires and tech figures. That is not exactly democratic control of humanity’s future.

  3. OpenAI’s founders worried about Musk’s control years before the Microsoft era.

    This undercuts the idea that the core governance conflict only arose when OpenAI became commercial. The control problem was present much earlier.

  4. Brockman and Sutskever’s concern was not anti-Musk so much as anti-single-point-of-control.

    Their alleged demand that no one person control AGI is arguably the most principled position in the evidence.

  5. Musk’s own position appears internally conflicted.

    He wanted AI to be broadly beneficial and opposed DeepMind-style concentration, yet the exhibits suggest he also sought a level of influence that others viewed as excessive.

  6. The early OpenAI story was never purely nonprofit idealism.

    Even in the founding discussions, there were questions of compensation, upside, talent recruitment, supercomputing access, data access, Y Combinator, Tesla data, SpaceX stock, and control.

  7. Nvidia’s early support matters symbolically.

    Jensen Huang providing OpenAI access to scarce computing power shows how dependent even “public benefit” AI was from the beginning on privileged relationships with private infrastructure owners.

Most controversial points

The most controversial implication is that both sides may be right about different parts of the story. Musk may be right that OpenAI’s current structure looks dramatically different from the public-benefit nonprofit vision sold in 2015. But OpenAI’s defenders may also be right that Musk’s preferred alternative was not neutral public-interest governance, but a version of OpenAI with Musk at or near the center of power.

A second controversy is whether “benefits all humanity” was ever an operationally meaningful governance principle. It sounds noble, but the article shows how quickly it collapses into practical questions: Who decides what benefits humanity? Who controls release? Who has veto power? Who owns the infrastructure? Who funds the compute? Who captures the upside? The phrase is morally attractive but institutionally under-specified.

A third controversy is that the lawsuit now sits in a world where Musk owns a direct OpenAI competitor. That does not invalidate his arguments, but it changes how they should be read. A lawsuit about charitable trust and mission integrity also has the effect of pressuring, distracting, and potentially constraining a rival while damaging its reputation.

Most valuable findings

The most valuable lesson is not about Musk or Altman personally. It is about AI governance. The article shows that founding ideals are weak unless they are translated into durable institutional constraints. Mission statements are not enough. Nonprofit status is not enough. Founder charisma is not enough. Even safety language is not enough.

The second valuable finding is that control of AGI was always the central issue. Not openness in the abstract. Not safety in the abstract. Not even profit in the abstract. The question was: who gets the final say when a system becomes powerful enough to affect everyone?

The third valuable finding is that OpenAI’s origin story has always had two competing moral logics. One is the public-interest logic: build AI safely, distribute benefits widely, avoid concentration. The other is the founder-capitalist logic: recruit elite talent, secure compute, use billionaire networks, move fast, and maintain strategic control. OpenAI did not later become contradictory; it appears to have been contradictory from birth.

ChatGPT’s view: what this says about Musk and OpenAI

This article portrays the relationship between Musk and OpenAI as a failed alliance between people who all understood that AGI could become an instrument of extraordinary power, but who disagreed over who should hold that power. Musk helped create OpenAI because he feared AI monopolies. But the evidence suggests he also wanted a degree of control that made others fear him becoming exactly the kind of concentrated power OpenAI was supposedly created to avoid.

I, ChatGPT, would characterize Musk’s behavior as a mixture of genuine conviction, founder entitlement, control-seeking, and retaliatory litigation. He appears genuinely concerned about AI risk and mission drift. But his conduct also looks proprietorial: as if OpenAI’s later success is something he partly regards as morally and historically his, and therefore something he is entitled to challenge, discipline, or reclaim when it escapes his orbit.

That makes the case both powerful and compromised. Powerful because OpenAI really does deserve scrutiny over whether it drifted from a public-benefit mission into a commercially dominant, closed, Microsoft-entangled AI platform. Compromised because Musk is not a neutral guardian of the commons. He is an estranged co-founder, a rival AI entrepreneur, and a man whose own emails suggest that his commitment to distributed AI governance weakened when distributed governance meant limiting his own control.

The deeper irony is brutal: OpenAI was founded to prevent one company, one lab, or one person from controlling the future of AI. The litigation now reveals that this fear was not only about Google, DeepMind, Microsoft, or later OpenAI. It was also about the founders themselves.