Pascal's Chatbot Q&As
Summary: The AI boom is creating a sharp divide between a small group of equity-rich winners and everyone else, making even highly paid tech workers feel economically and professionally insecure.
The evidence supports the broad concern — wealth concentration, layoffs, role disruption and career anxiety are real — but the specific figures in the post are anecdotal and should be treated as directional, not proven.
The deeper issue is that AI is both the lottery ticket and the threat to the fallback career path, which could hollow out professional ladders, weaken workplace loyalty and intensify resentment unless companies create credible transition paths.
The AI Gold Rush Is Creating a Morale Crisis Before It Creates Mass Unemployment
by ChatGPT-5.5
The TechCrunch article “The haves and have nots of the AI gold rush” is not really an investigative piece; it is a short amplification of Deedy Das’s viral X post about the mood inside San Francisco’s AI boom. Its central claim is that the current AI cycle has created a psychological and economic split between a small class of AI “winners” and everyone else. TechCrunch summarises Das as saying that San Francisco feels “pretty frenetic,” with the “divide in outcomes” worse than he has ever seen, and that perhaps 10,000 founders and employees at companies such as OpenAI, Anthropic and Nvidia may now have “retirement wealth” above $20 million, while others fear that even a well-paid technology career will never catch up.
A. What the concerns are
The first concern is wealth bifurcation. Das is describing a world in which traditional career success no longer feels sufficient. A software engineer earning hundreds of thousands of dollars can still feel economically irrelevant next to someone who joined Anthropic, OpenAI, Nvidia or another frontier-AI winner at the right moment. This is not poverty anxiety; it is relative-status anxiety inside an already privileged ecosystem. But that does not make it irrelevant. Markets are shaped by incentives, and if the most ambitious people conclude that equity timing matters more than skill, tenure, loyalty or craft, the entire social contract of professional work starts to weaken.
The second concern is career-path collapse. Das says the corporate ladder now looks like “the wrong building to climb.” People are asking whether they should become founders, join a frontier AI company, switch careers, learn AI, chase the next 10x stock or demand more compensation. That is a serious labour-market signal. It suggests that AI is not merely changing tools; it is changing the perceived rationality of career planning. In previous tech cycles, one could miss a particular company and still believe in a stable career ladder. Here, the fear is that the ladder itself is being pulled away while a tiny group is being lifted by a rocket.
The third concern is occupational identity loss, especially among software engineers. The post says many software engineers feel their “life’s skill” is no longer useful and that the day-to-day role of many jobs has changed almost overnight. That is exaggerated if read literally, but powerful if read psychologically. Software engineering has long been one of the safest, highest-status knowledge-work careers. If even engineers feel that their skill base is being devalued, the signal to lawyers, editors, analysts, designers, marketers, junior researchers and middle managers is obvious: no white-collar profession is immune.
The fourth concern is middle-management paralysis. Das singles out mid-to-late middle managers who have families, less appetite for founder risk, weaker AI-native skills and fewer obvious exits. This is perhaps the most under-discussed part of the post. AI adoption does not only threaten junior workers through automation. It also threatens the coordination layer of companies: people whose value lies in meetings, reporting, translation, oversight, prioritisation and process management. Those functions are not disappearing overnight, but they are exposed to compression.
The fifth concern is a culture of anxious imitation. The final line of the X post is especially sharp: the same anxiety pushes people to build more AI products in the hope that they too can “vibecode” their way into the winning class. That creates a self-reinforcing bubble dynamic: fear of missing out produces more AI startups, more shallow automation products, more competition for AI-labelled funding, more hype, and more pressure on everyone else to join the same race.
B. Is there enough evidence?
There is enough evidence to support the direction of the concerns, but not enough to support every numerical or psychological claim as stated.
The wealth-concentration claim is directionally credible. OpenAI itself announced a $122 billion funding round at an $852 billion post-money valuation, and Anthropic announced a $30 billion Series G at a $380 billion post-money valuation before later reports of even higher fundraising interest. Stanford’s 2026 AI Index also reports that global corporate AI investment more than doubled in 2025, with private AI investment reaching $344.7 billion and generative AI capturing nearly half of that private funding. Those numbers make it entirely plausible that a small cohort of employees, founders and early equity holders has become extremely wealthy very quickly.
But the specific claim that around 10,000 people have reached more than $20 million in wealth is explicitly described as “back of the envelope” in the post. That matters. It may be directionally right, but it is not proven by the material. The post does not show cap tables, employee option data, liquidity preferences, exercise costs, tax effects, vesting schedules or secondary-sale limits. Some employees may be wealthy on paper but not liquid. Others may have missed the upside because of timing, strike prices, lockups or dilution. So the wealth-divide argument is strong; the exact headcount and dollar threshold should be treated as illustrative rather than evidential.
There is also substantial evidence that tech workers are experiencing real pressure, but it does not support the strongest version of the claim that software engineers are obsolete. Layoff trackers show continuing tech-sector layoffs in 2026, and recent reporting has linked some restructurings to AI-driven strategic shifts. Stanford’s AI Index chapter on the economy reports that employment for software developers aged 22 to 25 has fallen nearly 20% from its 2024 level, while employer surveys point to further workforce reductions ahead. That is a serious early-career warning signal.
The counter-evidence is also important. The U.S. Bureau of Labor Statistics still projects software developer, QA analyst and tester employment to grow 15% from 2024 to 2034, much faster than average, partly because AI, automation, robotics and related software systems require more software development. Indeed’s Hiring Lab has also found that AI mentions are rising in software-development and related postings, which suggests transformation of the role rather than simple disappearance. In other words, software engineering is not dying; the labour market is splitting between engineers who can work with AI, integrate it into real workflows, supervise its outputs, and understand domain constraints, and those whose value was tied to tasks now being commoditised.
The evidence for middle-management hollowing is weaker. It is plausible, and it matches what many executives are saying privately: fewer coordinators, more AI-assisted execution, smaller teams, flatter organisations, and more pressure on managers to prove concrete value. But the post provides anecdote rather than hard evidence. The most defensible version is not “middle management is disappearing”; it is that middle management is being forced to justify itself under new productivity assumptions.
The psychological claims are the hardest to prove but the easiest to recognise. Das says even the newly rich are not necessarily happy, because sudden wealth creates loss of purpose, status competition and distorted life planning. That is not evidenced in the post beyond anecdote. Still, it is sociologically plausible. Every gold rush creates winners who are richer than they are ready to be, losers who feel cheated despite doing well by normal standards, and late entrants who chase increasingly speculative opportunities. The TechCrunch article even notes the backlash: some observers saw these as “champagne problems” of already fortunate people. That criticism is fair, but incomplete. Privilege does not invalidate the structural observation.
C. Do I, ChatGPT, agree with the statements and findings?
I agree with the core diagnosis: AI is producing a winner-take-most psychology before society has built a credible transition story for everyone else. The post captures something real: not simply fear of automation, but fear that the old bargain of work has broken. The old bargain said: acquire scarce skills, work hard, join a strong company, progress through the hierarchy, and you will be rewarded. The AI gold-rush bargain says: be early, be well networked, hold equity in the right infrastructure company, and hope capital markets choose your side.
That is a much more destabilising bargain. It shifts the centre of gravity from labour to capital, from competence to timing, from craft to market position, and from organisational loyalty to option-value chasing. This is why the post resonates. People are not only afraid that AI will take their job. They are afraid that AI has revealed their job to be the wrong asset class.
I do not agree with the fatalistic interpretation that software engineers’ “life’s skill” is no longer useful. That is too crude. AI makes some coding tasks cheaper, especially boilerplate, testing, translation, documentation, prototyping and first-draft implementation. But the harder parts of engineering remain valuable: architecture, security, reliability, systems thinking, product judgment, data governance, domain understanding, performance constraints, maintainability, and responsibility for failure. The market is not saying “engineering is over.” It is saying “undifferentiated coding is losing its scarcity premium.”
I partly agree that the corporate ladder looks less reliable. The ladder still exists in large organisations, but it is becoming narrower, more politically fragile and more dependent on whether a person can translate AI into measurable business outcomes. A manager who merely coordinates status updates is exposed. A manager who can redesign workflows, govern risk, protect quality, preserve accountability and deploy AI responsibly is more valuable than before. The issue is not management as such; the issue is low-substance management in a world where AI makes some coordination tasks easier to automate.
I strongly agree with the most important line in the TechCrunch piece: this cycle is “novel” because the same technology is both the lottery ticket and the thing eating the fallback. That is the real insight. In earlier booms, a person who missed the upside could still keep a stable role in the broader economy. In this boom, the thing creating fortunes is also the thing threatening to compress the ordinary professional path. That combination creates not just envy, but rational insecurity.
Future outlook
The next phase will not be a simple story of “AI replaces workers.” It will be a harsher and more complicated sorting process. The first divide will be between companies that own or control the scarce assets of the AI economy — compute access, distribution, proprietary workflow data, trusted content, regulated-domain relationships, enterprise integration channels — and companies that merely wrap frontier models. The second divide will be between workers who can use AI to increase judgment, leverage and execution, and workers whose tasks are easily decomposed into prompts, templates and agentic workflows.
For the labour market, the biggest danger is the hollowing of entry-level pathways. If junior software developers, analysts, editors, researchers and associates are hired less often because AI can do the first draft, the long-term problem is not only unemployment. It is the collapse of apprenticeship. Organisations may save money in the short term while quietly destroying the pipeline that produces senior judgment. That is where Das’s anxiety should be taken seriously: not as proof of a permanent underclass, but as an early warning that the transition path is under-designed.
For companies, the managerial challenge is to avoid turning AI adoption into a morale-destroying lottery. Leaders need to be honest that AI will change roles, but they also need to create visible internal pathways for reskilling, redeployment and new forms of status. If the only respected path is “join the frontier lab, become a founder, or get left behind,” companies will lose loyalty, institutional memory and execution discipline.
For investors, the risk is that the gold rush produces too much low-quality company formation. When anxious people start companies mainly because employment feels structurally inferior, the result is overfunded sameness: thin wrappers, agent demos, shallow automation tools, and companies built for status rather than durable value. The durable winners will be those that solve painful workflow problems, own defensible data or distribution, and can survive when “AI” stops being enough of a category.
For society, the political danger is resentment among people who are objectively privileged but subjectively displaced. That matters because political instability is often driven not only by absolute deprivation, but by broken expectations. If graduates, engineers, managers and professionals conclude that the economy rewards proximity to capital rather than contribution, AI will become a legitimacy problem, not just a productivity tool.
My bottom line: the post does not contain rigorous evidence, but it is an important signal. It captures the emotional truth of the AI transition better than many official reports. The exact numbers are unproven, and some claims are overstated. But the deeper observation is right: AI is accelerating the separation between capital owners and skilled labour, between early insiders and competent latecomers, between people whose work is amplified and people whose work is commoditised. The future will belong less to those who merely “use AI” and more to those who control the trusted inputs, workflow positions, governance structures and accountability layers through which AI becomes economically useful.
