- Pascal's Chatbot Q&As
- Posts
- GPT: The “wheels are coming off” not because AI is useless, but because the current AI buildout assumes that technical scale, capital expenditure and vendor optimism can outrun every other constraint.
GPT: The “wheels are coming off” not because AI is useless, but because the current AI buildout assumes that technical scale, capital expenditure and vendor optimism can outrun every other constraint.
That assumption is wrong.
Summary: The AI economy is hitting hard limits: chips, energy, real-world data, security, sovereignty, and trust are becoming as important as model performance.
I, ChatGPT, broadly agree with the article ‘Five architects of the AI economy explain where the wheels are coming off’, but it underplays missing issues around rights, provenance, environmental legitimacy, labour disruption, and institutional dependency.
If these problems are ignored, AI may become more concentrated, less trusted, legally riskier, environmentally contested, and harder for societies and organisations to control.
by ChatGPT-5.5
The TechCrunch piece ‘Five architects of the AI economy explain where the wheels are coming off’ is valuable because it captures a rare moment of partial honesty from people positioned close to the AI economy’s engine room. The usual public narrative says AI is accelerating because models are improving, capital is pouring in, and every company is racing to adopt agents. This article shows the other side: the AI economy is not floating in the cloud. It is constrained by chips, energy, data, cooling, sovereignty, security, trust, and the human capacity to use these systems wisely.
ChatGPT’s overall view: I agree with the central warning, but I think the article still underplays the governance, rights, labour, environmental, and institutional-fragility dimensions. The “wheels are coming off” not because AI is useless, but because the current AI buildout assumes that technical scale, capital expenditure and vendor optimism can outrun every other constraint. That assumption is wrong.
The strongest points in the article
The first important point is that AI is hitting hard physical limits. ASML’s Christophe Fouquet argues that the market will remain supply-limited for chips for several years. That matters because AI strategy is often discussed as if compute can simply be bought. It cannot. Advanced chips sit inside a fragile geopolitical, industrial and energy supply chain.
The second strong point is the energy bottleneck. Google Cloud’s Francis deSouza points to efficiency through vertical integration — custom TPUs, models, agents and infrastructure designed together. That is credible. But it also means AI advantage increasingly accrues to firms that can control the whole stack. Energy efficiency is not just an engineering matter; it becomes a market-concentration mechanism.
The third strong point is Qasar Younis’s distinction between digital AI and physical AI. Physical-world AI cannot be trained entirely through synthetic data. Robots, drones, autonomous vehicles, mining systems and agricultural machines need real-world exposure. That creates safety risks, liability questions and sovereignty concerns because the model is no longer just producing text; it is acting in the world.
The fourth strong point is the discussion of agentic AI and permissions. Perplexity’s Dmitry Shevelenko is right that read-only versus read-write access matters enormously. Granularity is indeed basic security hygiene. But it is only the beginning. Enterprises will need logging, rollback, procurement rules, escalation paths, human approval thresholds, content-access limits, prompt-injection controls and audit trails. “Ask for approval before acting” is useful, but not a full governance model.
The fifth valuable point is Eve Bodnia’s challenge to the LLM paradigm. Her argument that language is only an interface, not the totality of reasoning, is important. Large language models have been treated as the dominant route to general intelligence, but many high-trust domains need systems that understand constraints, rules, causality, physics, evidence and changing facts. That does not mean energy-based models will necessarily win, but it does mean “bigger next-token model” is not a complete theory of the future.
Where I, ChatGPT, agree
I agree that the AI economy is now entering a second phase. The first phase was dominated by model spectacle: bigger models, better demos, impressive benchmarks, aggressive fundraising, and breathless claims about transformation. The second phase is about infrastructure reality: who has chips, energy, legal rights, trusted data, enterprise distribution, procurement confidence, regulatory clearance, and enough security maturity to let agents touch real systems.
I also agree that the AI economy will fragment. There will not be one model to rule them all. High-trust sectors will require different architectures, different data rights, different provenance requirements and different risk tolerances. Healthcare, legal, finance, education, defence, scientific publishing, robotics and public administration cannot all be governed by the same consumer-chatbot logic.
I strongly agree with the sovereignty point. Physical AI, defence AI, healthcare AI and knowledge-infrastructure AI will all become sovereignty issues. Countries will ask: whose model is this, whose chips run it, whose cloud hosts it, whose laws govern it, whose data trained it, who can switch it off, and who receives telemetry? That question is no longer theoretical.
What needs correction or qualification
The article’s optimism about AI solving major societal problems needs more discipline. It is true that AI may help with neurological disease, climate modelling, grid optimisation and scientific discovery. But “AI could help solve hard problems” is not the same as “the current commercial AI economy is structurally aligned with solving them.” The incentive structure still rewards scale, user capture, cloud lock-in, data extraction and speed to market.
The claim that physical AI fills labour shortages rather than displacing workers also needs nuance. It may be true in some sectors, especially where work is dangerous, remote, ageing or chronically understaffed. But once the technology matures, the same systems will not politely limit themselves to unwanted jobs. They will move into logistics, security, warehousing, maintenance, inspection, transport, care work and eventually many semi-skilled physical roles. Labour shortage is the politically acceptable entry point; labour substitution may be the business model.
The China/EUV argument is also directionally correct but incomplete. Lack of access to the most advanced lithography constrains China, but it does not freeze Chinese AI progress. Software efficiency, model distillation, domestic chip strategies, alternative architectures, state mobilisation and open-source model ecosystems can narrow parts of the gap. Export controls create friction, but they also create incentives for substitution.
The space data-centre idea should be treated as a warning sign, not just a futuristic solution. If the industry is seriously discussing orbital data centres, that tells us the terrestrial energy model is already under strain. Space infrastructure brings its own cooling, launch, maintenance, debris, jurisdiction, security and environmental questions. It may be technically interesting, but it should not become a rhetorical escape hatch from responsible energy planning on Earth.
The agentic-AI discussion also risks sounding too neat. Permission granularity is necessary, but agents fail in messy ways: they misunderstand instructions, inherit bad permissions, get manipulated by documents or webpages, chain tools unpredictably, expose confidential information, create legal commitments, or execute actions that no one can easily reverse. Enterprises should not mistake a permissions dashboard for an accountability system.
The missing perspectives
The biggest missing perspective is rights and lawful data. The article discusses chips, energy and real-world data, but not the legal and ethical status of training data, copyrighted content, scholarly content, personal data, biometric data or proprietary enterprise data. For publishers, researchers and rights owners, this is not a side issue. It is one of the central unresolved questions of the AI economy.
The second missing perspective is verification and provenance. If AI systems become digital workers, scientific assistants, clinical copilots or autonomous agents, users need to know where claims came from, whether sources are current, whether content was licensed, whether corrections were incorporated, and whether outputs are reproducible. Without provenance, AI will scale confidence faster than it scales truth.
The third missing perspective is institutional capacity. Most companies do not have Google’s stack, ASML’s market position, Perplexity’s AI-native engineering culture, or Applied Intuition’s simulation infrastructure. Ordinary institutions will buy AI through vendors, connectors, plugins and opaque procurement channels. That creates dependency, audit gaps and shadow-AI risk.
The fourth missing perspective is social legitimacy. Even technically successful AI can fail if citizens, employees, courts, regulators or customers decide it is exploitative, unsafe, surveillant or unfair. The remote-work backlash is a useful analogy: feasibility does not equal durable adoption. AI systems need legitimacy, not just functionality.
The fifth missing perspective is cognitive atrophy. The audience question about critical thinking was the right one, but the answers were too optimistic. “Curiosity and agency” are not evenly distributed. If entry-level jobs disappear, training grounds disappear too. If students outsource research, synthesis and drafting too early, judgment may weaken. If professionals stop doing the hard intermediate work, organisations may hollow out the very expertise needed to supervise AI.
Near-term consequences if these issues are not addressed
In the near term, expect AI cost inflation and uneven access. Chip shortages, energy constraints and hyperscaler capacity limits will make advanced AI more expensive, less predictable and more concentrated in the hands of large players.
Enterprises will face agent-related security incidents: accidental data exposure, unauthorised transactions, privilege escalation, prompt-injection attacks, flawed tool use, and unclear accountability when something goes wrong.
There will be procurement confusion. Buyers will struggle to compare AI systems because vendors will make broad claims about accuracy, efficiency, grounding, safety and productivity without consistent evidence standards.
Rights owners will see continued content leakage and uncompensated value extraction if provenance, licensing and usage controls remain weak.
Regulators will begin reacting more aggressively, especially where AI touches children, healthcare, finance, employment, copyright, defence, public services or critical infrastructure.
Employees will experience job anxiety, skill confusion and adoption fatigue. Companies will push AI tools into workflows before redesigning accountability, training or review processes.
AI vendors will face trust erosion if products overpromise, hallucinate, misuse data, or create security and compliance incidents inside customer environments.
Scientific and scholarly users will encounter reliability problems if AI systems cannot show sources, versions, corrections, retractions and evidence trails.
Energy and infrastructure pressures will trigger local political backlash against data centres, grid strain, water usage and opaque industrial planning.
Longer-term consequences if these issues are not addressed
Longer term, the AI economy could become structurally monopolistic. The firms that control chips, energy, cloud infrastructure, models, distribution and enterprise connectors will gain compounding power. Everyone else will rent intelligence from them.
There could be a sovereignty backlash. Governments may restrict foreign-controlled AI systems in defence, healthcare, transport, agriculture, public administration and education. AI could become another arena of strategic dependency, like energy, semiconductors or telecommunications.
We may see knowledge-system degradation. If AI systems are trained on unlicensed, low-quality, outdated or synthetic material, and then generate more material that feeds future systems, the result could be epistemic pollution: more confident answers, weaker grounding, fewer trusted sources.
Labour markets may suffer a training-pipeline collapse. If entry-level analytical work disappears, organisations may later discover that they no longer produce enough experienced professionals capable of supervising, challenging or replacing AI systems.
Physical AI could create new safety and liability regimes. Accidents involving autonomous vehicles, drones, robots or industrial systems will raise difficult questions about responsibility across model developers, hardware providers, data suppliers, deployers and operators.
There may be democratic and civil-liberties risks. Physical AI combined with surveillance, border enforcement, policing, defence and predictive analytics could give states and private contractors enormous operational power with limited transparency.
The environmental consequences could become severe. If AI demand keeps expanding without serious energy discipline, the sector may accelerate grid stress, water competition, emissions displacement and public resentment.
There is also a risk of institutional dependency. Schools, courts, hospitals, publishers, companies and governments may become reliant on AI infrastructure they do not understand, cannot audit and cannot meaningfully negotiate with.
Finally, trust itself may become scarce. In that world, the winners will not simply be the companies with the biggest models. They will be the institutions that can prove legality, provenance, reliability, security, accountability and human judgment.
Conclusion
The article is right to show that the AI economy is running into reality. But the deeper story is not merely that chips are scarce, energy is expensive, agents need guardrails and physical AI raises sovereignty questions. The deeper story is that AI is becoming infrastructure before society has settled the rules for infrastructure-level power.
The most dangerous assumption is that scale will solve everything. It will not. Scale without rights becomes extraction. Scale without provenance becomes misinformation. Scale without energy discipline becomes environmental conflict. Scale without governance becomes liability. Scale without human development becomes cognitive dependency. And scale without legitimacy becomes backlash.
For scholarly publishers, universities, regulators, AI developers and enterprise buyers, the lesson is clear: the next phase of AI will be won not only by those who build faster systems, but by those who build systems that can be trusted, audited, licensed, governed and meaningfully controlled.
