- Pascal's Chatbot Q&As
What If A.I. Doesn’t Get Much Better Than This? – A Turning Point in the Hype Cycle
by ChatGPT-4o
In his New Yorker essay "What If A.I. Doesn’t Get Much Better Than This?" (August 2025), Cal Newport delivers a timely and incisive reflection on the state of artificial intelligence following the underwhelming release of OpenAI’s GPT-5. At the heart of the piece lies a simple but powerful question: What if the remarkable gains in generative AI observed over the past five years have already peaked? What if, instead of rocketing toward Artificial General Intelligence (AGI), we are now entering a period of diminishing returns?
The Rise and Stagnation of Scaling
Newport opens with a historical recap of the 2020 OpenAI paper on scaling laws, which fueled widespread belief that language models would continue improving exponentially as compute and data increased. This “hockey stick” vision led to GPT-3, GPT-4, and an AI gold rush that reshaped corporate investments, public discourse, and regulatory anxieties. With every new release, the assumption hardened: just keep scaling, and intelligence will emerge.
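The scaling-laws idea can be made concrete with a small sketch. The 2020 paper modeled loss as a power law in model size, which implies that every doubling of scale buys a smaller absolute improvement than the last. The constants below are hypothetical placeholders, not the paper's fitted values; the point is the shape of the curve, not the numbers:

```python
# Illustrative power-law scaling curve (hypothetical constants, not the
# paper's fitted values): loss falls as a power of parameter count N,
# so each doubling of N yields a smaller absolute gain than the last.

def loss(n_params: float, a: float = 10.0, alpha: float = 0.076) -> float:
    """Hypothetical power-law loss: L(N) = a * N ** (-alpha)."""
    return a * n_params ** -alpha

sizes = [1e9 * 2 ** k for k in range(5)]  # 1B, 2B, 4B, 8B, 16B parameters
losses = [loss(n) for n in sizes]
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]

# Each doubling still helps, but by progressively less.
assert all(g > 0 for g in gains)
assert all(gains[i] > gains[i + 1] for i in range(len(gains) - 1))
```

This is the tension at the heart of the debate: a power law never stops improving, but the marginal payoff per dollar of compute shrinks steadily, which is exactly what a "plateau" feels like in practice.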
But GPT-5’s release deflates this narrative. While technically improved in narrow domains—such as programming and multi-language editing—users and reviewers noted limited real-world benefits. It still hallucinated, struggled with reasoning, and offered marginal gains in creativity or usability over GPT-4. Far from triggering a superintelligent revolution, it confirmed what some critics had long suspected: we may be hitting a performance plateau.
The Limits of Benchmark Hype
A core insight of Newport’s essay is that benchmark metrics have become misleading proxies for actual progress. Charts showing minor improvements in model capabilities fail to reflect the lived reality of users. As Apple and ASU researchers independently showed, so-called “reasoning” models break down when pushed even slightly outside their training distribution. The illusion of progress is maintained by fine-tuned demos and metrics, not by the emergence of genuinely new cognitive capabilities.
This realization shifts the locus of innovation from pretraining (bigger, broader models) to post-training (tuning, reinforcement learning, and retrieval-augmented generation). Newport likens this to souping up a car: we’re improving handling, not building a rocket. Yet even these upgrades offer limited utility—they don’t radically transform productivity, automate entire professions, or enable machines to reason like humans.
Reframing Expectations: AI as Tool, Not God
Newport suggests that we recalibrate our expectations. Generative AI is not a path to divine superintelligence, but a powerful productivity enhancer. Used judiciously, it can help write code, summarize reports, or brainstorm creatively. Some jobs—particularly in copywriting, low-level coding, or voice acting—may disappear. But most work, especially that requiring critical thinking or originality, remains beyond its grasp.
He draws attention to the growing chorus of moderates—like Gary Marcus, Emily Bender, and Ed Zitron—who argue that AI is a multi-billion-dollar market, not a trillion-dollar one. They see the recent stagnation as a natural correction in an overheated cycle. Rather than heralding doom, it is an opportunity to inject realism into policy, investing, and public understanding.
AI’s Financial Hype vs. Real Value
The essay also explores the economic consequences of inflated AI expectations. Big Tech’s massive capital expenditures—over $560 billion in eighteen months—dwarf the roughly $35 billion in revenue generated over the same period, creating a dangerous imbalance. Retirement portfolios and markets are overexposed to speculative bets on AGI, reminiscent of the dot-com bubble. If AI plateaus, the economic fallout could ripple far beyond Silicon Valley.
A Call for Caution and Ethics
Rather than ending on a note of despair, Newport closes with a nuanced reminder: even if AI doesn’t revolutionize the world tomorrow, it still poses profound long-term challenges. The tools may plateau now, but more capable systems may still emerge. In the meantime, society must invest in regulation, digital ethics, and resilience. “We should proceed with less hubris and more care,” he writes—a fitting ethos for a moment when excitement must be tempered by sobriety.
Conclusion
Cal Newport’s essay is a sober yet hopeful meditation on AI’s future. It punctures the myth of inevitable, exponential progress while affirming the value of what’s already been built. Most importantly, it reframes the conversation around utility, ethics, and realism. If AI doesn’t get much better than this, perhaps that’s not a failure—but a chance to get smarter about how we build, use, and govern the technology we already have.
