
“How to Rethink A.I.” by Gary Marcus — What Enterprise AI Users and Investors Can Learn From This

by ChatGPT-4o

The New York Times guest essay, How to Rethink A.I. by Gary Marcus (September 3, 2025), is a significant wake-up call for the AI industry. It dismantles the overhyped narrative of imminent Artificial General Intelligence (AGI) and urges a strategic shift away from brute-force scaling toward more cognitively grounded and trustworthy AI development. For enterprise users of AI and investors, this is more than a philosophical debate — it’s a map of risks, missed opportunities, and the path forward for sustainable, robust innovation.

Why This Essay Is Important

1. It punctures the myth of endless scaling.

Marcus sharply critiques the industry’s blind faith in “scaling laws,” the idea that simply increasing compute power and data size will inevitably lead to AGI or superintelligence. With GPT-5, Meta’s Llama 4, and Elon Musk’s Grok 4 showing only modest improvements despite massive scale-ups, the diminishing returns of this strategy are now visible. This challenges the dominant investment and product development strategy in generative AI to date.

2. It signals the limits of current large language models (LLMs).

Marcus calls out fundamental weaknesses in LLMs: persistent hallucinations, lack of reasoning, poor mathematical consistency, and an inability to grasp real-world concepts like the rules of chess. These limitations are not surface-level bugs — they reflect structural flaws in how these models operate. For enterprise applications requiring trust, compliance, or safety (e.g., legal tech, healthcare, education, finance), this is a critical concern.

3. It proposes a scientifically grounded roadmap.

Rather than merely critiquing, Marcus suggests a constructive way forward: integrating insights from cognitive science. He proposes:

  • Building world models, where systems learn representations of the environment and causal relationships.

  • Embedding core cognitive structures (such as understanding of time, space, and causality) instead of relying solely on internet-scale statistical learning.

  • Developing neurosymbolic AI, blending statistical LLMs with symbolic reasoning for more robust, explainable, and domain-sensitive intelligence.
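To make the last point concrete, the sketch below shows one minimal neurosymbolic pattern: a statistical model proposes an answer and a deterministic symbolic component verifies it before it is accepted. The `llm_propose` function is a hypothetical stand-in for any LLM call, and exact rational arithmetic plays the role of the symbolic checker; this is an illustration of the general pattern, not Marcus's specific proposal.

```python
# Minimal neurosymbolic sketch: a statistical model proposes, a symbolic
# component verifies. `llm_propose` is a hypothetical placeholder for any
# LLM call; exact rational arithmetic stands in for the symbolic layer.
from fractions import Fraction

def llm_propose(question: str) -> str:
    # Placeholder for a real LLM call; returns a free-text candidate answer.
    return "0.3333"

def symbolic_check(candidate: str) -> bool:
    # Deterministic verification: recompute the answer exactly and compare.
    expected = Fraction(1, 3)
    try:
        return abs(Fraction(candidate) - expected) < Fraction(1, 10_000)
    except ValueError:
        # Unparseable output is rejected rather than trusted.
        return False

question = "What is 1 divided by 3, to four decimal places?"
candidate = llm_propose(question)
verdict = "accepted" if symbolic_check(candidate) else "rejected"
print(f"LLM answer {candidate!r} was {verdict} by the symbolic checker.")
```

The design choice that matters is the division of labor: the generative model never has the final word on anything the symbolic layer can check.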

Consequences if Industry Fails to Heed These Warnings

A. Wasted Capital and Misaligned Investment Strategies

If investors continue to pour billions into scaling efforts with diminishing returns, the industry risks a second AI winter — not due to lack of progress, but due to inflated promises that crash into technical ceilings. Failed expectations could result in mass layoffs, stalled startups, and disillusionment in public markets.

B. Loss of Trust in AI Products

Enterprise clients already face mounting complaints over hallucinations, bias, and reliability. Trust, once broken, is hard to rebuild — particularly in regulated industries. The reputational damage could extend beyond individual companies to the broader ecosystem.

C. Regulatory Backlash and Compliance Risk

As Marcus notes, governments have let AI firms operate with minimal regulation. That tolerance may evaporate quickly if the harms — from deepfakes and misinformation to copyright infringement and excessive energy consumption — continue unchecked. Enterprises relying on LLMs without due diligence may face unexpected regulatory risk and compliance failures.

D. Stagnation in Innovation

If the field stays fixated on scaling as a panacea, we risk under-investing in genuinely novel architectures — including neurosymbolic models, hybrid learning systems, and embodied AI. This stagnation is already visible in model convergence, with all major players producing near-identical chatbots, and could stall long-term breakthroughs.

Lessons for Enterprise AI Users

1. Reassess Your Tech Stack

Rethink reliance on black-box LLMs for mission-critical applications. Demand clarity on model limitations, update schedules, and fine-tuning capabilities. If your vendor is promising AGI-level intelligence, treat that as a red flag, not a value proposition.

2. Mitigate Hallucination Risk

Deploy layered systems where LLMs are used only for surface generation tasks and are supervised or grounded by deterministic logic, databases, or symbolic AI components.
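A minimal sketch of such a layered design is shown below, assuming a hypothetical `llm_rephrase` helper: facts are served only from a deterministic store, the LLM is confined to surface wording, and missing facts trigger escalation rather than generation.

```python
# Minimal sketch of a layered design: facts come only from a deterministic
# store; the (hypothetical) LLM helper is limited to phrasing those facts.
FACTS = {  # stand-in for a governed database or knowledge base
    "refund_window_days": 30,
    "support_email": "support@example.com",
}

def retrieve_fact(key: str):
    # Deterministic layer: if the fact is not in the store, refuse rather
    # than let a generative model guess.
    if key not in FACTS:
        raise KeyError(f"No grounded answer available for {key!r}")
    return FACTS[key]

def llm_rephrase(fact_key: str, fact_value) -> str:
    # Placeholder for an LLM call used only for surface wording, never for facts.
    return f"Our records show that {fact_key.replace('_', ' ')} is {fact_value}."

try:
    value = retrieve_fact("refund_window_days")
    print(llm_rephrase("refund_window_days", value))
except KeyError as err:
    print(f"Escalating to a human agent: {err}")
```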

3. Diversify AI Approaches

Start pilot programs with vendors and researchers working on neurosymbolic systems or hybrid AI architectures. This will give you early insight into more robust and trustworthy alternatives to the current paradigm.

4. Audit for Cognitive Gaps

Use formal evaluation frameworks to identify how your AI systems handle reasoning, causality, and factual consistency. Don’t conflate fluent language generation with intelligence.
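As one illustration, the sketch below wires a few capability-specific probes into a small audit harness, assuming a hypothetical `model_answer` wrapper around whatever system is under test. Real audits would use far larger probe sets and established evaluation frameworks; the point is simply to report pass rates per capability so gaps are not averaged away.

```python
# Minimal cognitive-gap audit sketch. `model_answer` is a hypothetical
# wrapper around the system under test (LLM API, agent, etc.).
PROBES = [
    {"capability": "arithmetic", "prompt": "What is 17 * 24?", "expected": "408"},
    {"capability": "causality", "prompt": "If the power fails, does the lamp stay lit? Answer yes or no.", "expected": "no"},
    {"capability": "consistency", "prompt": "Is Paris the capital of France? Answer yes or no.", "expected": "yes"},
]

def model_answer(prompt: str) -> str:
    # Placeholder for the real system under test. This stub deliberately
    # fails the consistency probe to show how a gap surfaces in the report.
    return "408" if "17 * 24" in prompt else "no"

def run_audit(probes):
    results = {}
    for probe in probes:
        got = model_answer(probe["prompt"]).strip().lower()
        passed = got == probe["expected"].lower()
        results.setdefault(probe["capability"], []).append(passed)
    # Report a pass rate per capability rather than one blended score.
    for capability, outcomes in results.items():
        rate = sum(outcomes) / len(outcomes)
        print(f"{capability}: {rate:.0%} ({sum(outcomes)}/{len(outcomes)})")

run_audit(PROBES)
```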

Lessons for Investors

1. Re-evaluate Valuations

Many AI startups and hyperscalers have been valued as if AGI were imminent. Marcus's essay should force a repricing of these firms based on real-world performance rather than speculative futures. Look for ventures prioritizing hybrid systems, cognitive architecture research, or regulatory-aligned AI.

2. Fund the Next Paradigm

There’s a window of opportunity to invest in companies that take Marcus’s roadmap seriously — building world models, integrating symbolic reasoning, or innovating beyond brute-force training. These ventures may not have the hype of today’s giants, but they are positioned for long-term value creation.

3. Understand Your Exposure

Institutional investors should ask: How much of our AI portfolio is dependent on the scaling hypothesis? What happens to those valuations if GPT-6 and GPT-7 underperform? How many of our companies have real moats beyond compute and dataset access?

4. Advocate for Policy Engagement

Investors should support policy reforms that promote transparency, safety, and diverse research funding. The field must become less centralized and less dependent on corporate hype cycles.

Conclusion: Scaling Is Dead — Long Live Cognitive AI

Gary Marcus’s essay is not anti-AI — it’s pro-responsibility, pro-science, and pro-progress. He calls for a reorientation toward foundational, multidisciplinary thinking — grounded in how humans actually reason, not just how we speak. For enterprises and investors alike, this is a pivotal moment to re-anchor expectations and redirect resources. The future of AI isn’t just about bigger models. It’s about better minds — artificial and human — working together to build something worthy of trust.