PwC’s 29th Global CEO Survey: Executives are convinced AI is central to competitiveness, but most companies are still stuck in a “pilot-and-hope” phase where the economics don’t reliably show up.
Many companies are buying tools before they’ve built the operating conditions that make AI economically compounding (data access, workflow integration, governance, adoption, measurement).
The AI Paradox in the CEO Suite: Everyone’s Investing, Few Are Winning—Yet
by ChatGPT-5.2
PwC’s 29th Global CEO Survey reads like a status report from the early stages of an industrial shift: executives are convinced AI is central to competitiveness, but most companies are still stuck in a “pilot-and-hope” phase where the economics don’t reliably show up. Meanwhile, cyber risk, geopolitics, and tariffs are pushing leadership attention toward the urgent—even as the survey’s strongest signal is that the companies moving fastest on reinvention are outperforming the ones hesitating.
That tension—between fear-driven short-termism and opportunity-driven reinvention—frames the most surprising, controversial, and valuable findings in the data, and it’s also the best launchpad for predictions about where AI is going next.
The most surprising findings
1) The ROI story is (still) weak for most companies—despite the hype
The headline that should sober any boardroom: more than half of CEOs (56%) report neither revenue uplift nor cost reduction from AI in the last 12 months. Only 30% report increased revenue from AI, 26% report lower costs, and just 12% report both lower costs and higher revenues.
This is surprising not because AI can’t create value, but because it suggests many companies are buying tools before they’ve built the operating conditions that make AI economically compounding (data access, workflow integration, governance, adoption, measurement).
2) “AI at enterprise scale” is rare—usage is thin across core functions
CEOs report relatively limited “large or very large extent” deployment across key areas—roughly 22% in demand generation, 20% in support services, 19% in products/services/experiences, 15% in direction setting, and 13% in demand fulfilment.
In other words: many organisations are still using AI as an overlay (assistants, experiments, pockets of automation), not as a redesign lever for how the business runs.
3) Foundations are the bottleneck—not model capability
PwC explicitly links measurable value to enterprise-scale deployment aligned to business strategy, enabled by foundations like: a tech environment that supports integration, clear AI road maps, formalised responsible AI and risk processes, and a culture that enables adoption.
And the survey implies these foundations are missing in many firms—for example, it notes that many companies “still lack AI foundations such as clearly defined road maps and sufficient levels of investment.”
The most controversial findings (or at least, the most uncomfortable)
4) Winners and laggards are separating fast—and “caution” is penalised
PwC draws a sharp contrast between “dynamic” and “cautious” companies. The cautious group—15% of the sample, defined as companies not planning major acquisitions and saying geopolitics makes them less likely to make large investments—are growing more slowly (by two percentage points) and have profit margins three points lower than peers.
That’s controversial because it pushes against the instinct many leaders have in volatile conditions: pause, conserve, de-risk. The data argues that the default “wait-and-see” stance is itself a measurable business risk.
5) “Innovation theatre” is implied to be widespread
Half of CEOs say innovation is central to strategy, yet only 8% say they’ve implemented at least five out of six “innovation-friendly practices” to a large or very large extent.
That gap is uncomfortable because it suggests many leadership teams are talking about innovation while operating systems (portfolio discipline, kill-switches for weak R&D, customer testing loops, venturing capability) aren’t in place. It’s also a warning about AI: without operating discipline, AI becomes another theatre layer.
6) Trust isn’t “soft”—it’s priced
Two-thirds of CEOs (66%) report stakeholder trust concerns (AI safety/responsible AI, privacy, transparency, climate performance impacts, etc.). PwC then ties this to market performance: public companies with the fewest trust concerns delivered total shareholder returns about nine percentage points higher than those with the most concerns (over a 12-month period).
That’s a direct rebuke to the idea that responsible AI, privacy, and transparency are “compliance costs.” The survey’s framing is: trust failures are value-destruction events.
The most valuable findings (actionable signals)
7) The “AI vanguard” is identifiable—and it behaves differently
PwC describes a “vanguard” (the ~one in eight achieving both revenue gains and cost reductions from AI). What distinguishes them is not magic—it’s maturity: better foundations and broader application. For example, 44% of vanguard companies have applied AI to products/services/experiences vs. 17% for others.
This is valuable because it gives leaders a concrete diagnostic: if you’re not moving AI into the product and core workflows, you’re unlikely to see durable returns.
8) Sector boundaries are dissolving—and CEOs are acting on it
42% say their company has started competing in new sectors in the last five years, and among those planning major acquisitions, ~44% expect deals outside their current sector/industry.
AI is both cause and accelerant here: it lowers the cost of entering adjacent value chains (especially where distribution, customer interaction, and decisioning can be software-mediated).
9) CEO time allocation is structurally misaligned with long-term reinvention
CEOs report spending 47% of their time on issues with horizons under one year, versus 16% on horizons beyond five years.
This matters because AI advantage compounds over multi-year horizons (data asset building, operating model redesign, governance, talent pipelines). If leadership attention is structurally skewed short-term, AI becomes tactical—and the survey shows tactical AI often doesn’t pay.
Predictions: how AI will develop into the future (based on these signals)
Prediction 1: “AI ROI” will polarise—most firms will stagnate, a minority will compound
The survey already shows a thin top tier (the 12% vanguard) pulling away. As foundations become the real differentiator, expect a widening gap: companies with integrated data, redesigned workflows, and strong governance will get compounding productivity and faster product cycles; laggards will accumulate “AI tool sprawl” without economic impact.
Prediction 2: The next enterprise wave is workflow AI, not chat
Low “large extent” deployment in fulfilment, direction setting, and product experience signals that many firms are still in the “assistive” era. The move that unlocks ROI is embedding AI into operational workflows (case handling, underwriting, procurement, clinical/admin pathways, engineering change control, customer success)—the places where cycle time and error rates translate directly into margin and growth.
Prediction 3: AI governance will become a competitive capability, not a legal afterthought
Because trust concerns are widespread and correlated with shareholder outcomes, governance will harden into an operational system: model risk management, incident response, auditability, privacy-by-design, and clear accountability. “Responsible AI” will look less like a policy binder and more like cybersecurity: continuous controls, testing, monitoring, and drills.
Prediction 4: AI will accelerate cross-sector invasion and ecosystem consolidation
With 42% already competing across sectors and many planning out-of-sector acquisitions, expect AI-enabled entrants to attack incumbents’ margins by unbundling distribution, customer interface, and decisioning layers. The winners will be firms that can partner at scale and interoperate across ecosystems—because value creation will increasingly happen between organisations, not only inside them.
Prediction 5: Cyber risk and AI risk will merge into a single threat model
CEOs already rank cyber risk alongside macro volatility as a top threat, and many plan enterprise-wide cybersecurity upgrades in response to geopolitics. As AI becomes embedded, the attack surface shifts: prompt injection, data poisoning, model inversion, agent misuse, access-control failures. The firms that treat AI security as part of cyber resilience (not “innovation”) will avoid the trust shocks that the survey suggests markets punish.
Recommendations for business leaders
1) Stop measuring “AI activity” and start measuring “AI advantage”
Track outcomes that tie to enterprise value:
- Cycle time reductions in core workflows
- Error/rework reduction
- Conversion/retention lift
- Margin expansion attributable to automation or better decisioning
Then ask: are we building repeatable capability—or just running projects?
2) Build the foundations the survey keeps pointing to
If you want vanguard economics, copy vanguard prerequisites:
- A clear AI road map linked to strategy
- Data access and integration (and permissioning) that makes AI usable in real workflows
- Formal responsible AI + risk processes
- Adoption muscle (training, incentives, workflow redesign)
3) Push AI into products and customer-facing experiences—carefully
The survey’s vanguard signal (44% vs 17% applying AI to products/services/experiences) suggests productisation is where durable differentiation forms. Make it real: ship improvements customers will pay for, while designing controls that prevent trust failures.
4) Treat trust like financial capital
Make trust a boardroom metric—operational trust, accountability trust, and digital trust—and invest in controls as deliberately as you invest in growth. The survey’s TSR gap implies this is not optional branding; it’s valuation hygiene.
5) Fight “innovation theatre” with hard portfolio discipline
If innovation is truly central, implement the boring mechanisms:
- Rapid customer testing loops
- Kill-switches for underperforming R&D
- Real external collaboration
- A venturing/incubation path with measurable outputs
6) Reinvent your calendar before you try to reinvent your company
If 47% of leadership time is trapped in the <12-month horizon, multi-year AI advantage won’t compound. Put protected time on the calendar for long-horizon decisions: data strategy, platform architecture, operating model redesign, and ecosystem moves.
7) Don’t let volatility justify strategic paralysis
The survey’s “cautious companies” underperform on growth and margin. The practical takeaway isn’t “take reckless bets”—it’s: keep moving while de-risking intelligently (stage investments, acquire capabilities not customers, partner where speed matters, and build resilience in parallel).
