Summary: RAND argues that AI will not become a single winner-take-all market, but a fragmented ecosystem where frontier, low-cost, and specialised models coexist because users value different things: capability, cost, integration, trust, compliance, and reliability.
The most valuable insight is that adoption depends on “relative net value,” not raw model power: the best AI in practice is the one that fits the workflow, reduces risk, integrates well, and solves a specific problem better than alternatives.
For AI makers, users, and rights owners, the future advantage will come from trusted, rights-cleared, domain-specific, auditable AI systems — not generic models alone.
No One Model to Rule Them All: RAND’s Quietly Radical Case for a Fragmented AI Future
by ChatGPT-5.5
RAND’s April 2026 working paper, Multi-Ecosystem Competition in Artificial Intelligence Adoption and Diffusion, is deceptively technical. On the surface, it is a diffusion-model paper: it adapts the classic Bass model of technology adoption to AI markets, adds switching dynamics, and introduces “relative net value” as the key driver of adoption. But underneath the equations is a strategically important argument: the AI market is unlikely to settle into a simple winner-take-all outcome. Instead, AI will diffuse through the economy as a set of competing ecosystems, each winning different segments depending on cost, capability, integration, trust, regulation, switching costs, and timing.
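For readers who want the machinery behind that claim: the classic Bass model describes cumulative adoption N(t) out of a market of potential size M using an innovation coefficient p and an imitation coefficient q. The form below is the textbook version only, not RAND's multi-ecosystem extension, which layers switching flows and relative-net-value terms on top of it.

```latex
% Classic Bass diffusion model (textbook form; RAND's paper extends it):
% N(t) = cumulative adopters, M = market potential,
% p = innovation coefficient, q = imitation coefficient.
\frac{dN(t)}{dt} = \left( p + q\,\frac{N(t)}{M} \right)\bigl(M - N(t)\bigr)
```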
That matters because much of today’s AI debate is still framed as a race: the biggest model, the best benchmark, the most compute, the strongest frontier lab, the fastest route to AGI. RAND’s paper suggests that this framing is incomplete. Raw model capability is only one part of adoption. In real markets, organisations do not simply buy “the smartest model.” They buy the model that fits their workflows, legal constraints, cost tolerance, security needs, legacy systems, and risk appetite. A frontier model may be the best choice for a defence contractor, a bank, or a high-end research lab. A cheaper model may be better for a small business. A specialised, licensed, auditable model may be better for medicine, law, education, publishing, or scientific research.
The central concept is “relative net value.” This is not just perceived performance. It is perceived value minus cost, compared with available alternatives. Value includes capability, reliability, integration, maturity, compliance, vendor support, ease of deployment, and suitability for the use case. Cost includes not only compute, but also implementation risk, maintenance, training, switching costs, and organisational disruption. This is a useful corrective to the current AI hype cycle. A model can be technically superior and commercially weaker if it is hard to integrate, expensive to govern, legally uncertain, or not trusted by the people expected to use it.
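The paper's own notation is not reproduced here, but one plausible way to formalise the idea is as a net-value gap against the best available alternative, where V_i is the perceived value of ecosystem i and C_i its total cost:

```latex
% One illustrative formalisation of relative net value (not RAND's notation):
% ecosystem i wins a user when its net value beats the best alternative.
\mathrm{RNV}_i = (V_i - C_i) - \max_{j \neq i}\,(V_j - C_j)
```

On this reading, a technically superior model with a high V_i can still lose on RNV_i if integration friction, governance burden, and legal uncertainty push C_i up.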
The paper’s most important conclusion is that AI adoption will likely produce durable market segmentation. Frontier, cost-effective, and specialised AI systems can grow simultaneously because they solve different problems for different users. This explains why cheaper Chinese models and more expensive U.S. frontier models may both gain ground. It also explains why large enterprises increasingly use multiple models rather than standardising on one. The future is not one universal AI brain. It is a layered, fragmented, multi-provider AI stack.
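To make the coexistence claim concrete, here is a deliberately crude toy simulation, not RAND's model: two ecosystems diffuse through two market segments via Bass-style dynamics, and each segment's new adopters split in proportion to the net value each ecosystem offers that segment. Every number below is invented for illustration.

```python
# Toy two-ecosystem diffusion sketch (illustrative only; not RAND's model).
# Two segments adopt AI via Bass-style dynamics; each segment's new adopters
# split in proportion to each ecosystem's net value for that segment.

P, Q, STEPS = 0.03, 0.38, 40  # innovation/imitation coefficients, periods

# net value (value - cost) of each ecosystem, per segment (hypothetical)
net_value = {
    "price_sensitive": {"frontier": 0.4, "low_cost": 0.9},
    "high_trust":      {"frontier": 0.9, "low_cost": 0.3},
}
market = {"price_sensitive": 1000.0, "high_trust": 600.0}  # segment sizes

adopters = {seg: {eco: 0.0 for eco in nv} for seg, nv in net_value.items()}

for _ in range(STEPS):
    for seg, nv in net_value.items():
        m = market[seg]
        total = sum(adopters[seg].values())
        new = (P + Q * total / m) * (m - total)  # Bass-style adoption inflow
        share_denom = sum(nv.values())
        for eco, v in nv.items():
            # new adopters split in proportion to relative net value
            adopters[seg][eco] += new * v / share_denom

for seg, ecos in adopters.items():
    split = ", ".join(f"{eco}: {n:.0f}" for eco, n in ecos.items())
    print(f"{seg}: {split}")
```

Even in this crude sketch, both ecosystems grow at the same time because the segments weight net value differently, which is precisely the segmentation argument.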
This has direct implications for publishers, rights owners, and knowledge businesses. If AI adoption is segmented by use case, then high-quality, licensed, verified, domain-specific content becomes more valuable, not less. In high-trust sectors, the decisive question will not be whether a model can produce fluent text. It will be whether it can produce reliable, attributable, current, rights-cleared, auditable answers inside real workflows. That is where rights owners can matter. They are not merely defending old content markets; they may become infrastructure providers for trusted AI adoption.
The most surprising statements and findings
The first surprising finding is RAND’s rejection of the winner-take-all assumption. Many assume that once one AI model becomes sufficiently powerful, users will converge around it. RAND argues the opposite: because users value different things, multiple AI ecosystems can maintain distinct market positions.
The second surprising point is that cheaper and more expensive AI systems can grow together. This challenges the simplistic view that low-cost models will inevitably crush frontier models, or that frontier models will monopolise the market through superior capability. RAND’s framework shows why neither outcome is inevitable and both tiers can expand at once: some users pay for maximum capability; others pay for adequacy, speed, support, locality, or low cost.
The third surprising point is that early advantage matters, but it is not destiny. Early entrants benefit from network effects, integration, user familiarity, and switching costs. However, they must remain “good enough.” A later entrant can still win if it delivers superior relative net value in a specific segment.
The fourth surprising finding is that switching costs are important but not the deepest force. They can delay change, but the enduring driver is relative net value. In plain English: lock-in helps, but bad products eventually lose if alternatives become sufficiently better.
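One illustrative way to express that relationship, again not RAND's notation: a user abandons incumbent i for entrant j only when the net-value gap is large enough to overcome the switching cost S_ij. Lock-in raises the threshold; it does not remove it.

```latex
% Illustrative switching condition (not RAND's notation):
% a user moves from incumbent i to entrant j when the net-value
% gap exceeds the switching cost S_{ij}.
(V_j - C_j) - (V_i - C_i) > S_{ij}
```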
The fifth surprising point is that integration may matter more than benchmark performance. RAND explicitly recognises that the best model in laboratory conditions may not be the fastest-adopted model in practice. A slightly weaker model that fits enterprise workflows, compliance systems, procurement rules, and legacy infrastructure may beat a more powerful but disruptive model.
The sixth surprising finding is that policy interventions can backfire if they accelerate the overall market without disproportionately helping the intended ecosystem. This is subtle but important. A government or regulator that wants to promote a domestic AI ecosystem cannot assume that general AI adoption support will help domestic firms. If the strongest foreign platforms capture most of the growth, broad adoption support may strengthen competitors.
The most controversial statements and findings
The most controversial implication is that AI markets may be less monopolistic than many critics fear, but also more structurally complex than regulators are prepared for. RAND’s argument reduces the likelihood of a single universal AI monopoly, but it does not eliminate concentration risk. Instead, power may concentrate within segments: frontier defence models, enterprise productivity suites, medical AI, legal AI, education AI, scientific AI, and consumer assistants.
Another controversial point is that cost competition is not a sustainable long-term advantage. This cuts against the current excitement around low-cost open or near-open models. RAND’s argument is that cost advantages matter early, especially when compute costs are high, but durable advantage comes from value differentiation: reliability, integration, specialisation, compliance, and support.
A third controversial implication is geopolitical. The paper explicitly uses the example of U.S. frontier models and Chinese cost-effective alternatives occupying different positions in the market. That suggests the AI future may not divide neatly into “the West wins” or “China wins.” Instead, different geopolitical AI ecosystems may become embedded in different sectors, regions, and price tiers.
A fourth controversial point is that policy can be used to promote or demote particular AI ecosystems for safety or national security reasons. This turns AI diffusion into an instrument of industrial strategy. Governments may increasingly intervene not only after AI causes harm, but before adoption patterns solidify.
A fifth controversial finding is that user sophistication matters. Technically sophisticated users can match models to their needs; less experienced users may make poor choices. This creates a governance problem: AI adoption may be “market-driven,” but not necessarily rational, safe, or rights-respecting if buyers do not understand model provenance, licensing, data leakage, hallucination risk, or downstream liability.
The most valuable statements and findings
The most valuable concept is “relative net value.” It gives executives a better decision framework than “which model is best?” The right question is: best for whom, in what workflow, under what legal constraints, at what cost, with what evidence, and with what switching risk?
The second most valuable finding is that AI adoption should be analysed by segment. Healthcare, law, education, finance, publishing, defence, software development, creative industries, and small-business automation will not adopt AI in the same way. Each sector has different tolerances for error, cost, integration friction, auditability, and legal exposure.
The third valuable finding is that timing advantages compound. Early entrants can build user bases, integrations, habits, defaults, and data feedback loops. But early entry only creates durable advantage if the product keeps improving. This is particularly relevant to companies building AI gateways, content APIs, workflow assistants, and domain-specific copilots.
The fourth valuable insight is that integration is strategic infrastructure. AI makers often talk about models; enterprises care about deployment. The winner in a given segment may be the provider that best embeds into existing systems, not the one with the flashiest demo.
The fifth valuable finding is that policy bundles are stronger than isolated interventions. Standards, interoperability, procurement rules, subsidies, safety requirements, transparency obligations, and data-governance rules affect different parts of adoption. Single-mechanism policy is too weak for a multi-ecosystem market.
The sixth valuable insight for rights owners is that trusted content can become a source of relative net value. Licensed, structured, current, attributable, high-quality content can improve reliability, compliance, provenance, and user trust. In high-stakes sectors, those features are not decorative. They are adoption drivers.
Recommendations for AI makers
AI makers should stop competing only on model size and benchmark performance. They need to compete on deployability, governance, auditability, integration, support, provenance, cost predictability, and sector-specific reliability. The future market will reward models that solve real organisational adoption problems.
They should build differentiated ecosystems rather than generic chatbots. The winners will be those that know which segment they serve: medical reasoning, legal research, software engineering, scientific discovery, corporate knowledge retrieval, consumer assistance, education, creative production, defence, or small-business automation. A vague “AI for everything” strategy will be weaker than a precise “AI for this workflow, with these guarantees” strategy.
They should treat rights-cleared and high-quality content as strategic infrastructure. In regulated and high-trust sectors, unlicensed or low-quality training data creates legal, reputational, and performance risk. AI makers that can prove the quality, provenance, and authorisation of their knowledge sources will have an adoption advantage.
They should reduce switching friction for users while avoiding abusive lock-in. Interoperability, exportability, transparent pricing, and clear governance will make adoption easier. But if vendors create opaque lock-in, they may gain short-term retention while inviting regulatory backlash and customer distrust.
They should build for multi-model environments. RAND’s paper strongly suggests that large organisations will not standardise on one model. AI makers should therefore support orchestration, routing, model comparison, logging, and governance across multiple systems.
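Here is a minimal sketch of what task-based routing across such a multi-model stack might look like. The tiers, rules, and model names are hypothetical placeholders, not product recommendations.

```python
# Minimal multi-model routing sketch (hypothetical tiers and model names).
from dataclasses import dataclass

@dataclass
class Task:
    domain: str        # e.g. "legal", "support", "coding"
    sensitivity: str   # "low" or "high"
    budget_per_call: float

# hypothetical registry: model name -> (capability tier, cost per call)
MODELS = {
    "frontier-x": ("frontier", 0.050),
    "budget-y":   ("budget", 0.002),
    "legal-z":    ("specialised", 0.020),
}

def route(task: Task) -> str:
    """Pick a model by domain, sensitivity, and cost, logging the choice."""
    if task.domain == "legal":
        choice = "legal-z"            # specialised, auditable
    elif task.sensitivity == "high":
        choice = "frontier-x"         # pay for maximum capability
    elif task.budget_per_call < 0.01:
        choice = "budget-y"           # adequacy at low cost
    else:
        choice = "frontier-x"
    tier, cost = MODELS[choice]
    print(f"routing {task.domain}/{task.sensitivity} -> {choice} ({tier}, ${cost}/call)")
    return choice

route(Task(domain="support", sensitivity="low", budget_per_call=0.005))
route(Task(domain="legal", sensitivity="high", budget_per_call=0.100))
```

Real orchestration layers add fallbacks, evaluation, per-model logging, and governance hooks, but the underlying idea is the same: send each task to the model with the highest relative net value for that task.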
Recommendations for AI users
AI users should not ask, “Which AI is best?” They should ask, “Which AI has the highest relative net value for this specific use case?” That requires evaluating accuracy, cost, reliability, integration, data protection, licensing, explainability, vendor support, auditability, and exit options.
They should avoid premature lock-in. Early adoption can bring productivity gains, but deep integration with one provider can create switching costs that later become strategically painful. Users should preserve portability where possible: data export, model-agnostic workflows, contractual exit rights, logging, and fallback providers.
They should adopt a portfolio strategy. Use frontier models where high capability matters, cheaper models where adequacy is enough, and specialised models where trust, domain knowledge, or compliance matters. The future enterprise AI stack will look less like one assistant and more like a managed ecosystem of models.
They should invest in AI literacy and procurement discipline. Less sophisticated users are more likely to make poor adoption decisions. Buyers need to understand provenance, hallucination risk, privacy, copyright exposure, security, model drift, and hidden operational costs.
They should measure outcomes, not theatre. Adoption should be tied to workflow quality, decision speed, error reduction, customer experience, compliance performance, and human productivity. Vanity metrics such as number of prompts, number of pilots, or number of deployed tools will not reveal whether AI is actually creating durable value.
Recommendations for rights owners
Rights owners should stop thinking of themselves only as content suppliers and start positioning themselves as trust infrastructure providers. RAND’s model implies that adoption depends on relative net value. Rights owners can increase that value by providing authoritative, structured, current, rights-cleared, attributable content that improves model reliability and reduces legal risk.
They should segment their AI licensing strategies. Not every AI use case has the same value. Training, grounding, retrieval, summarisation, agentic workflow integration, citation, attribution, and domain-specific decision support should be priced and governed differently. High-trust sectors should command higher value because the cost of error is higher.
They should insist on usage reporting, provenance, auditability, and contractual controls. If content becomes part of AI infrastructure, rights owners need visibility into where and how it is used. This does not necessarily mean blocking adoption; it means enabling adoption on terms that preserve value, accountability, and trust.
They should build or participate in trusted AI distribution channels. Content APIs, retrieval layers, AI gateways, rights registries, attribution systems, and verified knowledge graphs can make rights owners part of the AI adoption stack rather than passive victims of scraping.
They should use quality as the strategic argument. The strongest case is not only “pay us because copyright requires it.” It is also “use licensed, verified content because your AI product will otherwise be less reliable, less current, less defensible, and less adoptable in serious markets.”
Prediction: the future will be fragmented, layered and rights-sensitive
The future AI market will not crown a single winner. It will become a layered, segmented, multi-ecosystem market. A few frontier labs will dominate the high-capability tier. Low-cost models will spread widely in price-sensitive markets. Open and semi-open models will become infrastructure for experimentation, local deployment, and sovereignty strategies. Specialised models will grow in medicine, law, science, finance, education, defence, and publishing. Enterprises will use multiple models at once, routed by task, cost, sensitivity, and required reliability.
The most important battleground will shift from raw intelligence to trusted integration. The winning systems will be those that fit into real workflows, manage risk, prove provenance, control costs, respect rights, and generate reliable outputs. This is where rights owners, publishers, standards bodies, and sector-specific knowledge providers have leverage. They can either remain outside the AI stack and fight extraction after the fact, or become part of the infrastructure that makes AI usable in high-trust environments.
RAND’s paper therefore tells a more interesting story than “AI will replace everything” or “one company will dominate everything.” The likely future is messier: many models, many ecosystems, many regulatory regimes, many licensing structures, and many forms of lock-in. The winners will not simply be the companies with the largest models. They will be the ones that understand adoption: value, trust, timing, integration, switching costs, and control.
