Summary: AI is becoming the new enterprise interface: shaping customer discovery, shopping, service, surveillance, and internal workflows, often through platforms companies do not fully control.
At the same time, the cheap-AI era is ending, meaning enterprises will face rising token costs, tighter limits, model lock-in risks, and the need for serious AI cost governance.
Large companies should prepare by using AI selectively, protecting customer trust and data, avoiding pointless automation, building multi-model strategies, and preserving human value where it matters.

The Great AI Tollbooth: How Enterprises Will Be Squeezed, Watched, Optimised — and Forced to Choose What Kind of Company They Want to Become

by ChatGPT-5.5

The two Verge articles, “I saw the future of retail, and it’s all AI” and “You’re about to feel the AI money squeeze,” describe two sides of the same coming enterprise reality. One looks outward, at the AI-saturated future of retail: holographic shop assistants, agentic checkout, AI shopping interfaces, AI-generated recommendations, in-store surveillance, personalised promotions, and a creeping sense that every consumer interaction is becoming measurable, optimisable, and monetisable. The other looks inward, at the economic machinery behind that future: the end of cheap AI, rising token costs, stricter rate limits, enterprise pricing, ads, usage metering, model consolidation, and a growing need for companies to decide which AI capabilities are truly worth paying for.

Together, they point to a future in which large enterprises will no longer be asking, “Should we use AI?” They will be asking something much harder: where should AI sit in the value chain, who controls the interface with the customer, who owns the data exhaust, who carries the cost, and how much automation will customers, employees, regulators, and shareholders tolerate?

The answer will define the next decade of enterprise strategy.

1. AI will become the new customer interface — but that means losing control of the customer relationship

The retail article shows that AI is moving from back-office experimentation into the visible customer journey. At the National Retail Federation show, AI was everywhere: holographic avatars, AI-powered merchandising, smart people-counting, agentic shopping, AI checkout, automated ordering, personalised discounts, and in-store behavioural tracking. The most strategically important example is Google’s Universal Commerce Protocol, which is designed to let retailers communicate directly with AI agents so consumers can buy from a retailer inside Google’s AI interface rather than visiting the retailer’s own website.

That is the key shift. The future is not merely “retailers using AI.” It is retailers being intermediated by AI platforms.

For large enterprises, this has enormous implications. In the old digital economy, companies fought for search rankings, app downloads, email subscribers, website traffic, loyalty programmes, and direct customer relationships. In the AI-agent economy, customers may not visit the retailer, publisher, bank, airline, university, insurer, or healthcare provider directly. They may ask an assistant: “Find me the best option,” “order what I usually buy,” “summarise the policy,” “book the cheapest trip,” “recommend the right medical resource,” or “compare these products.”

That means the enterprise may become a supplier to someone else’s conversational interface. The brand risks being reduced to an API endpoint. The customer relationship may migrate from the company’s own environment to Google, OpenAI, Anthropic, Amazon, Apple, Meta, or whichever agent layer becomes dominant.

This is especially important for large enterprises with trusted brands. A retailer, publisher, bank, healthcare company, or educational provider may assume its brand equity protects it. But if AI agents decide what gets surfaced, recommended, bundled, discounted, or ignored, the competitive battlefield changes. Traditional SEO becomes AEO, GEO, GSO, or whatever acronym comes next. Companies will have to optimise not only for humans and search engines, but for opaque AI systems that decide visibility.

The future enterprise question will be: are we discoverable, trusted, and transactable inside AI-mediated environments — without surrendering our margins, data, brand, and customer relationship?

2. AI will create a new surveillance layer across physical and digital commerce

The retail article also shows how the logic of online tracking is moving into physical space. The example of in-store cameras assigning shoppers a global ID, estimating demographics, tracking gaze, measuring attention, recording whether someone picks up a product, and triggering personalised promotions is not just a retail curiosity. It is a preview of enterprise infrastructure more broadly.

The commercial promise is obvious: better conversion, better inventory decisions, better store layout, better advertising attribution, better customer segmentation. The social risk is equally obvious: a world in which physical movement becomes behavioural data; attention becomes a measurable asset; and privacy notices become legal theatre masking a deeper normalisation of ambient surveillance.

Large enterprises will be tempted to adopt these systems because competitors will. The pressure will be particularly strong in retail, transport, hospitality, entertainment, healthcare, education, and physical workplaces. Once one player can claim higher conversion rates, lower theft, better staffing, and more personalised offers, others will feel compelled to follow.

But this creates serious governance risk. The fact that video is allegedly deleted “a millisecond” after metadata is captured does not solve the problem. The metadata is often the valuable and sensitive part. A faceprint, demographic inference, behavioural profile, store-path history, attention score, or purchase-propensity signal can be more commercially useful than the raw image. Enterprises that treat this as merely “analytics” will underestimate the legal, reputational, ethical, and employee-relations risks.

The future will therefore split companies into two categories: those that treat AI-enabled behavioural tracking as a growth hack, and those that treat it as high-risk infrastructure requiring explicit governance, data minimisation, auditability, customer choice, retention limits, and explainability.

The latter group will be more resilient.

3. The AI free ride is ending — and enterprise AI costs will become a board-level issue

The second article is the economic counterweight. It argues that the era of cheap or free AI is ending. AI labs have absorbed vast costs to win users, developers, and enterprise adoption. Now investors expect returns. That means higher prices, tighter rate limits, token-based billing, premium tiers, ads, usage restrictions, and new enterprise contracts.

This matters because many companies have built early AI plans on artificially subsidised economics. They have assumed that AI will keep getting cheaper, more capable, and more available at the same time. That may be partly true at the model level, especially with smaller models, open-weight models, and task-specific systems. But frontier AI, reasoning models, agents, long-context workflows, and always-on background automation are expensive. Agentic systems do not simply answer a query; they plan, call tools, verify, backtrack, spawn subtasks, and consume large volumes of invisible tokens.

For large enterprises, AI cost management will become as important as cloud cost management became after the first wave of digital transformation. The pattern will be familiar: first comes experimentation, then uncontrolled adoption, then surprise bills, then procurement controls, then architecture rationalisation, then FinOps discipline. In AI, this will become AI FinOps: tracking model usage, token consumption, latency, quality, task value, vendor dependency, and cost per business outcome.

This will be particularly challenging because AI costs are not always visible to the business user. A one-sentence prompt may trigger thousands of tokens of hidden reasoning. A simple workflow may call several models and tools. A coding agent may run continuously in the background. A customer-service agent may be cheap per interaction until millions of interactions happen. A research assistant may look efficient until retrieval, context expansion, safety checks, and model calls are included.
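
As a rough illustration of why these costs are easy to miss, the sketch below totals the visible and hidden model calls behind a single agentic run and divides by successful business outcomes. The model tiers, per-token prices, and token volumes are illustrative assumptions, not figures from either article.

```python
from dataclasses import dataclass

@dataclass
class ModelCall:
    """One model or tool invocation inside a workflow (visible or hidden)."""
    model: str
    input_tokens: int
    output_tokens: int

# Illustrative per-million-token prices; real vendor pricing varies and changes.
PRICE_PER_MTOK = {
    "frontier-reasoning": {"input": 5.00, "output": 20.00},
    "mid-tier": {"input": 0.50, "output": 2.00},
    "small-local": {"input": 0.05, "output": 0.10},
}

def workflow_cost(calls: list[ModelCall]) -> float:
    """Sum the cost of every call, including hidden planning and verification steps."""
    total = 0.0
    for call in calls:
        price = PRICE_PER_MTOK[call.model]
        total += call.input_tokens / 1e6 * price["input"]
        total += call.output_tokens / 1e6 * price["output"]
    return total

def cost_per_outcome(calls: list[ModelCall], successful_outcomes: int) -> float:
    """The metric worth governing: cost per successful business outcome, not per prompt."""
    return workflow_cost(calls) / max(successful_outcomes, 1)

# A "one-sentence prompt" that fans out into planning, tool calls, and checks.
example_run = [
    ModelCall("frontier-reasoning", 1_200, 3_500),  # hidden planning and reasoning
    ModelCall("frontier-reasoning", 8_000, 1_000),  # tool results fed back into context
    ModelCall("mid-tier", 4_000, 600),              # summarisation step
    ModelCall("small-local", 2_000, 50),            # classification and safety check
]
print(f"Cost of one agentic run: ${workflow_cost(example_run):.4f}")
print(f"Cost per successful outcome: ${cost_per_outcome(example_run, 1):.4f}")
```

Even with made-up numbers, the shape of the calculation is the point: most of the spend sits in steps the business user never sees.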

The future enterprise will therefore need to ask: which AI use cases deserve frontier models, which can use cheaper models, which should be self-hosted, which should use open-weight models, and which should not exist at all?

4. Model choice will become dynamic, not ideological

The second article makes clear that enterprises are already moving toward mixed-model strategies. Some use frontier models for mission-critical work, cheaper models for routine tasks, open-source or open-weight models for cost control, and self-hosted models for security or supply-chain reasons. The most mature companies will not be “OpenAI shops,” “Anthropic shops,” “Google shops,” or “open-source shops.” They will be model-routing organisations.

That means they will classify tasks by risk, value, sensitivity, and quality requirement. A legal analysis, clinical evidence summary, financial risk assessment, or scientific research workflow may justify a more expensive reasoning model. A product description, internal search query, classification task, meeting summary, translation, or basic support interaction may not. Some tasks may require a domain-specific model, some may require RAG over licensed content, and some may require no generative AI at all.

This is where large enterprises need to become more sophisticated. The question is not “which model is best?” The question is “which model is best for this task, at this risk level, with this data, under this latency requirement, at this acceptable cost, with this audit trail?”
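
A minimal sketch of what task-based routing could look like is below; the model tiers, risk levels, and thresholds are hypothetical placeholders rather than recommendations from either article.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Task:
    description: str
    risk: Risk
    contains_sensitive_data: bool
    max_latency_ms: int

# Hypothetical tiers; the names are placeholders, not vendor choices.
SELF_HOSTED = "self-hosted-open-weight"
CHEAP_HOSTED = "cheap-hosted-model"
FRONTIER = "frontier-reasoning-model"

def route(task: Task) -> str:
    """Pick a model tier from sensitivity, risk, and latency constraints."""
    # Sensitive data stays on infrastructure the enterprise controls.
    if task.contains_sensitive_data:
        return SELF_HOSTED
    # High-risk work (legal, clinical, financial) can justify frontier pricing.
    if task.risk is Risk.HIGH:
        return FRONTIER
    # Tight latency budgets favour small, fast models.
    if task.max_latency_ms < 500:
        return CHEAP_HOSTED
    return CHEAP_HOSTED if task.risk is Risk.LOW else FRONTIER

print(route(Task("summarise a meeting", Risk.LOW, False, 2000)))                 # cheap tier
print(route(Task("draft a clinical evidence summary", Risk.HIGH, False, 5000)))  # frontier tier
print(route(Task("classify internal HR tickets", Risk.MEDIUM, True, 1000)))      # self-hosted
```

In a real deployment the routing table would also encode cost ceilings, audit requirements, and fallbacks, but the principle is the same: the route is a property of the task, not of vendor loyalty.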

That will require internal benchmarking. Vendors will make claims, but enterprises will need their own evidence. They will need to test quality regressions, hallucination rates, refusal behaviour, citation accuracy, privacy controls, cost volatility, latency, and performance under real workflows. A 1 percent decline in quality may be irrelevant for low-risk marketing copy but unacceptable for legal, medical, financial, or scientific applications.
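
A toy version of such an evaluation harness might look like the following; the golden examples, scoring rule, and quality threshold are stand-ins for whatever an enterprise’s own workflows, documents, and regulatory obligations require.

```python
import statistics

# A tiny golden set drawn from the enterprise's own workflows; these examples
# and the scoring rule are illustrative stand-ins, not a real benchmark.
GOLDEN_SET = [
    {"prompt": "What is our standard refund window?", "expected": "30 days"},
    {"prompt": "Which regulation governs EU personal data transfers?", "expected": "GDPR"},
]

def score(answer: str, expected: str) -> float:
    """Crude exact-substring scorer; real harnesses use rubrics or model-graded checks."""
    return 1.0 if expected.lower() in answer.lower() else 0.0

def evaluate(call_model, threshold: float = 0.95) -> bool:
    """Run the golden set through a candidate model and flag quality regressions."""
    scores = [score(call_model(item["prompt"]), item["expected"]) for item in GOLDEN_SET]
    mean = statistics.mean(scores)
    print(f"mean quality: {mean:.2f} (threshold {threshold})")
    return mean >= threshold

# `call_model` would wrap whichever vendor or self-hosted model is under test.
def fake_model(prompt: str) -> str:
    return "Refunds are accepted within 30 days." if "refund" in prompt else "GDPR applies."

assert evaluate(fake_model)  # re-run on every model change, prompt change, or vendor upgrade
```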

The future belongs to enterprises that build their own evaluation infrastructure.

5. AI will increase dependence on platform gatekeepers unless enterprises deliberately resist it

The two articles together reveal a dangerous concentration dynamic. On the customer side, AI platforms may control discovery, recommendation, ordering, payment, and post-sale interaction. On the infrastructure side, frontier model providers may control the reasoning layer, pricing, availability, rate limits, and tool ecosystem.

That creates a double dependency. Enterprises may rely on the same handful of companies both to reach customers and to power internal operations. This is strategically risky. A change in pricing, ranking, API access, model behaviour, safety policy, data policy, or commercial terms could materially affect revenue, customer experience, compliance, and operational continuity.

This will be particularly acute for enterprises that rush into agentic integration without contractual protections. If an AI platform becomes the interface through which customers buy products or access services, the enterprise must know: who owns the customer data, who controls the recommendation logic, who is liable for errors, who controls discounts, who can change the interface, who can insert ads, who gets attribution, who retains logs, and who can train on the interaction data.

For publishers, retailers, banks, healthcare companies, airlines, and universities, this is not a technical procurement issue. It is a strategic control issue.

6. The customer may not want all this automation

One of the strongest themes in the retail article is that the AI future being sold by vendors may not match what customers actually want. A holographic AI greeter may attract attention, but it may not improve the product. A pizza-ordering assistant that invites users to upload a group photo to count diners may be technically clever but socially absurd. AI chatbots on every ordering page may create friction rather than remove it. AI-generated retail experiences may become “slop”: scalable, cheap, optimised, and emotionally empty.

This matters for all large enterprises. AI adoption is often discussed as if automation is inherently progress. But customers often value reliability, speed, human escalation, tactility, trust, beauty, status, privacy, and emotional connection. The Equapack example in the retail article is powerful because it reminds us that some forms of value are physical, human, and experiential. A well-designed bag, a reliable product, a helpful employee, a trusted expert, or a clear explanation may matter more than another AI layer.

The future will punish companies that confuse automation with value. AI should not be deployed because it is impressive in a demo. It should be deployed where it improves a real outcome: lower cost, higher accuracy, faster service, better accessibility, better personalisation, reduced fraud, improved safety, better discovery, or more meaningful customer experience.

The enterprise mantra should be: do not add AI where it only adds abstraction.

7. The workforce impact will be indirect, uneven, and politically sensitive

The retail article notes that some companies prefer non-human holographic characters partly to avoid the perception that AI is replacing workers. That is telling. Enterprises know there is anxiety around AI and employment, but many are trying to manage perception rather than confront the underlying issue.

The workforce impact will not be a simple story of “AI replaces jobs.” It will be more uneven. AI will replace tasks, compress workflows, reduce demand for some junior roles, increase monitoring of workers, shift value toward those who can supervise or integrate AI, and create new operational dependencies. In retail, this may affect customer service, merchandising, store analytics, inventory planning, marketing, and e-commerce operations. In large enterprises more broadly, it will affect legal, finance, HR, software development, procurement, sales, customer support, compliance, research, and communications.

The real issue is not whether every job disappears. It is whether organisations redesign work responsibly or simply use AI to intensify productivity demands, reduce headcount, and hollow out learning pathways. If entry-level work is automated without redesigning career development, enterprises may later discover that they have destroyed their own talent pipeline.

This will become a governance and legitimacy issue. Employees will accept AI more readily where it removes drudgery, improves quality, and gives them more agency. They will resist it where it becomes surveillance, deskilling, speed-up, or a pretext for cuts.

8. Regulation will increasingly target AI’s real-world effects, not just model behaviour

The topics in the two articles point toward future regulatory pressure in several areas: biometric and behavioural surveillance, dark patterns, automated decision-making, consumer manipulation, advertising transparency, data retention, AI-generated recommendations, agentic purchases, platform self-preferencing, unfair competition, price discrimination, and accountability for AI-mediated transactions.

For large enterprises, the most dangerous assumption would be that AI regulation is only about model providers. In practice, regulators will also scrutinise deployers — especially where AI affects consumers, employees, patients, students, citizens, or vulnerable groups.

AI shopping agents raise questions about disclosure, liability, ranking fairness, advertising, commissions, and consumer choice. In-store tracking raises questions about consent, biometric inference, profiling, and data minimisation. Enterprise AI cost pressures raise questions about silent model downgrades and quality regressions. AI-generated customer interfaces raise questions about deception, accessibility, and human escalation. Agentic purchasing raises questions about authority, mistake correction, refunds, and contract formation.

The safest enterprises will not wait for regulation. They will build governance around foreseeable harms now.

9. The future: AI everywhere, but not evenly, not cheaply, and not without backlash

The combined picture is not that AI will fail. It is that AI will become infrastructural — but also more expensive, more contested, more regulated, more concentrated, and more operationally complex.

Over the next three to five years, large enterprises should expect:

AI interfaces to become a major route to customers, reducing direct website and app dependence but increasing platform dependency.

Agentic commerce to grow, with AI assistants comparing, recommending, ordering, negotiating, and executing transactions.

AI visibility optimisation to become a new marketing discipline, replacing or supplementing SEO.

Token costs and AI usage bills to become material budget items, especially for agentic and reasoning-heavy workflows.

Model routing to become standard, with enterprises mixing frontier, cheaper, open-weight, self-hosted, and domain-specific models.

Privacy and surveillance controversies to intensify as physical spaces become AI-measured environments.

Enterprise procurement to shift from “AI capability” to “AI economics, auditability, resilience, and control.”

Regulators to focus more on consumer manipulation, biometric inference, automated decisions, competition, platform power, and accountability.

Customer backlash to grow against pointless automation, creepy personalisation, AI slop, and loss of human service.

The strongest companies will not be those that deploy the most AI. They will be those that deploy AI where it creates defensible value, maintain control over critical interfaces, understand their cost base, protect customer trust, and avoid becoming dependent on opaque platforms they cannot govern.

Recommendations for large enterprises

1. Build an AI value map, not an AI enthusiasm map

Every AI initiative should be mapped against business value, risk, cost, and customer impact. The key question is not “can we automate this?” but “does automation improve the outcome?” Enterprises should prioritise use cases that reduce measurable pain, improve quality, increase trust, lower operational cost, or create genuinely better customer experiences.

2. Create an AI FinOps function

AI usage must be financially governed like cloud infrastructure. Enterprises should track token consumption, cost per workflow, model mix, hidden reasoning costs, agentic background activity, vendor pricing exposure, and cost per successful business outcome. AI budgets should not sit invisibly inside innovation teams until the invoices become unmanageable.
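
One way to keep that spend visible is to attribute every model call to the team that triggered it and alert before a budget is breached, as in the sketch below; the team names and budget figures are purely illustrative.

```python
from collections import defaultdict

# Illustrative monthly AI budgets per team in USD; the figures are assumptions.
BUDGETS = {"customer-support": 20_000, "engineering": 50_000, "marketing": 5_000}

spend = defaultdict(float)

def record_usage(team: str, cost_usd: float) -> None:
    """Attribute each model call's cost to the team that triggered it."""
    spend[team] += cost_usd
    budget = BUDGETS.get(team)
    if budget and spend[team] > 0.8 * budget:
        # In practice this would notify finance or a FinOps dashboard, not print.
        print(f"WARNING: {team} has used {spend[team] / budget:.0%} of its monthly AI budget")

record_usage("marketing", 3_000)
record_usage("marketing", 1_500)  # crosses the 80 percent alert threshold
```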

3. Adopt a multi-model architecture

Do not become dependent on one frontier model provider. Build a model-routing layer that can direct tasks to the right model based on sensitivity, quality need, cost, latency, and compliance requirements. Use frontier models only where their superior performance justifies the price. Use smaller, cheaper, open-weight, or self-hosted models where appropriate.

4. Build internal AI evaluation infrastructure

Enterprises need continuous benchmarking of model quality, hallucination rates, citation accuracy, refusal behaviour, latency, privacy controls, security, and cost. Do not rely solely on vendor benchmarks. Test models against your own workflows, documents, customers, regulatory obligations, and failure modes.

5. Protect the customer relationship

When integrating with AI agents, negotiate hard on data rights, attribution, ranking transparency, transaction logs, customer ownership, commercial terms, and liability. Do not allow your brand to become a silent fulfilment layer inside someone else’s assistant without understanding the long-term strategic consequences.

6. Treat behavioural tracking as high-risk infrastructure

In-store analytics, biometric inference, attention tracking, and personalised physical-space advertising should be governed through privacy impact assessments, legal review, customer transparency, retention limits, opt-out mechanisms, and strict vendor controls. Metadata can be sensitive even when raw video is deleted.

7. Avoid pointless AI layers

Do not add a chatbot, avatar, assistant, or agent merely because competitors are doing so. Many customers do not want more automation; they want clarity, speed, reliability, good products, fair prices, and easy escalation to a human. AI should remove friction, not create a performative layer between the customer and the service.

8. Contract for auditability and control

Enterprise AI contracts should address pricing changes, model substitutions, rate limits, data retention, training use, logging, security controls, service continuity, incident notification, regulatory cooperation, and exit rights. If the vendor can silently change the model, the cost, the quality, or the rules, the enterprise does not have a stable control environment.

9. Prepare for regulatory scrutiny now

Legal, compliance, privacy, security, procurement, product, and customer-experience teams should jointly review AI deployments before launch. Particular attention should be paid to consumer manipulation, automated purchasing, profiling, biometric inference, children and vulnerable users, advertising disclosures, and human escalation rights.

10. Preserve human value where it matters

The future enterprise should not be anti-AI, but it should be anti-slop. Human judgment, design, empathy, accountability, domain expertise, and physical product quality will become more valuable precisely because AI will make synthetic interactions cheap and abundant. The companies that retain trust will be those that know when not to automate.

Conclusion

The future described in the two Verge articles is not simply “AI everywhere.” It is AI as interface, AI as tollbooth, AI as surveillance layer, AI as cost centre, AI as platform dependency, and AI as a test of corporate judgment.

Large enterprises are entering a phase where AI will be unavoidable but no longer cheap, magical, or consequence-free. The early period of subsidised experimentation is ending. The next period will be defined by economics, governance, trust, differentiation, and control.

The winners will not be the companies that paste AI onto every surface. The winners will be the companies that understand where AI genuinely improves value, where it damages trust, where it creates dependency, where it should be constrained, and where the human experience remains the product.