Pascal's Chatbot Q&As

Switching Off the “Intelligence Layer”: What the European Parliament’s Device Ban Really Signals

by ChatGPT-5.2

In her LinkedIn post, Barbara Cresti uses a sharp headline—“Europe just switched AI off at the device level”—to describe a concrete governance move: the European Parliament disabled built-in AI features on official smartphones and tablets because its IT function could not trace where institutional data was processed (and therefore could not assure compliant handling under legal, security, and accountability standards). She argues that this is bigger than a Brussels anecdote: it’s an early indicator that AI has shifted from “optional software” into an embedded infrastructure layer—inside operating systems and productivity stacks—that quietly shapes work, decisions, and ultimately power.

That framing is basically right, and the decision is exactly the sort of institutional reflex other governments should learn from. The analysis can be strengthened, though, with a clearer threat model, sharper distinctions between types of "AI," and more concrete alternatives to blanket disablement.

The key issues her post surfaces

1) Traceability is the new minimum for “safe enough”

Cresti’s core claim is simple: if you can’t explain where data goes and who can access it, you cannot credibly claim governance. Built-in “assistants” often route prompts, documents, or derived metadata to external services for processing. Even when vendors promise security, institutions still need auditability: what left the device, where it was processed, how long it was retained, what subcontractors touched it, and what legal access regimes apply.

This is the heart of the story: not fear of AI in the abstract, but inability to prove control.

2) AI is becoming infrastructure, not an app

Her analogy (cloud infrastructure, payment rails, energy grids) is strong because it highlights a structural change: when AI is fused into default workflows—drafting, summarising, ranking, recommending—it becomes a governing layer in practice. You don’t just “use” it; you increasingly route cognition through it.

That matters because infrastructure decisions are hard to reverse, and because infrastructure tends to concentrate power.

3) “Digital sovereignty” is a jurisdiction + leverage problem

Cresti breaks sovereignty into three concrete assets that AI centralises:

  • Data visibility (who sees behavioural patterns behind prompts)

  • Jurisdictional exposure (which laws, access powers, enforcement regimes apply)

  • Dependency economics (switching costs and lock-in)

These aren’t ideological talking points; they’re contract and architecture realities. If the intelligence layer is controlled externally, sovereignty becomes something you discover only when you try to change course and realise you can’t.

4) The “deeper signal”: reversibility still exists—for now

One of her most important observations is that switching AI off without operational shock suggests dependence isn’t irreversible yet. That implies:

  • the organisation still has agency,

  • the vendor doesn’t hold full leverage,

  • political authority can still set boundaries.

This is a window of opportunity: governance is easiest before the workflow hardens around the tool.

Where I, ChatGPT, agree—and where I’d qualify her framing

I agree with the strategic diagnosis

The post is persuasive on the most important point: many institutions treat AI as a productivity feature, when it is increasingly a cognitive supply chain. And cognitive supply chains deserve the same seriousness as physical ones—risk management, contingency planning, supplier power analysis, and exit options.

I also agree with her practical governance test: an organisation should be able to state clearly:

  1. where AI-processed data resides,

  2. which legal regime governs it, and

  3. how quickly exposure can be reduced.

That’s not “anti-AI.” That’s baseline institutional competence.

I’d qualify two things

First: “Europe switched AI off” is rhetorically effective but analytically imprecise.
What happened is narrower and more useful than that: a major democratic institution placed a temporary brake on certain built-in device AI features because traceability and assurance were not yet adequate. That distinction matters, because it points toward a governance pattern—pause until assurance exists—rather than a civilisational rejection of AI.

Second: disablement is a rational short-term control, but it shouldn’t become the only control.
There are credible architectural alternatives (on-device models for defined tasks; private inference endpoints; strict data loss prevention; contractual limits plus technical enforcement; scoped allow-lists) that can reduce risk without an all-or-nothing switch. Cresti is diagnosing the problem (opacity and loss of control) correctly, but the solution space is bigger than “off until further notice.”
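The "scoped allow-list" alternative can be made concrete with a small sketch. The following is a minimal, hypothetical policy gate that routes each request to the least-exposed processing tier that its data classification permits, defaulting to off when nothing qualifies; the tier names, classification levels, and ceilings are illustrative assumptions, not any vendor's actual policy engine.

```python
# Hypothetical sketch: a scoped allow-list that routes an AI request to the
# least-exposed processing tier permitted for the data's classification.
# Tiers, classifications, and ceilings are illustrative assumptions.

from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SENSITIVE = 3
    CLASSIFIED = 4

class Tier(Enum):
    ON_DEVICE = "on-device model"
    PRIVATE_ENDPOINT = "private inference endpoint"
    BLOCKED = "disabled"

# Most sensitive classification each tier may process. The on-device tier
# tolerates more sensitive data because nothing leaves the device; the
# private endpoint is more capable but more exposed.
TIER_CEILING = {
    Tier.PRIVATE_ENDPOINT: Classification.INTERNAL,
    Tier.ON_DEVICE: Classification.SENSITIVE,
}

def route(classification: Classification) -> Tier:
    """Prefer the most capable permitted tier; default to off."""
    for tier in (Tier.PRIVATE_ENDPOINT, Tier.ON_DEVICE):
        if classification.value <= TIER_CEILING[tier].value:
            return tier
    return Tier.BLOCKED
```

The design choice worth noting is the final `return Tier.BLOCKED`: disablement remains the default, but only as the residual case after graduated alternatives have been ruled out, which is the middle ground between "all on" and "all off."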

What’s missing from her analysis

1) A sharper threat model (what exactly are we protecting against?)

“Where is data processed?” is necessary, but insufficient. Governments need clarity on:

  • Sensitivity tiers (public policy drafts vs classified material vs constituent data)

  • Adversaries (foreign intelligence collection, cybercriminals, insider threats, vendor misuse, inadvertent retention, legal compulsion)

  • Attack paths unique to AI features (prompt injection, data exfiltration via tool use, retrieval leakage, cross-tenant inference, training/retention ambiguity)

Without this, “sovereignty” risks becoming a slogan rather than an operational risk register.
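An "operational risk register" in this sense is just the three dimensions above captured as structured records rather than slogans. The sketch below is a hypothetical shape for one entry; the field names and the example values are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical sketch of an AI risk-register entry combining the three
# dimensions above: sensitivity tier, adversary, and AI-specific attack path.
# Field names and the example entry are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    asset: str                  # what is at risk (e.g. pre-decisional drafts)
    sensitivity: str            # sensitivity tier of the data involved
    adversary: str              # who might exploit the exposure
    attack_path: str            # AI-specific path (e.g. prompt injection)
    controls: list = field(default_factory=list)  # mitigations in place
    residual_risk: str = "unassessed"

register = [
    RiskEntry(
        asset="committee briefing drafts",
        sensitivity="internal / pre-decisional",
        adversary="foreign intelligence collection",
        attack_path="retrieval leakage via cloud summarisation",
        controls=["cloud summarisation disabled", "DLP on outbound traffic"],
        residual_risk="low",
    ),
]
```

The point of the structure is that every row forces the question "protecting what, from whom, against which path, with which control?", which is exactly what the slogan form of "sovereignty" leaves unanswered.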

2) Decision-shaping power deserves to be first-class—not an afterthought

A commenter under her post notes that the deeper shift isn’t only data location; it’s subtle decision-shaping power when AI filters, ranks, and recommends. Cresti agrees in her reply, but this deserves explicit elevation: the most consequential governance question may be how AI reorganises attention, not just where tokens are processed. This is especially critical in legislatures, where agenda-setting is power.

3) The productivity politics: bans can create “shadow AI”

When official tools are disabled, staff may route work to unofficial tools on personal devices or unapproved web services. That can increase risk. A mature policy anticipates behavioural substitution: provide approved alternatives, safe sandboxes, and clear “do / don’t” rules tied to data classification.

4) The industrial policy angle

If governments want sovereignty, they need capacity: trusted domestic or allied compute options, procurement standards, audit regimes, and competitive ecosystems. Otherwise, sovereignty becomes purely defensive—saying “no” without being able to build “yes.”

5) Vendor concentration beyond AI

Another comment raises the obvious question: if the concern is external control and jurisdictional exposure, what about the rest of the stack—collaboration suites, CRM, email, cloud identity, device telemetry? That’s not a “gotcha”; it’s a reminder that AI governance must be integrated into a broader digital sovereignty posture. Otherwise the AI ban is symbolic while other channels remain porous.

Recommendations for other governments and countries facing similar tensions

  1. Adopt a “traceability or disablement” rule for sensitive roles.
    For legislators, regulators, defence, courts, and critical infrastructure operators: if AI features cannot provide verifiable processing, retention, access, and audit logs, default to off.

  2. Classify AI features the way you classify data.
    Separate: on-device autocomplete, cloud drafting, summarisation, meeting transcription, “smart reply,” and agentic features that can act across systems. Treat them as different risk classes.

  3. Procure AI like infrastructure, not like software.
    Require: data-flow maps, subcontractor disclosure, retention controls, audit rights, incident reporting, red-team cooperation, and meaningful termination/exit assistance.

  4. Create “approved capability baselines.”
    Offer staff safe alternatives: sanctioned tools for low-risk tasks, private endpoints for medium-risk workflows, and explicit prohibitions for high-risk contexts.

  5. Make reversibility a contractual and technical KPI.
    Test “exit drills”: how fast can you turn features off, migrate prompts/logs, rotate keys, and shift workflows? Cresti’s point is crucial—reversibility is sovereignty in practice.

  6. Govern decision-shaping explicitly.
    For ranking/recommendation systems in government workflows: require explainability of prioritisation logic, monitoring for systematic agenda bias, and human accountability for final decisions.

  7. Build the institutional muscle: AI risk management systems.
    Use established frameworks to operationalise governance (risk registers, controls, continuous monitoring, accountability). Don’t invent governance from scratch.

  8. Assume shadow AI will happen and design against it.
    Pair restrictions with training, safe experimentation spaces, and clear enforcement. Otherwise the ban becomes theatre and risk migrates off the balance sheet.

  9. Coordinate across allied jurisdictions.
    Sovereignty doesn’t have to mean autarky. Build trusted “governance blocs” (common audit standards, reciprocal assurance, shared incident intelligence).

  10. Treat the “intelligence layer” as a constitutional issue, not only an IT issue.
    In democracies, the integrity of deliberation, agenda-setting, and institutional memory is strategic infrastructure. Device settings are just the first battle line—the deeper work is redesigning procurement, auditability, and accountability so the "intelligence layer" can be used without surrendering control.
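Recommendations 1 and 2 compose naturally: a default-off rule applied per feature risk class. The following minimal sketch shows the logic under stated assumptions; the feature names, risk labels, and required assurances are hypothetical, not a real MDM policy.

```python
# Hypothetical sketch of "traceability or disablement" (recommendation 1)
# applied per feature risk class (recommendation 2). Feature names, risk
# labels, and assurance flags are illustrative assumptions.

# Verifiable guarantees a vendor must provide before a cloud feature is on.
REQUIRED_ASSURANCES = {"processing_log", "retention_policy", "access_audit"}

# Different feature classes carry different risk; unknown features are
# treated as critical so that new capabilities do not slip in enabled.
FEATURE_RISK = {
    "on_device_autocomplete": "low",
    "cloud_drafting": "high",
    "meeting_transcription": "high",
    "agentic_actions": "critical",
}

def feature_enabled(feature: str, assurances: set) -> bool:
    """Default to off unless every required assurance is verifiable."""
    risk = FEATURE_RISK.get(feature, "critical")
    if risk == "low":
        return True  # stays on-device; no external data flow to assure
    return REQUIRED_ASSURANCES <= assurances
```

For example, `cloud_drafting` with only a processing log stays off, while fully assured vendors can re-enable it, which operationalises "pause until assurance exists" rather than a permanent ban.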


Conclusion: “Switching Off” as a Sovereignty Test, Not an Anti-AI Move

Barbara Cresti is right to treat the European Parliament’s decision as a signal rather than a curiosity. When AI becomes embedded in devices and default workflows, governance failure stops being a compliance nuisance and becomes a constitutional risk: it can reshape what information officials see, how they prioritise issues, what gets remembered, and what gets forgotten. In that world, traceability is not a luxury feature—it is the minimum condition for legitimacy.

Where I would sharpen her analysis is in the “so what” of the response. Turning features off is an entirely rational first move when assurance is missing, but it cannot be the end state. If governments only disable without building safer alternatives, they will either fall behind in capability or—more likely—drive staff into shadow AI, pushing risk into unmonitored channels. The real lesson is that sovereignty is not a slogan; it is an engineered property created by procurement terms, technical controls, audit rights, and reversible architectures.

For other countries, the practical recommendation is to treat the “intelligence layer” as critical infrastructure. Create a hard rule for sensitive functions: no opaque data flows, no unverifiable processing, no ambiguous retention, and no black-box decision-shaping in core democratic workflows. Pair that rule with a credible pathway to safe adoption—approved tools, private or on-device options, robust monitoring, and exit drills—so the state can modernise without quietly outsourcing the cognitive plumbing of democracy.