
Deployment of genAI systems in courts constitutes a de facto delegation of public power that engages, and potentially threatens, the constitutional principles that structure the rule of law in the EU.

It implicates the fundamental right to an effective judicial remedy under Article 47 CFR, judicial independence, the duty to state reasons, and essential conditions for accountable governance.


AI as De Facto Delegation of Judicial Power — Constitutional Constraints and Systemic Risks

by ChatGPT-5

Irina Carnat’s article offers one of the most rigorous, doctrinally grounded examinations to date of how generative AI challenges the constitutional architecture governing judicial decision-making. Her central claim is sharp: the deployment of generative AI systems in courts constitutes a de facto delegation of public power that engages, and potentially threatens, the constitutional principles that structure the rule of law in the European Union. Specifically, she argues that the use of generative AI by judges—even for seemingly ancillary tasks—implicates the fundamental right to an effective judicial remedy under Article 47 of the Charter of Fundamental Rights, judicial independence, the duty to state reasons, and the essential conditions for accountable governance.

What the article says

The article unfolds in four parts:

  1. State of the art: from data governance to human-computer interaction.
    Carnat highlights that generative AI (GenAI) introduces qualitatively new risks beyond bias and data-protection concerns. Because GenAI systems generate outputs unpredictably, operate with high autonomy, and interact conversationally with human decision-makers, the risks stem not only from data quality but from human-machine interactions—including automation bias, misplaced trust, and opacity. She emphasises that documented cases exist across Europe and Latin America where judges have already used ChatGPT for substantive decision-making tasks, often without safeguards.

  2. Constitutional constraints on delegation.
    With extensive reference to Hofmann’s “cyber-delegation” analysis and the Meroni doctrine, Carnat explains that any delegation of discretionary power must be circumscribed by explicit law, and that core balancing decisions involving fundamental values cannot be transferred to AI systems. AI-assisted judging must therefore be constrained by enforceable rules that preserve transparency, reason-giving, contestability, and judicial independence.

  3. Operationalising constraints through the AI Act.
    Carnat analyses key provisions of the EU AI Act—risk classification, fundamental rights impact assessments (FRIA), the right to explanation, human oversight (Art. 14), and AI literacy (Art. 4)—and argues that these can be used to translate abstract constitutional principles into concrete, enforceable responsibilities. Importantly, she identifies a regulatory lacuna: general-purpose AI (GPAI) systems like ChatGPT evade clear high-risk classification because the AI Act classifies risk by intended purpose, not by actual influence. When judges use such tools informally, the Ministry of Justice may inadvertently become a “provider” under the Act.

  4. An algorithmic accountability framework.
    The article synthesises constitutional doctrine and the AI Act’s risk-management obligations into a multilayered accountability framework that distributes responsibilities across the AI value chain: providers, deployers, and market surveillance authorities. Carnat argues that accountability must be continuous, lifecycle-based, and procurement-aware—especially given the risk that profit-driven AI vendors might prioritise efficiency over constitutional safeguards.

Do I agree with the argument?

Broadly, yes. Carnat’s analysis is compelling on several grounds:

  • She correctly identifies that the combination of GenAI’s “agentic” behaviour and judicial reliance generates structural risks to judicial independence and procedural fairness.

  • Her insistence that a system’s autonomy does not lessen human accountability is doctrinally sound and consistent with the EU’s legal architecture.

  • She makes a powerful case that constitutional principles must be “operationalised” through real compliance mechanisms—not left as abstract values.

If anything, the article is cautious: some constitutional scholars would argue that any use of generative AI in judicial decision-making that materially influences outcomes is incompatible with Article 47 and the Meroni doctrine, regardless of safeguards. Carnat stops short of calling for outright prohibition, instead proposing a governance framework.

Most surprising statements

  1. Judges already used ChatGPT for substantive decisions
    Carnat cites real cases in which judges used ChatGPT to determine autism-therapy coverage, inform bail decisions, obtain procedural guidance, calculate child support, and even draft verdicts. This is extraordinary and somewhat shocking.

  2. A judge or ministry may become a “provider” under the AI Act
    If a general-purpose model is used in a manner that influences outcomes, the deployer can legally become the “provider,” inheriting the full compliance burden. This is a provocative interpretation with far-reaching implications.

  3. Human oversight risks turning judges into scapegoats
    Carnat warns that if oversight is poorly designed, judicial officers will shoulder responsibility for algorithmic harms even when those stem from opaque models or upstream design flaws.

Most controversial claims

  1. Generative AI constitutes a de facto delegation of judicial power
    This will be resisted by technologists, who may argue that AI only “supports” decision-making. Carnat insists that influence equals delegation. She is right, but the assertion challenges current AI deployment practices.

  2. GPAI systems evade high-risk classification and thus jeopardise judicial independence
    This touches a political nerve: the AI Act’s risk-based framework is not fully aligned with constitutional realities. Her critique is blunt and will prompt debate.

  3. Constitutional constraints must apply throughout the AI value chain
    This expands constitutional principles beyond state actors to private vendors—a controversial but increasingly necessary position.

Most valuable contributions

  1. A unified algorithmic accountability framework
    The lifecycle approach that integrates FRIA, human oversight, explanation duties, and AI literacy is both intellectually coherent and practically actionable.

  2. Reframing AI literacy as a constitutional requirement
    Carnat argues convincingly that judges cannot fulfil their duty to state reasons unless they understand how AI works.

  3. Identification of systemic regulatory gaps in the AI Act
    Particularly around GPAI systems and the intended-purpose doctrine.

Recommendations

For Regulators

  1. Close the GPAI loophole.
    Amend or interpret the risk classification rules so that actual influence triggers high-risk designation, regardless of “intended use.”

  2. Mandate judicial-context-specific safeguards.
    Require any AI system touching judicial decision-making—even indirectly—to meet stricter transparency, explainability, and documentation standards.

  3. Strengthen FRIA enforcement.
    Ensure that FRIAs are mandatory and public for judicial deployments, with independent audits and continuous monitoring.

  4. Standardise AI literacy training for judicial actors.
    Treat AI literacy as a constitutional safeguard, not an optional skill.

  5. Regulate procurement rigorously.
    Public procurement of AI for courts must include contractual guarantees: access to logs, model-version tracking, incident reporting, and auditability.

For the Judiciary

  1. Prohibit unsanctioned use of consumer-grade AI tools.
    Judges should never use public LLMs for legal reasoning, drafting, or decision-making without specific institutional approval.

  2. Enforce the duty to state reasons with heightened scrutiny.
    Any AI-assisted reasoning must be explainable: not only the judge’s conclusion but also the AI’s influence on it must be transparent and documented.

  3. Adopt a “meaningful human control” doctrine.
    Judges must understand and critically interrogate AI outputs; blind reliance is incompatible with Article 47 and judicial independence.

  4. Ensure audit trails and contestability.
    Maintain records of when and how AI systems were consulted to preserve the right to appeal and effective remedies (a minimal illustrative sketch of such a record follows this list).

  5. Develop internal ethics guidelines.
    Clarify permissible and impermissible uses of AI within judicial chambers.
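To make the audit-trail recommendation above concrete, here is a minimal sketch of what a single consultation record could look like. It is purely illustrative: the field names, the Python representation, and the append-only JSON-lines storage are assumptions, not requirements drawn from Carnat’s article or the AI Act.

```python
# Illustrative sketch only: one hypothetical record of an AI consultation in a
# judicial workflow. All field names and the storage format are assumptions.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIConsultationRecord:
    case_id: str            # court's internal case reference
    judge_id: str           # pseudonymous identifier of the judicial officer
    tool_name: str          # which AI system was consulted
    model_version: str      # exact model and version used
    purpose: str            # task performed (research, drafting, calculation, ...)
    prompt_summary: str     # short description of what was asked
    output_influenced_decision: bool  # whether the output shaped the ruling
    human_review_note: str  # how the judge verified or corrected the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: AIConsultationRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append one consultation record to an append-only JSON-lines log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")
```

Whatever the concrete format, the point is that such records make the AI’s role in a decision reconstructable on appeal, which is what contestability requires.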

Conclusion

Carnat’s article delivers a necessary wake-up call. GenAI systems are no longer speculative threats—they are already entering courtrooms and influencing legal outcomes. She demonstrates that the use of generative AI in judicial contexts is not just a technical issue but a constitutional one. Her analysis convincingly shows that without robust, enforceable safeguards, generative AI risks undermining judicial independence, due process, and the right to an effective remedy.

The core message is clear: cognitive autonomy does not neutralise legal accountability. The ability of AI systems to generate text or recommendations does not shift responsibility away from human actors; it merely obscures where responsibility should lie. Regulators and judicial authorities must act now to preserve the rule of law in an age where algorithmic systems are capable of exercising de facto public power.