In every sector where language, ethics, and human judgment matter, the blind adoption of generative AI is not just risky—it is a quiet surrender of the very faculties that make us human.
Foundational values—integrity, responsibility, and democratic reasoning—are incompatible with the shortcuts offered by opaque, unaccountable algorithms. Not all tools belong in all contexts.
Safeguarding the Rule of Law from Generative AI – A Caution from the Legal Academy
by ChatGPT-4o
In their compelling opinion piece “Don’t Hand Over the Rule of Law to ChatGPT: Ban AI at the Law Faculty,” legal scholars Joris van Laarhoven and Eva van Vugt deliver a powerful critique of the integration of generative AI—such as ChatGPT—into legal education. Their central thesis is unambiguous: allowing AI tools in legal academia threatens the intellectual and democratic foundations of the legal profession. They call for a ban on the use of generative AI at law faculties, and their arguments merit close attention, not just within legal circles, but across other knowledge-critical domains such as education, healthcare, finance, and scientific research.
Van Laarhoven and Van Vugt advance their position through several interwoven concerns:
Legal Practice Is a Linguistic Craft
Law is inherently a language-based discipline. Lawyers must independently develop the ability to read, interpret, argue, and write with precision. Generative AI, which produces fluent but often factually inaccurate or misleading content, undermines the cultivation of these skills by tempting students and professionals to offload critical thinking to machines.
Erosion of Cognitive Development
Legal education is not about producing texts, but about developing thinking. Using AI short-circuits this process. Students fail to learn from their mistakes, forgo personal development, and miss the deep conceptual grasp that comes only through sustained, self-driven effort.
Integrity of Legal Institutions
The legitimacy of law hinges on trust in its practitioners. A generation of lawyers trained with shortcuts may produce poorly reasoned judgments, increasing public skepticism and weakening democratic institutions.
Epistemic and Ethical Risk of AI Hallucination
Language models “guess” plausible words without understanding meaning. This bluffing style aligns disturbingly well with the disinformation tactics of autocrats and undermines epistemic authority. It is also a risk in education, where accuracy and reasoning are paramount.
Structural Concerns: Big Tech, Autocracy, and Control
The authors warn of the growing alliance between Big Tech firms and authoritarian regimes, arguing that allowing these technologies to become entrenched in legal education is tantamount to surrendering the rule of law to unaccountable private interests.
Plagiarism and Intellectual Theft
AI systems are trained on massive, often unauthorized datasets. Their use in law schools, where intellectual integrity should be paramount, risks normalizing plagiarism and muddying the line between original thought and machine-generated synthesis.
The Precautionary Principle
Drawing from environmental law, the authors propose an “in dubio contra machina” approach: in cases of doubt, ban the machine. Until rigorous and transparent frameworks for responsible AI use exist, abstention is the only responsible path.
Additional Supporting Arguments
The authors’ concerns are rooted in legal education, but their implications extend more broadly. Several additional points reinforce their argument:
Loss of Accountability and Professional Responsibility: Legal systems rely on clearly identifiable human agents—judges, lawyers, prosecutors—who are responsible for their actions. If AI enters the legal chain of reasoning, it muddies liability and erodes responsibility.
Cognitive Atrophy: Overreliance on generative AI can lead to cognitive dependency, where even simple argumentation or analytic thinking becomes outsourced. This affects not only competence but confidence, which is essential in high-stakes professions.
Undermining Legal Diversity and Argumentative Pluralism: AI systems are trained on dominant discourses, often in English and drawn from Western legal systems. Their use risks homogenizing thought, undermining national legal cultures and minority perspectives.
Tool Fetishism and Market Logic in Education: Institutions are increasingly seduced by the efficiency rhetoric of AI vendors. This techno-solutionism undermines the slow, deliberative, and dialogic essence of legal and academic processes.
Broader Implications for Other Critical Sectors
The authors’ thesis offers essential lessons for fields beyond law:
Education: The core aim of education is to cultivate critical and independent thinking. Generative AI tempts students to replace effort with imitation. This undermines intellectual formation and, ultimately, citizenship in a democratic society.
Healthcare: Medical decision-making demands ethical reasoning, accountability, and trust. Overreliance on AI may desensitize practitioners, dilute empathy, and introduce opaque risk where human life is at stake.
Finance: AI tools in finance can obscure responsibility for misjudgments and increase systemic risks. Without skilled human oversight, decisions may favor short-term optimization over long-term stability and fairness.
Scientific Research: Scholarship requires originality, falsifiability, and evidence-based reasoning. Using generative AI for writing or synthesis risks reproducing existing errors and biases, or introducing outright hallucinations, undermining the credibility of science.
Recommendations
For Legal Professionals and Educators:
Adopt Clear Policies Prohibiting Generative AI Use in Core Legal Education
Establish strict guidelines barring the use of AI for writing assignments, legal briefs, or thesis preparation.
Invest in Human-Centered Legal Training
Emphasize Socratic methods, moot courts, and slow reading to preserve the depth and nuance of legal reasoning.
Promote AI Literacy without Normalization
Educate students about AI’s limitations and risks without implying that its use is expected or inevitable.
Develop Human-Centered Alternatives to AI Tools
Encourage peer mentorship, writing centers, and collaborative research as support mechanisms.
For Regulators:
Mandate Transparency in AI Use in Legal and Educational Institutions
Require full disclosure of when, where, and how AI tools are used in decision-making or academic production.
Enforce the Precautionary Principle in AI Deployment
Especially in high-risk sectors, developers and institutions should bear the burden of proof in demonstrating AI safety and necessity.
Support Independent AI Oversight Bodies
Create watchdog institutions at national and international levels to audit AI systems used in justice, education, and research.
Legislate against Training AI on Illegally Obtained Data
Protect authors, scholars, and publishers by enforcing intellectual property rights against unauthorized training practices.
Conclusion
Van Laarhoven and Van Vugt provide an urgent, principled, and intellectually grounded defense of the rule of law against the encroachment of generative AI. Their call is not reactionary, but rooted in the understanding that foundational values—integrity, responsibility, and democratic reasoning—are incompatible with the shortcuts offered by opaque, unaccountable algorithms. Their essay should not be seen as an anti-technology manifesto, but rather as a sober reminder that not all tools belong in all contexts, and that the burden of proof must lie with those who claim otherwise.
Their message applies far beyond law faculties: in every sector where language, ethics, and human judgment matter, the blind adoption of generative AI is not just risky—it is a quiet surrender of the very faculties that make us human.
