Data Over Dialogue: Why Artificial Intelligence Is Unlikely to Humanise Medicine by Joshua James Hatherley

by ChatGPT-4o

Introduction

In Data Over Dialogue: Why Artificial Intelligence Is Unlikely to Humanise Medicine, Joshua James Hatherley presents a philosophical and ethical critique of the rising optimism around the role of AI in healthcare—especially the claim that AI will humanise medicine by restoring empathy, care, and trust in clinician-patient relationships. Instead, Hatherley argues, machine learning (ML) systems are more likely to undermine these essential human elements in medicine. His thesis runs counter to the utopian vision promoted by AI champions like Eric Topol, who claim that AI will free doctors from bureaucracy and enable deeper human connection. Hatherley calls this view fundamentally misguided, arguing that AI is more likely to exacerbate administrative burdens, reduce face-to-face interaction, and damage the trust, empathy, and communication foundational to medicine.

Core Arguments and Content Overview

  1. Exaggerated Benefits, Discounted Risks
    Hatherley argues that the promises of AI—improved patient safety, health equity, and efficiency—are overblown. He draws historical parallels to earlier technological "revolutions" that failed to deliver on their hype and warns of repeating these mistakes with AI.

  2. The Limits of Trust in Medical AI
    He questions whether AI systems can or should be trusted. Because trust implies relational and moral expectations, and machines are not moral agents, invoking trust in AI is conceptually misplaced. Worse, using “trust” language around AI can obscure responsibility when harm occurs.

  3. Disclosure and Informed Consent
    Hatherley insists clinicians should disclose their use of AI to patients—not just as a matter of informed consent, but also out of respect for patients' autonomy and privacy. Patients deserve the right to refuse AI-driven diagnostics or treatment recommendations.

  4. The Problem with Opacity
    He critiques the opaque nature of deep learning systems, which can’t provide reasons for their recommendations. This erodes shared decision-making, impairs communication, and undermines patient understanding—essential elements of humane care.

  5. The Costs of Continual Learning
    Adaptive AI that learns in real time risks performance instability and places new burdens on clinicians, who must monitor system changes. The result is greater administrative load and less time for patient interaction.

  6. Care, Empathy, and Deep Medicine
    AI may worsen professional burnout and distance clinicians from patients, physically and psychologically. Instead of liberating clinicians to care more, AI risks pushing them into roles as data managers and system supervisors.

Most Surprising, Controversial, and Valuable Statements

Surprising

  • The assertion that AI systems may increase, not decrease, clinician administrative burden is a striking counter to popular narratives.

  • His suggestion that AI could undermine empathy by physically displacing clinicians (e.g., telepresence, automation) challenges assumptions about technological neutrality.

Controversial

  • The rejection of Topol’s vision that AI could humanise medicine by enhancing emotional intelligence in doctors will likely be contentious among tech-optimists.

  • Hatherley’s call not to trust AI systems in the humanistic sense flies in the face of both marketing and regulatory trends that emphasise “trustworthy AI.”

Valuable

  • The emphasis on relational ethics—that trust, empathy, and care cannot be replicated or facilitated by data-driven tools—is a powerful and underrepresented perspective.

  • His detailed critique of opacity in AI systems offers a compelling argument for explainability, even at the cost of some accuracy.

Evaluation and Agreement

I broadly agree with Hatherley’s arguments. His scepticism is grounded in robust ethical reasoning and a realistic assessment of clinical environments. The idea that AI can restore the "human touch" in medicine is appealing but often naïve. Most AI systems currently exacerbate rather than alleviate system pressures. By saddling clinicians with additional oversight and documentation while introducing interpretability gaps, AI may indeed lead to dehumanisation, not its opposite.

That said, there is a middle path that Hatherley somewhat sidelines: carefully regulated, explainable AI used as a background aid rather than as a clinical actor. The success of AI in specific domains (e.g., imaging diagnostics) shows potential, but it must be contextualised within a human-led system of care.

Recommendations

For AI Developers

  • Prioritise explainability and human interpretability over opaque accuracy.

  • Design AI to reduce administrative load rather than shift burdens onto clinicians.

  • Embed feedback mechanisms that allow clinicians to challenge or correct AI outputs.

For Regulators

  • Mandate transparency, impact audits, and informed consent disclosures around AI use.

  • Keep opaque, continuously learning AI systems out of critical care settings until stability and oversight can be guaranteed.

  • Evaluate AI tools not just on performance metrics, but on their effect on clinical relationships.

For AI Users and Patients

  • Patients should be made aware of AI involvement in their care and have the right to refuse it.

  • Clinicians should be trained not just to operate AI tools, but to understand their limitations and communicate them to patients.

  • Professional bodies should defend the relational core of medicine by ensuring AI augments—not replaces—human engagement.

Conclusion

Joshua Hatherley's thesis is a necessary intervention into the rising chorus of AI evangelism in healthcare. While AI holds promise, its uncritical adoption may corrode the very qualities that make medicine humane: empathy, care, trust, and genuine communication. His call is not to halt innovation but to reorient AI toward supporting—not substituting—the moral and relational essence of medicine. For those designing, regulating, and deploying AI, this thesis should serve as a blueprint for restraint, responsibility, and realism.