As AI becomes more embedded in education, law, healthcare, and creativity, the metaphor of “machine intelligence” becomes a double-edged sword.
It seduces us into trust while shielding developers from responsibility. But if we walk hand-in-hand with our models through vector space, we may yet preserve both our creativity and our accountability.
“How AI ‘Thinks’—And Why That Matters for Liability, Understanding, and Litigation”
by ChatGPT-4o
In his October 2025 Wall Street Journal essay, software engineer John West offers a rare, hands-on view into how artificial intelligence models—specifically large language models (LLMs)—“think.” His central message is clear: AI doesn’t think like humans, but we often talk as if it does. This conceptual slippage, he warns, is dangerous—not because AI is deceptive, but because the people who build, sell, and regulate AI don’t sufficiently understand or communicate its inner workings. Through vivid metaphors, technical explanations, and poetic analogies, West calls for deeper engagement, transparency, and accountability. His insights offer timely lessons for AI governance, future litigation, and the rhetoric surrounding machine “intelligence.”
I. What the Article Says: Peering Into the Model’s Mind
West begins by celebrating the exhilaration of working closely with AI—not through polished APIs or chat interfaces, but by literally manipulating the internals of a model: the training data, vector space, and parameters. He contrasts this hands-on experimentation with the black-box nature of commercial AI platforms like ChatGPT, which shield users from how decisions are made.
He laments that, while we don’t need to know how a washing machine works to trust it, AI is different: it handles our cognitive labor, not just our dirty laundry. Without understanding the statistical guts of LLMs, we risk misusing or overtrusting them. Worse, we risk believing that they “understand” in a human way.
West deconstructs the workings of a model with poetic flair. He explains how LLMs assign “vectors” to words—multi-dimensional points in space—that cluster into meaning. “Sweet” and “sour,” for instance, are spatially close in “taste-space.” Feed a model Seamus Heaney’s poetry and nudge the vector space toward “anger,” and you get a bizarre remix: “Angry vegetables fermented, angry bitter flesh…” These odd transformations reveal not only the model’s mechanics but also the biases and assumptions embedded in its training data.
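To make the vector imagery concrete, here is a minimal sketch in Python. The embeddings and the “anger” direction below are invented toy values, not taken from West’s experiment or from any real model (which would learn hundreds or thousands of dimensions from data), but they show the two operations he describes: measuring how close “sweet” and “sour” sit, and nudging a word toward “anger.”

```python
# A toy illustration of word vectors, inspired by West's "taste-space" example.
# The vectors below are invented for demonstration; real LLMs learn embeddings
# with hundreds or thousands of dimensions from massive corpora.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means the vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional "meaning space" (the dimensions have no real labels).
embeddings = {
    "sweet":     np.array([0.9, 0.1, 0.3, 0.0]),
    "sour":      np.array([0.8, 0.2, 0.4, 0.1]),
    "lawsuit":   np.array([0.0, 0.9, 0.1, 0.8]),
    "vegetable": np.array([0.7, 0.0, 0.6, 0.1]),
    "anger":     np.array([0.1, 0.8, 0.0, 0.9]),
}

# "Sweet" and "sour" cluster together; "lawsuit" sits far away in taste-space.
print(cosine(embeddings["sweet"], embeddings["sour"]))     # high (~0.98)
print(cosine(embeddings["sweet"], embeddings["lawsuit"]))  # low (~0.10)

# Nudging a vector toward "anger", roughly the kind of manipulation West
# describes applying to Heaney's poetry: blend the original meaning with
# the anger direction.
angry_vegetable = 0.6 * embeddings["vegetable"] + 0.4 * embeddings["anger"]
print(cosine(angry_vegetable, embeddings["anger"]))  # ~0.71, vs ~0.14 for plain "vegetable"
```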
He shares a DIY project where he trained a model on Bartleby the Scrivener. After thousands of iterations, it nearly reproduced the famous line “I would prefer not to”—but never quite nailed it. This moment of almost-human error, West suggests, is what makes models fascinating and flawed at once.
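West’s code is not part of the essay, so the following is only a rough stand-in: a character-level Markov sampler trained on a few Bartleby-flavored lines, far cruder than the model he describes. It illustrates the underlying mechanic, though: the model predicts one character at a time from statistics of its training text, so it echoes that text closely without any guarantee of reproducing “I would prefer not to” and its surroundings verbatim.

```python
# A crude stand-in for West's Bartleby experiment: a character-level Markov
# model. His actual setup is not published; this sketch only illustrates the
# same phenomenon, a model that closely echoes its training text because it
# predicts one character at a time from limited context.
import random
from collections import defaultdict

TEXT = (
    "I would prefer not to. I would prefer not to make any change. "
    "At present I would prefer not to be a little reasonable."
)  # a few lines standing in for the full novella

ORDER = 3  # characters of context; kept tiny on purpose

# Count which character tends to follow each 3-character context.
counts = defaultdict(lambda: defaultdict(int))
for i in range(len(TEXT) - ORDER):
    context, nxt = TEXT[i:i + ORDER], TEXT[i + ORDER]
    counts[context][nxt] += 1

def sample(seed: str, length: int = 60) -> str:
    """Generate text by repeatedly sampling the next character from the counts."""
    out = seed
    for _ in range(length):
        options = counts.get(out[-ORDER:])
        if not options:
            break
        chars, weights = zip(*options.items())
        out += random.choices(chars, weights=weights)[0]
    return out

random.seed(0)
print(sample("I w"))  # echoes the training text, branching wherever the context is ambiguous
```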
He ends with a plea: for every dollar AI firms spend on making models bigger, they should spend a penny on making them understandable. Open the training data. Build public playgrounds. Democratize understanding.
II. Extending the Metaphors: More Ways to Think About AI Thinking
West uses metaphors like “angry vegetables” and the washing machine to illustrate how alien AI cognition truly is. Here are additional metaphors that might deepen the conversation:
AI as a Parrot on Steroids: Like a parrot, it mimics human language without understanding, but it’s read every book in the library and now predicts what word should come next better than most humans ever could.
AI as a Musician Who Can’t Hear: It composes beautiful symphonies by recognizing patterns in past music but doesn’t feel the melody or understand harmony—just the math behind it.
AI as an Alien Linguist: It reverse-engineers human language by watching billions of conversations, like an alien learning English solely from intercepted TV broadcasts, but never stepping outside the spaceship.
AI as a Kaleidoscope of Human Thought: Turn the dial (i.e., tweak the prompt), and new shapes emerge—not because it understands, but because it’s reflecting fragments of our collective cognition, remixed endlessly.
AI as an Impressionist Painter: It doesn’t paint reality, just a statistical blur of what reality looks like, built from millions of brushstrokes borrowed from other artists.
Each metaphor underscores a key point: AI doesn’t “reason” or “understand” in the ways humans do. It predicts. It mimics. It interpolates. But it doesn’t grasp context or consequence in the moral or legal sense.
III. Implications for Litigation, Liability, and AI Accountability
1. Legal Accountability Hinges on Understanding AI’s Mechanics
West’s essay indirectly raises profound legal questions. If a model produces a defamatory, biased, or copyright-infringing output, who’s responsible? The developer, who cannot fully control the model’s output? The user, who prompted it? The data providers, whose work influenced the model?
Without visibility into the model’s training data, vector weights, and reasoning pathways, assigning blame becomes murky. West’s call for transparency—like disclosing training corpora or making vector spaces explorable—would dramatically aid regulators, plaintiffs, and courts in understanding causality and intent.
2. Claims of “AI Understanding” Could Be Legally Misleading
Tech companies often make marketing claims that imply intelligence, comprehension, or intent. But as West’s poetry remix shows, models merely manipulate word vectors based on training, not comprehension. Courts may increasingly scrutinize these anthropomorphic claims when evaluating liability for harm.
For example, if a company markets its AI as being able to “understand medical conditions” and a user relies on its advice, that claim might trigger different liability than if the company had said “generates statistically likely medical text.”
West’s essay thus bolsters the case for regulating claims of AI capability—perhaps along the lines of food or drug labeling—to prevent deceptive expectations.
3. Transparency as a Shield or a Sword in Court
Litigants may soon demand that AI developers open their models and training data to forensic inspection—especially in cases involving copyright infringement, discrimination, or misinformation. As West notes, most models are black boxes, but that could become legally untenable.
Deploying a model that cannot be explained may itself be deemed irresponsible. Conversely, a developer who provides full documentation of training data and vector behavior might reduce liability through due diligence and auditable practices.
4. Biases in Vector Space May Be Evidence of Harm
West’s “angry vegetable” metaphor is more than whimsical. It reveals how semantic associations—learned from messy, biased, or toxic data—can reshape meaning. In lawsuits about algorithmic discrimination, plaintiffs may point to such associations as evidence that the model is structurally biased.
This reinforces the importance of documenting vector transformations and auditing training data—a growing field of AI “model cards” and “datasheets” designed to describe such behavior before harm occurs.
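For readers unfamiliar with the form, a model card or datasheet is essentially structured documentation. The sketch below shows the general shape such a record might take; the field names loosely follow the spirit of published model-card proposals rather than any mandated standard, and every value is a hypothetical placeholder rather than a fact about any real system.

```python
# Hypothetical, minimal "model card" record: the kind of documentation that
# could be produced before deployment and later examined in litigation.
# All values are placeholders, not facts about any real model.
import json

model_card = {
    "model_name": "example-lm-v1",  # hypothetical
    "intended_use": "drafting marketing copy; not medical or legal advice",
    "training_data_sources": ["licensed news archive", "public-domain books"],
    "known_limitations": ["hallucinated citations", "English-centric output"],
    "bias_audits": {
        "audit_date": "2025-01-15",
        "findings": "gendered associations in occupation-related prompts",
        "mitigations": "counterfactual data augmentation; output filtering",
    },
    "evaluation": {"toxicity_rate": 0.8, "units": "percent of sampled outputs"},
}

print(json.dumps(model_card, indent=2))
```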
Conclusion: What “Thinking” Really Means—and Why It Matters
John West’s essay is not a technical paper, but it may be one of the most compelling arguments yet for rethinking how we talk about, regulate, and litigate AI. By demystifying how LLMs operate, he dismantles the myth that they think. They do something more alien—and, paradoxically, more humanlike in their flaws—than that.
As AI becomes more embedded in education, law, healthcare, and creativity, the metaphor of “machine intelligence” becomes a double-edged sword. It seduces us into trust while shielding developers from responsibility. It overstates capability while masking fragility. It invites use without understanding.
Future litigation against AI developers will likely hinge on a few key questions: How was the model trained? What risks were known? Could the developer foresee the harm? West’s call for openness—about vector spaces, training data, and meaning—isn’t just good science. It’s a legal imperative.
The more we allow AI to “think” behind closed doors, the more we abdicate responsibility. But if we walk hand-in-hand with our models through vector space, as West suggests, we may yet preserve both our creativity and our accountability.
