Summary: AI has dramatically lowered the cost of creating books, causing a flood of new titles, but the average quality of those books appears to have declined.
The real value of publishers, educators, libraries, and knowledge institutions will shift from producing or distributing content to verifying, curating, contextualizing, and protecting trust.
The societal risk is not that AI creates books, but that it creates so much plausible low-quality material that readers, learners, and professionals struggle to distinguish real knowledge from synthetic noise.
The Great Book Flood: Why AI Is Making Publishing Bigger, Cheaper — and Harder to Trust
by ChatGPT-5.5
The NBER working paper “AI and the Quantity and Quality of Creative Products: Have LLMs Boosted Creation of Valuable Books?” asks a deceptively simple question: has generative AI increased the creation of valuable books? Its answer is both reassuring and worrying. Yes, AI has dramatically increased the number of books being released. Yes, some of those books create value for readers. But the average quality of new books has gone down, because the market is being flooded with large numbers of low-usage, low-rated AI-assisted or AI-generated works.
The paper’s central finding is that large language models have lowered the cost of book creation so much that new book releases on Amazon nearly tripled between 2022 and late 2025. In some categories, the increase was far higher. Travel, sports, self-help, computers and technology, and other practical nonfiction categories saw especially strong growth. That makes intuitive sense: if a person can use AI to produce a travel guide, diet book, business manual, or self-help title quickly and cheaply, many more people will try.
But quantity is not quality. The authors find that books with detected AI content receive far fewer ratings, worse sales ranks, and lower star ratings than books without detected AI. In simple terms: lots more books are being published, but many of them do not seem to attract much reader interest. The paper’s most useful formulation is that AI has expanded the middle and lower parts of the book market much more than the top. It has produced more “modestly useful” books, but not clear evidence of more truly excellent books.
The authors nevertheless argue that AI has created some consumer benefit. Even if most AI books are weak, the sheer number of extra books means that some readers find something useful, cheap, or specific enough to meet their needs. Their model estimates that AI books increased consumer surplus by about seven percent in 2025. That is not nothing. But it is modest compared with the enormous increase in output. The ratio is the important point: the market got dramatically bigger, but only slightly more valuable.
The paper also contains one finding that may surprise traditional authors and publishers: AI has not yet displaced incumbent authors. Authors who were active before the LLM boom continued publishing, and their output even rose after 2023. The lower quality of AI books appears to be driven partly by who uses AI. Newer, less experienced, and lower-performing authors are more likely to adopt it. That means AI is not simply making good writers worse. It is also allowing many people with limited prior writing success to enter the market.
For publishing, the consequence is clear: the old scarcity problem is being replaced by an abundance problem. Publishing used to be constrained by access to editors, agents, printers, distributors, shelf space, and marketing channels. Digital self-publishing already weakened those constraints. AI now removes another barrier: the difficulty of producing long-form text. The bottleneck therefore shifts from production to discovery, verification, branding, trust, and curation.
This is strategically important for publishers. If anyone can produce a book-like object, the value of being a publisher can no longer rest mainly on making content available. It must rest on proving that the content is worth trusting. That includes editorial selection, author credibility, peer review, provenance, rights clearance, correction mechanisms, version control, metadata quality, and post-publication accountability. In a flooded market, the publisher becomes less a manufacturer of books and more a trust institution.
For scholarly and educational publishing, the implications are even sharper. In trade publishing, a weak AI travel guide may disappoint a reader. In education, science, medicine, law, or professional learning, weak AI content can mislead. It can create false confidence. It can blur the difference between fluency and expertise. The paper measures quality mainly through usage, ratings, and sales rank. That makes sense for consumer welfare analysis, but it is not enough for knowledge markets. A book can be popular and wrong. A niche scholarly work can be valuable and rarely rated. In high-trust domains, quality cannot be reduced to consumer appeal.
For learning, the risk is that students and lifelong learners may encounter more content but less guidance. A world with millions of cheap AI-generated learning resources sounds democratic. But without curation, it can become cognitively expensive. Learners must spend more time deciding what is accurate, what is current, what is pedagogically sound, and what is merely plausible. AI may therefore reduce the cost of producing learning materials while increasing the cost of choosing trustworthy ones.
This is where human expertise remains essential. Good learning is not just text generation. It requires sequencing, explanation, assessment, feedback, examples, correction, motivation, accessibility, and alignment with real learning outcomes. AI can help with many of these tasks, but the paper’s findings suggest that simply lowering the cost of production does not automatically raise the quality of the learning ecosystem. More content can mean more opportunity, but it can also mean more noise.
For knowledge as a whole, the deeper issue is informational pollution. The paper is not about misinformation directly, but its findings point in that direction. If AI can triple the number of books while lowering average quality, similar dynamics may affect articles, reports, white papers, course materials, policy briefs, research summaries, and professional guidance. The knowledge environment may become saturated with plausible, cheap, derivative, partially correct material. That does not destroy knowledge, but it makes reliable knowledge harder to identify.
This creates a paradox. AI can democratize authorship while weakening the signals that help society distinguish expertise from production volume. More people can publish. More niche communities can be served. More languages, topics, and micro-markets can be addressed. But the same mechanism also enables spam, imitation, low-effort publishing, SEO manipulation, synthetic authority, and the laundering of poor-quality information through book-like formats.
For society, the consequences depend on whether institutions respond properly. The optimistic scenario is that AI becomes a productivity layer: it helps good authors write faster, helps experts explain better, helps publishers serve niche audiences, and helps readers find the right content. In that scenario, the market becomes richer, more diverse, and more inclusive.
The pessimistic scenario is a trust collapse. If readers, students, teachers, librarians, researchers, and professionals increasingly encounter low-quality AI-generated material, they may become more cynical about published content in general. This would be especially damaging in science, education, healthcare, law, and democratic debate, where societies rely on shared confidence in reliable knowledge. Once trust becomes scarce, the cost of verification rises for everyone.
The paper also exposes a governance gap. Amazon requires authors to disclose AI-generated content, but according to the paper, that information is not passed on to consumers. That is a major market-design problem. If readers cannot easily tell whether a book is human-authored, AI-assisted, AI-generated, professionally edited, or rights-cleared, the market cannot properly reward quality and transparency. Disclosure will not solve everything, but without it, readers are forced to navigate a polluted market with weak signals.
The lesson for publishers is not “AI books are bad.” That would be too simple. The better lesson is: AI makes production cheap, so trust becomes expensive. The winners will be those who can combine AI-enabled scale with human accountability, provenance, editorial judgment, rights integrity, and strong discovery systems. Publishers that treat AI merely as a cost-cutting tool may add to the flood. Publishers that treat AI as a way to strengthen verified knowledge may become more important than ever.
For learning platforms, universities, schools, libraries, and professional societies, the recommendation is similar. Do not measure success by the number of AI-generated resources created. Measure success by learning outcomes, accuracy, updateability, source quality, accessibility, and user trust. A thousand AI-generated study guides are not valuable if students cannot tell which one is correct.
For authors, the paper offers both warning and opportunity. AI can help with drafting, structuring, editing, brainstorming, and translation. But it does not automatically create reader value. The market already appears to punish much low-effort AI output. The authors most likely to benefit are not those who outsource judgment to AI, but those who use AI to amplify real expertise, voice, research, and craft.
The big societal conclusion is that we are entering the age of infinite publishability. That is historically significant. Printing lowered the cost of copying. The internet lowered the cost of distribution. Self-publishing lowered the cost of market entry. Generative AI lowers the cost of creation itself. Each step expands access, but each step also creates new trust problems. The institutions that survive will be those that help people navigate abundance without drowning in it.
So the paper’s message is not anti-AI. It is more subtle and more useful: AI has made the book market larger, somewhat more valuable for consumers, but also noisier and lower-quality on average. That is likely a preview of what will happen across many knowledge markets. The future will not be defined simply by whether AI can produce content. It already can. The defining question is whether society can build the filters, incentives, rights systems, provenance infrastructure, and trusted institutions needed to make that content worth relying on.
