AI will become normal in publishing, but trust will become scarce. The market will reward those who can prove provenance, quality, legality and human accountability.
The future will not belong to publishers that simply “use AI.” It will belong to publishers that can show why their AI-assisted outputs deserve to be believed.
Summary: AI is already used by nearly half of the North American book industry, but trust, policy and legitimacy lag far behind adoption.
The biggest concerns are copyright misuse, hallucinations, low-quality AI-generated books, lack of disclosure, biased data, legal liability and damage to authors, jobs and creative integrity.
The future will likely be reluctant normalization: AI will become embedded in publishing workflows, but the real winners will be those who can prove provenance, quality, legality and human accountability.
The Adoption Paradox: Publishing Is Using AI Before It Trusts It
by ChatGPT-5.5
The BISG and BookNet Canada report, AI Use Across the North American Book Industry 2025, captures a publishing sector caught in an uncomfortable middle state. AI is no longer speculative. It is already embedded in workflows, visible in strategy discussions, and beginning to reshape marketing, metadata, reporting, administration, editorial support and discoverability. But it has not yet earned trust. The central message of the report is not simply that the book industry is adopting AI. It is that the industry is adopting AI while remaining deeply uneasy about the legal, ethical, creative and cultural foundations of that adoption.
The most important number in the report is not any single adoption metric, but the contradiction between them. Almost half of respondents say they use AI individually, and almost half say their organizations use AI. Yet the open-ended responses are overwhelmingly negative: roughly 72% negative, 20% neutral and only 8% positive. This means AI has crossed the threshold from optional curiosity to operational reality before it has crossed the threshold of legitimacy.
That is the real story. AI adoption is moving faster than comfort, faster than governance, and faster than consensus. The industry is not rejecting AI, nor is it embracing it with confidence. It is experimenting, hedging, worrying and improvising.
The survey also reveals a sharp functional divide. AI is being used mostly where it promises efficiency: administrative work, marketing, reporting, metadata, title optimization and some editorial support. It is being used least in rights and licensing, quality assurance, translation, AI-voiced audiobooks and customer-facing engagement. That pattern matters. Publishing professionals appear willing to let AI accelerate work around the book, but remain much more cautious about allowing it to interfere with the book itself, the rights that govern it, or the relationship with readers and creators.
The report’s most revealing insight is its distinction between publishers and libraries. Publishers are using AI more heavily for outward-facing commercial functions, especially marketing. Libraries are more focused on administration, editorial evaluation and information-management tasks. The report itself makes a powerful observation: publishers may be using AI as a “megaphone” to produce more marketing and metadata at scale, while libraries are becoming quality gatekeepers dealing with the consequences of AI-generated content flooding the ecosystem. That is a structural warning. AI may create efficiency for one part of the supply chain while transferring verification costs to another.
This is exactly the problem that many industries are beginning to face. AI does not merely automate work. It redistributes burdens. It may reduce the cost of producing metadata, copy, summaries, images, manuscripts and promotional material, while increasing the cost of checking authenticity, provenance, rights, accuracy and quality. In publishing, that burden falls on editors, librarians, retailers, acquisition teams, rights teams, authors and readers.
The report therefore should not be read as a simple adoption survey. It is a map of institutional lag. AI is already entering workflows, but the control systems — policies, disclosure norms, rights infrastructure, vendor assurance, training, auditability and quality standards — are still catching up.
Surprising statements and findings
The first surprising finding is that adoption is already substantial despite such strong resistance. Individual AI use stands at 45.8%, while organizational use is 48.0%. That means nearly half the industry is already using AI, even though a large portion of respondents describe serious ethical, legal or creative objections.
Second, the most common use cases are not the glamorous ones. AI is not primarily being used to write books or replace authors. It is being used for administrative work, marketing, data analysis, editorial support and metadata. This suggests that the first durable wave of AI in publishing is operational, not authorial.
Third, rights and licensing management is among the least-used areas for AI, even though copyright concerns are the top-ranked anxiety. Only 2.8% of organizations use AI for rights and licensing management, while 86.4% identify inadequate controls around copyrighted material as a concern. That mismatch is striking. The industry’s biggest fear is rights misuse, but one of its least developed AI use cases is rights infrastructure.
Fourth, libraries are more worried than publishers about AI-generated books flooding retail platforms. An extraordinary 95.1% of library respondents identify fraudulent or low-quality AI-generated books as a pain point. This shows that libraries are already experiencing AI not as an abstract technology but as a procurement, curation and trust problem.
Fifth, publishers are more worried about copyright controls than libraries, while libraries are more worried about disclosure and low-quality AI books. This reflects their different positions in the ecosystem: publishers worry about their inputs being taken; libraries worry about bad outputs entering public knowledge systems.
Sixth, larger organizations are clearly pulling ahead. Those with more than 100 employees are much more likely to use AI, to have policies, and to encourage controlled experimentation: 69.0% of them report using AI, compared with 21.1% of one-person organizations. This suggests that AI may widen the gap between large and small players unless shared tools, standards and training are developed.
Seventh, only 31.0% of organizations have an official AI policy, while 34.2% have no policy and 26.3% are developing one. This is one of the report’s most important governance findings: adoption is already happening before policy is mature.
Eighth, Canadian respondents are more concerned than US respondents about ethics, disclosure and sustainability. This suggests that national culture, regulatory expectations and public-interest traditions may shape AI adoption as much as technology does.
Ninth, publishers are more likely than libraries to use closed or enterprise AI models. Libraries are more likely to rely on open AI models. That creates a potentially uncomfortable asymmetry: institutions responsible for public access to trustworthy knowledge may have fewer controlled, auditable or enterprise-grade tools than commercial publishers.
Tenth, the report shows strong appetite for training despite high skepticism. 56.6% of individuals believe AI training is a good use of their time, 59.4% actively stay informed about AI developments, and 45.0% are experimenting directly. This means resistance is not ignorance. Many respondents are informed, engaged and still worried.
Controversial statements and findings
The most controversial claim running through the report is that AI can be useful while also being ethically compromised. Respondents repeatedly distinguish between practical workflow benefits and deeper concerns about copyright, labour, environmental impact and creative integrity. That distinction is uncomfortable for organizations that want a simple pro-AI or anti-AI narrative.
A second controversial theme is that AI may turn publishers into hypocrites if they adopt tools built on unauthorized or opaque training data while publicly defending author rights, sustainability and creative labour. This is one of the report’s sharpest moral tensions. The industry cannot credibly defend copyright upstream while quietly benefiting from infringement downstream.
Third, the report surfaces the idea that efficiency gains may not justify social costs. Some respondents acknowledge that tasks that once took days can now take moments, but still question whether those shortcuts are worth the environmental, creative and labour consequences.
Fourth, the report challenges the inevitability narrative. Several respondents reject the assumption that AI adoption must happen simply because the technology exists. That is important because “inevitability” is often used by technology vendors to shut down ethical debate.
Fifth, there is a controversial split between departments. Editorial teams are described as suspicious and anxious, while marketing teams are more excited. This exposes a deeper cultural divide: teams closest to authorship, judgement and quality are more cautious; teams responsible for scale, speed and reach are more enthusiastic.
Sixth, the report makes clear that AI-generated content is already creating downstream costs for libraries. One respondent notes that public libraries have unknowingly purchased AI-generated titles that were unusable because of poor writing and factual inaccuracies. That finding is controversial because it suggests AI is not merely disrupting creation; it is contaminating acquisition channels.
Seventh, respondents question whether AI systems can ever be properly accountable when they obscure sources. This is especially important in scholarly, educational and public-library contexts, where citation, attribution and verifiability are not luxuries but foundations of trust.
Eighth, the relatively low ranking of sustainability concerns compared with copyright and hallucination is itself controversial. Environmental concerns appear passionately in open-ended comments, yet sustainability is not among the highest-ranked quantitative pain points. That may indicate that sustainability is emotionally salient but operationally secondary — or that copyright and quality concerns are simply more immediate for publishing professionals.
Ninth, the report’s data on anticipated growth contains a small cautionary issue: the narrative and the appendix do not appear perfectly aligned for some future-use categories, such as publicity and sales forecasting. The broad trend remains clear, with growth expected in data analysis, marketing, administration, metadata and strategic functions, but the inconsistency is a reminder that even AI reports need careful editorial checking.
Valuable statements and findings
The most valuable finding is that the industry wants practical guidance, not abstract evangelism. Respondents want best practices in metadata, laws and regulations, editorial processes, ethics, sales and marketing. That is a constructive signal. The industry does not merely want to complain about AI; it wants usable rules of engagement.
The second valuable finding is that adoption is strongest where the task is bounded. Administrative tasks, metadata, marketing support and reporting are lower-risk entry points because outputs can be reviewed, corrected and measured. This gives organizations a sensible adoption pathway: start where the workflow is structured, the output is inspectable and the risks are contained.
The third valuable finding is that human oversight remains the key legitimacy condition. The report repeatedly implies that AI may be acceptable as a tool but not as an autonomous replacement for human judgement. That distinction should become the basis for responsible publishing policy.
The fourth valuable finding is that the industry’s concerns are not narrow or reactionary. They span copyright, hallucinations, fraudulent content, biased training data, disclosure, distrust of AI companies, legal liability, discrimination, job loss, accessibility and sustainability. This is a broad risk map, not a single-issue complaint.
The fifth valuable finding is that publishers and libraries face different AI problems. Publishers need to protect rights, authors, quality and brand trust. Libraries need to protect users, collections, public knowledge and acquisition integrity. Any industry-wide AI framework must account for both.
The sixth valuable finding is that training is an adoption accelerant and a risk control. AI literacy should not be treated as cheerleading. Done properly, it helps employees understand when not to use AI, how to verify outputs, how to avoid uploading protected content, how to document human review, and how to spot hallucinations or bias.
The seventh valuable finding is that small organizations may need shared infrastructure. If only large organizations can afford enterprise tools, legal review, vendor audits and AI policies, AI will deepen market inequality. Trade associations, standards bodies, publishers, libraries and vendors should treat this as an ecosystem problem.
The eighth valuable finding is that disclosure is becoming a trust requirement. Readers, authors, librarians and buyers increasingly care whether content is human-created, AI-generated or AI-assisted. That creates a future need for labelling, provenance metadata and content authenticity standards.
The ninth valuable finding is that AI-generated books are not just a creator problem; they are a supply-chain quality problem. Retailers, distributors, libraries and metadata providers will need better mechanisms to identify low-quality, fraudulent or synthetic content before it reaches readers.
The tenth valuable finding is that the report frames AI as a governance challenge, not merely a productivity tool. That is the correct framing. The winners will not be the organizations that use AI everywhere first. They will be the organizations that learn where AI creates value without destroying trust.
a) Recommendations for those eager to speed up AI adoption
The fastest path to adoption is not hype. It is trust-building. Organizations that want faster AI uptake should stop treating governance as a brake and start treating it as the adoption engine.
First, begin with low-risk, high-reviewability workflows: meeting summaries, internal search, metadata suggestions, campaign drafts, accessibility checks, data analysis, rights-status triage and coding support. Avoid beginning with unsupervised content creation, author substitution, legal conclusions, editorial acceptance decisions or direct consumer-facing advice.
Second, create a clear internal AI policy before usage becomes chaotic. The policy should answer basic questions: what tools are approved, what data can be entered, whether copyrighted manuscripts can be uploaded, whether outputs must be reviewed, when AI use must be disclosed, and who is accountable for errors.
Third, build a controlled experimentation environment. Employees will experiment anyway. Better to give them approved tools, training, examples, red lines and escalation channels than to force experimentation into shadow AI.
Fourth, measure outcomes rather than activity. Do not count prompts, pilots or enthusiasm. Count reduced cycle time, improved metadata quality, lower error rates, faster rights checks, better discoverability, reduced repetitive workload and improved customer or author experience.
Fifth, create an “AI use case risk ladder.” Rank use cases by legal risk, reputational risk, reversibility, impact on authors, impact on readers, and need for human judgement. Then approve adoption progressively.
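One way to make the ladder concrete is to score each use case on those six dimensions and gate approval on the total. The dimensions come from the recommendation above; the scores, cut-offs and example use cases in this Python sketch are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One AI use case, scored 1 (low) to 5 (high) on each risk dimension."""
    name: str
    legal_risk: int          # copyright and liability exposure
    reputational_risk: int   # brand and author-trust damage if it goes wrong
    irreversibility: int     # how hard errors are to retract once published
    author_impact: int       # effect on authors' work and rights
    reader_impact: int       # effect on what readers ultimately see
    judgement_needed: int    # how much human editorial judgement the task needs

    def score(self) -> int:
        return (self.legal_risk + self.reputational_risk + self.irreversibility
                + self.author_impact + self.reader_impact + self.judgement_needed)

def rung(uc: UseCase) -> str:
    """Map a total score onto an approval rung (cut-offs are assumptions)."""
    s = uc.score()
    if s <= 12:
        return "approve with standard review"
    if s <= 20:
        return "pilot with mandatory human sign-off"
    return "hold pending policy, legal and rights review"

ladder = [
    UseCase("metadata suggestions", 2, 2, 1, 1, 2, 2),
    UseCase("marketing copy drafts", 2, 3, 2, 2, 3, 3),
    UseCase("AI-narrated audiobooks", 4, 5, 4, 5, 5, 4),
]
for uc in sorted(ladder, key=UseCase.score):
    print(f"{uc.name}: score {uc.score()} -> {rung(uc)}")
```

The arithmetic matters less than the discipline: every new use case acquires an explicit, comparable risk position before anyone approves it.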
Sixth, train people on failure modes. AI training should include hallucination, copyright risk, bias, confidentiality, prompt injection, data leakage, disclosure and over-reliance. A workforce that understands AI’s limits will adopt more confidently.
Seventh, involve skeptical staff early. The report shows that skepticism is strongest among people closest to quality, creativity and public trust. Excluding them will produce bad adoption. Including them will produce better guardrails.
Eighth, separate AI-assisted work from AI-generated work. Assistance can be acceptable where humans remain responsible. Generation requires stronger disclosure, review and rights controls.
Ninth, require vendor transparency. Organizations should ask vendors what data is used, whether customer data trains models, where data is stored, how outputs are logged, whether copyrighted inputs are protected, and whether enterprise controls are available.
Tenth, create internal legitimacy. Adoption will fail if staff believe leadership is using AI to cut jobs, ignore authors, or compromise sustainability commitments. Leaders need to explain where AI will be used, where it will not be used, and how human expertise will remain central.
b) Recommendations for publishers
Publishers should treat this report as a warning that AI strategy cannot be separated from rights strategy, author trust, quality control and brand reputation.
First, build AI governance around content integrity. The publishing industry’s competitive advantage is not just content volume. It is selection, validation, editorial judgement, version control, metadata, author relationships, rights management and trust. AI should reinforce these assets, not undermine them.
Second, make rights and licensing AI-ready. The report shows that rights management is one of the least developed AI use cases despite being the greatest concern. Publishers need machine-readable rights data, licensing metadata, AI-use permissions, opt-outs, territorial rights signals, training restrictions, audit clauses and usage-reporting expectations.
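To make “machine-readable” concrete, here is a minimal sketch of what such a rights record might look like. The field names and values are hypothetical illustrations, not ONIX fields or any adopted standard; the point is that AI-use permissions become data a pipeline can check rather than clauses buried in a contract.

```python
# A hypothetical machine-readable rights record; all field names are
# illustrative assumptions, not an existing industry standard.
rights_record = {
    "isbn": "978-0-000000-00-0",            # placeholder identifier
    "territories": ["CA", "US"],             # where the licence applies
    "ai_permissions": {
        "training": "prohibited",            # may the text train models?
        "summarization": "allowed_with_attribution",
        "translation": "requires_separate_licence",
    },
    "opt_outs": ["generative_training"],     # machine-readable reservations
    "audit": {
        "usage_reporting": True,             # licensee must report AI usage
        "log_retention_days": 365,
    },
}

def may_train_on(record: dict) -> bool:
    """Permit training only when it is explicitly allowed, never by default."""
    return record.get("ai_permissions", {}).get("training") == "allowed"

assert may_train_on(rights_record) is False
```

The default-deny check in `may_train_on` is the design choice that matters: the absence of a permission signal should never be read as consent.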
Third, develop clear author-facing policies. Authors need to know whether AI is used in editorial, marketing, cover design, audiobook production, metadata, translation, rights management or discoverability. Silence will create suspicion.
Fourth, push for retailer and distributor standards for AI-generated books. Retail platforms should be expected to detect, label and demote fraudulent, low-quality or misleading AI-generated content. Without this, libraries and readers will bear the verification burden.
Fifth, invest in provenance and disclosure. Publishers should support metadata fields that indicate whether content is human-created, AI-assisted or AI-generated, and whether AI was used for cover art, translation, narration, summaries or marketing copy.
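As one sketch of what such provenance fields could look like in practice (the labels and component list are assumptions, not an adopted schema):

```python
from enum import Enum

class Involvement(Enum):
    HUMAN = "human-created"
    ASSISTED = "ai-assisted"       # AI used, a human remains responsible
    GENERATED = "ai-generated"     # AI produced the component

# Hypothetical per-component provenance metadata for one title.
provenance = {
    "text": Involvement.HUMAN,
    "cover_art": Involvement.ASSISTED,
    "translation": Involvement.HUMAN,
    "narration": Involvement.GENERATED,
    "marketing_copy": Involvement.ASSISTED,
}

def needs_disclosure(p: dict) -> list[str]:
    """Components a disclosure label should surface to buyers and libraries."""
    return [k for k, v in p.items() if v is not Involvement.HUMAN]

print(needs_disclosure(provenance))
# -> ['cover_art', 'narration', 'marketing_copy']
```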
Sixth, use AI to improve discoverability, but avoid polluting metadata. AI-generated metadata can help readers find books, but if it becomes exaggerated, misleading or generic, it will degrade the ecosystem.
Seventh, protect editorial judgement. AI can assist with comparison, summarization, consistency checking and workflow acceleration. It should not replace the human editorial function that determines quality, meaning, originality and cultural value.
Eighth, create AI quality gates. Any AI-assisted output used externally should go through review for accuracy, rights, tone, bias, accessibility, disclosure and brand alignment.
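A quality gate can be as simple as refusing to release an output until every named check carries an explicit human sign-off. This sketch mirrors the checklist above; the workflow itself is an assumed minimal design, not something the report prescribes.

```python
# Checks mirror the review list above; the gate is an illustrative design.
REQUIRED_CHECKS = (
    "accuracy", "rights", "tone", "bias",
    "accessibility", "disclosure", "brand_alignment",
)

def gate(signoffs: dict[str, bool]) -> tuple[bool, list[str]]:
    """Pass only if every required check was reviewed and approved."""
    missing = [c for c in REQUIRED_CHECKS if not signoffs.get(c, False)]
    return (not missing, missing)

ok, missing = gate({"accuracy": True, "rights": True, "tone": True})
if not ok:
    print("blocked; outstanding checks:", ", ".join(missing))
```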
Ninth, create trusted AI products from licensed content. Publishers should not only defend against AI misuse. They should build and license high-quality, rights-cleared, attributable, evidence-based AI services, especially in education, research, professional learning and specialist domains.
Tenth, collaborate across the industry. No single publisher can solve AI-generated fraud, provenance, content authenticity, rights signalling or platform accountability alone. BISG, BookNet Canada, standards bodies, libraries, retailers and publishers should turn the report’s findings into shared protocols.
c) Prediction for the future based on this data
The future of AI in publishing will not be mass rejection. It will be reluctant normalization.
By 2027 or 2028, most medium-sized and large publishing organizations will have AI policies, approved tools, training programs and controlled workflows. AI will become ordinary in metadata, marketing, internal search, sales analysis, accessibility support, reporting, rights triage and production support. The more sensitive creative and editorial functions will remain contested, but AI assistance will slowly become embedded there as well.
The industry will split into three layers. The first layer will be high-trust publishing: rights-cleared, human-reviewed, provenance-rich, professionally edited content. This layer will become more valuable as AI-generated slop increases. The second layer will be operational AI publishing: companies using AI to improve speed, discoverability and workflow efficiency while maintaining human accountability. The third layer will be low-trust synthetic publishing: AI-generated books, fake expertise, low-quality summaries, fraudulent titles and marketplace pollution. That third layer will grow quickly and force everyone else to invest in verification.
Libraries will become even more important as trust filters. They will not merely acquire books; they will increasingly validate them. That will increase costs unless better supply-chain tools emerge. Retailers will also face pressure to label and police AI-generated content more aggressively.
Large publishers will gain an advantage because they can afford enterprise tools, legal review, rights systems, training and governance. Smaller publishers and independent professionals will lag, rely on consumer AI tools, or depend on shared industry infrastructure. This could become a quiet concentration force in publishing.
The biggest unresolved issue will be rights. The report shows that copyright concerns dominate the industry’s risk perception, yet rights-management AI remains immature. That gap will drive demand for machine-readable licensing systems, AI training permissions, attribution tools, provenance standards, audit logs and content-use reporting. The winners will be those who turn rights from a legal afterthought into technical infrastructure.
The deepest risk is not that AI will replace publishing. It is that AI will make publishing cheaper, faster and less trustworthy unless the industry acts deliberately. The opportunity is the opposite: publishers can use AI to make trusted knowledge more discoverable, more accessible, more useful and more measurable, but only if they refuse the false bargain of efficiency without accountability.
ChatGPT’s prediction is therefore blunt: AI will become normal in publishing, but trust will become scarce. The market will reward those who can prove provenance, quality, legality and human accountability. The future will not belong to publishers that simply “use AI.” It will belong to publishers that can show why their AI-assisted outputs deserve to be believed.
