The Shifting Sands of AI in Scholarly Publishing — From Tool to Transformation

by ChatGPT-4o

The Scholarly Kitchen article by Tony Alves, How the AI Debate Has Changed in Just a Few Short Years (Sept 2025), charts the rapid evolution of artificial intelligence within scholarly publishing from a hesitant novelty to a disruptive force. Drawing on insights from the 2022 and 2025 Peer Review Congresses, Alves captures a shift in tone from cautious optimism to uneasy pragmatism. What was once viewed as an efficiency-enhancing utility is now seen as a technology that demands new policies, ethics, norms, and cultural adaptation across the academic ecosystem.

Key Changes and Developments in the AI Debate

  1. From Supplement to Structural Disruptor

    • 2022: AI was seen as a supplementary tool for editors — e.g., image duplication detection (Proofig), language polishing, and identifying bias in manuscripts.

    • 2025: AI is now embedded deeply into authorship, peer review, and editorial decisions. It challenges human oversight and redefines norms around responsibility and disclosure.

  2. Discrepancy Between AI Use and Disclosure

    • BMJ Study: Only 5.7% of manuscripts disclosed AI use, despite estimated use rates of up to 76%.

    • AACR Study: 23% of abstracts contained AI-generated text, but only a fraction of the authors disclosed it.

    • JAMA Network: Out of 82,829 submissions, just 2.7% disclosed AI use — mostly for grammar support. Reviewer disclosures were even rarer.

    • China Survey: 60% of Chinese reviewers used AI (mainly for translation), but only 29% disclosed it.

  3. Linguistic Equity and Rejection Bias

    • Non-native English-speaking authors are twice as likely to use generative AI, yet their work is more frequently rejected before peer review — potentially signaling implicit bias or poor alignment with disclosure norms.

  4. AI in Peer Review

    • NEJM’s AI Fast Track: In experiments with AI reviewers (e.g., GPT-5, Gemini Pro), these tools sometimes outperformed human reviewers.

    • Despite this, journals like NEJM insist on maintaining a hybrid system with human oversight to mitigate manipulation and hallucinations.

  5. New Ethical and Cultural Frontiers

    • Should AI be credited as a contributor or co-author?

    • Should authorship norms evolve to accommodate AI-assisted research?

    • Can transparency, accountability, and trust be preserved in this hybrid human-AI landscape?

  6. Reviewer Quality Linked to AI Use

    • Reviewers who disclosed AI use received slightly higher quality ratings — suggesting the stigma may be unfounded and even counterproductive.

  7. Decline and Resurgence of Reviewer AI Use

    • After AACR prohibited reviewer use of AI in late 2023, usage dipped, only to rise again shortly after, indicating that enforcement and cultural buy-in remain problematic.

Additional Examples from the Wider Ecosystem

To put these findings in broader context, consider additional developments:

  • Elsevier and Springer Nature have both released AI usage policies for authors and reviewers, yet enforcement is uneven. Transparency remains largely voluntary.

  • Nature’s AI guidelines forbid listing AI tools as authors but allow their use if disclosed — echoing the “firm no” expressed by Ana Marušić in the Drummond Rennie Lecture.

  • COPE (Committee on Publication Ethics) is developing guidelines to distinguish between ethical and unethical uses of AI in publishing.

  • arXiv recently piloted AI-detection tools to flag generative content in preprints, although concerns remain about false positives and author stigma.

  • OpenAlex and Dimensions AI are experimenting with citation analyses that factor in AI-generated metadata and content influence — potentially reconfiguring how impact is measured.

Consequences for Scholarly Publishing

If left unaddressed, the current trends could lead to:

  • Erosion of trust in peer review and authorship integrity.

  • Gatekeeping bias against non-native English speakers using AI ethically for writing support.

  • Ineffective enforcement of disclosure policies due to cultural reluctance or misunderstanding.

  • Fragmentation of norms across journals, disciplines, and regions — undermining global cohesion in research publishing.

  • AI-driven inequalities, where better-resourced institutions can fine-tune AI workflows while smaller players struggle to keep up.

Recommendations for Scholarly Publishers

To safeguard the integrity, inclusiveness, and future-readiness of scholarly publishing, publishers should consider the following strategic actions:

1. Normalize and Incentivize Transparent AI Use

  • Reframe disclosure as a badge of ethical use — not a trigger for rejection.

  • Reward authors and reviewers who provide clear AI usage statements with badges, metadata tags, or visibility incentives.

2. Establish Clear, Tiered Disclosure Policies

  • Define levels of AI involvement (e.g., grammar assistance vs. content generation).

  • Offer submission tools with built-in prompts to help authors disclose usage; a minimal schema sketch follows below.
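
To make the idea concrete, here is a minimal sketch of what a tiered disclosure schema might look like in code. The tier names, fields, and follow-up prompts are illustrative assumptions for the sake of the example, not an existing industry standard:

```python
# A minimal sketch of a tiered AI-disclosure schema. The tier names,
# fields, and follow-up prompts are illustrative assumptions, not an
# existing industry standard.
from dataclasses import dataclass
from enum import Enum


class AITier(Enum):
    NONE = "no AI use"
    LANGUAGE = "grammar, spelling, or translation assistance"
    ANALYSIS = "AI-assisted literature search or data analysis"
    GENERATION = "AI-generated text, figures, or code"


@dataclass
class AIDisclosure:
    tier: AITier
    tools: list          # e.g., ["ChatGPT", "DeepL"]
    statement: str       # free-text description shown to editors


def submission_prompt(tier: AITier) -> str:
    """Return the follow-up question a submission form might ask."""
    prompts = {
        AITier.NONE: "Confirm that no generative AI was used.",
        AITier.LANGUAGE: "Which language-editing tools did you use?",
        AITier.ANALYSIS: "Describe how AI informed your analysis.",
        AITier.GENERATION: "Identify all AI-generated passages or figures.",
    }
    return prompts[tier]


print(submission_prompt(AITier.LANGUAGE))
# -> Which language-editing tools did you use?
```

The design point is that each tier triggers a different, proportionate follow-up question, so that disclosing grammar assistance costs an author almost nothing while content generation prompts a fuller accounting.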

3. Integrate AI Detection Tools with Caution

  • Use detection technologies like Sapling, GPTZero, or Originality.ai with a “human-in-the-loop” model to avoid false positives and unjustified stigmatization; a triage sketch follows below.
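
The sketch below shows one way a human-in-the-loop triage step might wrap a detector score, assuming the detector returns a 0-1 likelihood that text is AI-generated. The thresholds and action names are illustrative assumptions; Sapling, GPTZero, and Originality.ai each expose their own, differently shaped APIs, and none is modeled here:

```python
# A minimal sketch of a "human-in-the-loop" triage step around an AI-text
# detector. The score is assumed to be a 0-1 likelihood from some
# detection API; thresholds and action names are illustrative.

def triage(detector_score: float) -> str:
    """Map a detector score to an editorial action.

    Crucially, no score triggers rejection on its own: high scores only
    queue the manuscript for a human editor, limiting false-positive harm.
    """
    if detector_score >= 0.85:
        return "flag_for_editor_review"
    if detector_score >= 0.50:
        return "request_author_disclosure"
    return "proceed_to_peer_review"


for score in (0.20, 0.60, 0.90):
    print(f"{score:.2f} -> {triage(score)}")
```

The key design choice is that the strongest action available to the tool is escalation to a person, never an automated verdict.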

4. Promote Linguistic Equity

  • Explicitly recognize that AI tools can democratize access for authors writing in English as a second language.

  • Create carveouts or supportive editorial policies for EAL (English as an Additional Language) researchers using AI ethically.

5. Support Reviewer Training and Guidance

  • Provide structured training and updated reviewer guidelines covering ethical AI use, disclosure, and tool evaluation.

  • Experiment with “dual reviews,” in which AI and human reviewers assess the same manuscript, to build comparative insight; one way to quantify their agreement is sketched below.
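
A dual-review pilot needs a way to measure how often AI and human reviewers agree beyond what chance would produce. The sketch below uses Cohen's kappa over a handful of hypothetical recommendation labels; the data and label set are invented for illustration:

```python
# A minimal sketch of how a dual-review pilot might quantify AI-human
# agreement. Cohen's kappa corrects raw agreement for chance; the
# recommendation labels and data below are invented for illustration.
from collections import Counter


def cohens_kappa(a: list, b: list) -> float:
    """Cohen's kappa for two raters judging the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)


human = ["accept", "revise", "reject", "revise", "accept"]
ai    = ["accept", "revise", "revise", "revise", "accept"]
print(f"kappa = {cohens_kappa(human, ai):.2f}")  # -> kappa = 0.67
```

A kappa near 1 indicates strong agreement, while values near 0 mean the two reviewers agree no more often than chance, which would argue against leaning on the AI reviewer.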

6. Revisit Authorship and Contributorship Models

  • Work with organizations like ICMJE and COPE to evolve the contributorship model, clarifying how AI assistance should be credited — without inflating authorship.

7. Foster Cross-Publisher Consistency

  • Collaborate through STM, Crossref, and COPE to harmonize standards, metadata schemas, and AI policy frameworks.

  • Consider aligning with the EU AI Act, CNIL guidance, and other regional initiatives on transparency and algorithmic accountability.

8. Embed AI Use Metadata in DOI Records

  • Add AI-related metadata fields (e.g., AI-generated abstract, AI-assisted figures) to article records to improve discoverability and transparency; a sketch of such a record follows below.
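
As a sketch of what such a record could look like, the example below attaches hypothetical AI-use fields to an article's metadata. The field names ("ai_use", "tier", "ai_assisted_sections") are assumptions for illustration; no Crossref or DataCite schema currently standardizes them:

```python
# A minimal sketch of AI-use metadata attached to an article record.
# The "ai_use" block and its field names are hypothetical, not part of
# any current registration schema.
import json

record = {
    "doi": "10.1234/example.2025.001",   # illustrative DOI, not registered
    "title": "An Example Article",
    "ai_use": {
        "disclosed": True,
        "tier": "language",              # matches the tier sketch above
        "tools": ["ChatGPT"],
        "ai_assisted_sections": ["abstract"],
        "statement": "AI was used for grammar and translation support.",
    },
}

print(json.dumps(record, indent=2))
```

Until standards bodies agree on a shared schema, fields like these would likely live in a publisher's own systems, but structuring them early makes eventual harmonization far easier.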

Conclusion: Culture, Not Code, Will Shape the Future

As Tony Alves eloquently concludes, the AI debate is no longer about “if” but “how.” Scholarly publishing stands at a crossroads where technical capability is outpacing policy adaptation. AI is not just a tool — it’s a cultural disruptor. Whether it strengthens or erodes scientific trust will depend on the shared norms, governance models, and incentives that publishers choose to build in the next few years.

To navigate this transition responsibly, publishers must shift from reactive compliance to proactive stewardship — embracing transparency as the new paradigm and ensuring that equity and trust remain at the heart of the research communication ecosystem.

Cited Works

  1. Alves, Tony. “How the AI Debate Has Changed in Just a Few Short Years.” The Scholarly Kitchen, 24 Sept 2025.
    https://scholarlykitchen.sspnet.org/2025/09/24/guest-post-how-the-ai-debate-has-changed-in-just-a-few-short-years/

  2. Peer Review Congress Abstracts (2025)

  3. Nature AI Authorship Policies
    https://www.nature.com/articles/d41586-023-00107-z

  4. COPE AI Guidance (Ongoing Work)
    https://publicationethics.org/news/new-guidance-use-artificial-intelligence-ai-editorial-processes

  5. arXiv AI Detection Announcement
    https://arxiv.org/blog/announcing-ai-content-detection-in-arxiv-submissions/