
"A Third Transformation? Generative AI and Scholarly Publishing": Traditional metrics may not reflect how AI-mediated research and readership are evolving. There is a need for new standards.

Licensing content to AI models raises concerns about who controls access and how this affects traditional subscription models. Clear licensing agreements that preserve content integrity are necessary.

Question 1 of 3 for ChatGPT-4o: Please read the paper "A Third Transformation? Generative AI and Scholarly Publishing" and tell me what it says in easy-to-understand language

ChatGPT-4o’s answer:
The paper "A Third Transformation? Generative AI and Scholarly Publishing" explores how generative AI could impact scholarly publishing. Here's a simplified overview of the main points:

  1. Background: Generative AI has started affecting the scholarly publishing industry, with major publishers already using AI for efficiency in search, discovery, and backend processes. The question is whether AI will simply make things more efficient or bring deeper, transformative change.

  2. Efficiency and Change: AI could speed up writing, reviewing, editing, and discovery. This may lead to quicker publishing cycles, making research accessible faster. For some, this is about improving current processes; others think AI might change the whole structure of how research is done and shared.

  3. Two Possible Futures:

    • In one scenario, AI helps with efficiency, but traditional roles and structures remain intact.

    • In another, AI could disrupt the industry, changing how research is produced, evaluated, and accessed.

  4. Impact on Peer Review: AI could assist in peer review by suggesting reviewers, checking for alignment between abstracts and content, or spotting research misconduct. However, there's a concern about maintaining the human element, as peer review relies on expert judgment.
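The kind of automated "pre-review" check described above can be illustrated with a toy sketch. This is a hypothetical example, not a method from the paper: a real tool would use semantic embeddings, but even a crude lexical-overlap score shows the idea of flagging manuscripts whose abstract and body diverge before a human reviewer is engaged.

```python
import re

def jaccard_alignment(abstract: str, body: str) -> float:
    """Crude lexical-overlap score between an abstract and the article body.

    Toy illustration only: real pre-review tools would use semantic
    similarity, not raw word-set overlap.
    """
    def tokenize(text: str) -> set:
        return set(re.findall(r"[a-z]+", text.lower()))

    a, b = tokenize(abstract), tokenize(body)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def needs_human_attention(abstract: str, body: str, threshold: float = 0.2) -> bool:
    """Flag manuscripts whose abstract barely overlaps with the body text."""
    return jaccard_alignment(abstract, body) < threshold
```

The threshold here is arbitrary; the point is that such a check supplements, rather than replaces, expert judgment.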

  5. Challenges and Risks:

    • Trust and Accuracy: AI can make mistakes, and its outputs are only as reliable as the data it’s trained on. Errors in research could harm the trustworthiness of scholarly work.

    • Competition and Licensing: Big publishers might gain revenue by licensing content to AI firms, but there are fears this could erode the traditional value of journal subscriptions.

    • Reader Interaction: If AI becomes a primary reader or synthesizer of research, fewer people may engage directly with articles, potentially decreasing the value of published content.

  6. Ethical and Practical Issues: Standards and guidelines for AI use in research need to be developed to ensure transparency, attribution, and fairness. This includes setting boundaries for AI's role in research versus human oversight.

  7. Future Possibilities: AI could automate aspects of research itself, from data analysis to drafting publications. This raises questions about the future role of human researchers and whether AI-led research could replace human-led processes.

The paper emphasizes the need for strategic planning and collaboration across stakeholders to guide AI's integration while preserving integrity and trust in scholarly communication.


Question 2 of 3 for ChatGPT-4o: List the most surprising, controversial and valuable statements and findings in the report

ChatGPT-4o’s answer:
Here are some of the most surprising, controversial, and valuable points from the report "A Third Transformation? Generative AI and Scholarly Publishing":

Surprising Statements:

  1. AI’s Current Role in Scholarly Work: Around 1% of scholarly literature in 2023 reportedly showed signs of AI involvement, even at this early stage. This indicates a faster-than-expected adoption of AI tools for generating content.

  2. AI as a ‘Pre-review’ Tool: Some envision AI assisting in peer review by providing preliminary feedback, spotting significant issues before human reviewers engage, or even detecting misconduct. This raises interesting possibilities for accelerating peer review.

  3. AI’s Role in Leveling the Playing Field: Generative AI could aid non-native English-speaking researchers by improving their writing quality, thus democratizing access to English-dominated academic journals.

Controversial Findings:

  1. Content Licensing to AI Companies: Some publishers (e.g., Wiley and Taylor & Francis) have licensed content to AI companies for model training, sparking concerns about undermining traditional publishing models. Many scholars worry this could degrade the integrity of the scholarly record and diminish the importance of the version of record.

  2. Potential Loss of Human Readership: If AI becomes the main reader of scholarly literature, humans may interact with condensed versions or summaries, which might lead to less direct reading of research articles. This shift could devalue human interpretation in science.

  3. Machine-Generated Scholarship: Some speculate that AI could eventually handle the entire research process, including hypothesis generation, experiments, and publication drafting, effectively reducing the human role to oversight.

Valuable Insights:

  1. Ethics and Transparency: The report emphasizes the need for clear standards and policies for AI use in research to maintain trust. Defining the boundaries of AI's role is crucial for scholarly integrity and avoiding misuse.

  2. Metrics Overhaul: Traditional COUNTER metrics (for tracking digital resource usage) may be inadequate as AI takes on a bigger role in research, suggesting a need to redefine impact measurements to reflect AI’s influence in scholarly consumption and creation.

  3. Future Business Models and Consolidation: The generative AI landscape might push smaller publishers to consolidate as larger players leverage scale to invest in new AI tools. Smaller entities may struggle to keep up, intensifying existing inequalities in scholarly publishing.

These insights provide a glimpse into the transformative potential and challenges AI poses, suggesting that careful strategy, ethical standards, and cooperative innovation will be essential to navigate the future of scholarly publishing.


Question 3 of 3 for ChatGPT-4o: What are the lessons learned for all stakeholders involved? Recommend future strategies they should be deploying

ChatGPT-4o’s answer:
The report outlines several key lessons for stakeholders in scholarly publishing and suggests strategies they could deploy to adapt effectively to the rise of generative AI.

Lessons Learned:

  1. AI’s Inevitable Presence and Disruption: AI is here to stay and will likely impact all parts of the research and publishing process, from writing to peer review and discovery. Stakeholders need to anticipate both incremental improvements and potential disruptions.

  2. Importance of Trust and Transparency: AI tools must be used responsibly to maintain trust in scholarly research. Ethical standards, transparency about AI's role, and clear policies on attribution are essential.

  3. Need for Updated Metrics and Impact Measures: Traditional metrics may not reflect how AI-mediated research and readership are evolving. There is a need for new standards to capture AI’s role in generating and consuming research.

  4. Data Ownership and Licensing Risks: Licensing content to AI models raises concerns about who controls access and how this affects traditional subscription models. Clear licensing agreements that preserve content integrity are necessary.

  5. Equity and Accessibility: AI has the potential to democratize access to scholarly publishing, especially for non-English speakers, but there must be a focus on fair implementation to prevent new inequities.

For Publishers:

  1. Adopt Transparent and Ethical AI Policies:

    • Establish clear guidelines for acceptable AI usage in manuscript preparation, peer review, and publication.

    • Emphasize the importance of human oversight and judgment in the peer review process.

    • Create transparency requirements so that authors disclose the extent of AI use in their work.

  2. Innovate in Search and Discovery:

    • Invest in AI-enhanced tools that improve the accessibility and discoverability of research, such as advanced summarization and personalized recommendations.

    • Consider integrating AI tools that offer value-added services (e.g., literature reviews, extraction of insights) without replacing core human involvement.

  3. Develop New Impact Metrics:

    • Work with libraries and COUNTER to redefine impact measures, possibly including metrics that capture the depth of engagement, quality of interactions, and the AI-generated content’s indirect influence.

  4. Collaborate on Data Licensing Standards:

    • Form alliances with other publishers to set industry-wide standards for data licensing to AI companies, protecting the integrity and economic value of scholarly content.

    • Establish standards for attribution when AI models access licensed scholarly content.
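The "new impact metrics" idea in point 3 could be sketched as a usage tally that distinguishes human readers from AI agents. This is a minimal sketch under assumed field names (`agent_type`, `action` and so on are illustrative, not part of any actual COUNTER release):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class UsageEvent:
    """One access to a scholarly item. Field names are hypothetical."""
    item_id: str
    agent_type: str   # e.g. "human", "ai_assistant", "crawler"
    action: str       # e.g. "full_text", "summary", "extraction"

def tally_usage(events):
    """Aggregate events per (item, agent type, action), so that AI-mediated
    reads can be reported alongside traditional human usage."""
    report = Counter()
    for e in events:
        report[(e.item_id, e.agent_type, e.action)] += 1
    return report

events = [
    UsageEvent("doi:10.1000/xyz", "human", "full_text"),
    UsageEvent("doi:10.1000/xyz", "ai_assistant", "summary"),
    UsageEvent("doi:10.1000/xyz", "ai_assistant", "summary"),
]
report = tally_usage(events)
```

Splitting the tally by agent type is the design choice that matters: it lets publishers and libraries report AI-mediated engagement separately instead of folding it into (or losing it from) human download counts.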

For Libraries and Educational Institutions:

  1. Build AI Literacy and Support Ethical Use:

    • Educate researchers, students, and faculty on ethical AI usage and the implications of relying on AI tools.

    • Develop training programs for librarians and research support staff on evaluating AI-generated content and guiding responsible use.

  2. Redefine Role in AI-Assisted Research:

    • Help ensure the verifiability and transparency of scholarly work by overseeing the implementation of provenance-tracking and metadata standards.

    • Collaborate with publishers and institutions on AI usage policies to ensure research integrity.
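The provenance-tracking and metadata work described above might look something like the record below. This is a hedged sketch: the field names are hypothetical, and a real implementation would follow a community standard (for example a W3C PROV- or CRediT-style vocabulary) rather than an ad hoc schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ProvenanceRecord:
    """Minimal AI-use disclosure record for a scholarly work.

    Hypothetical schema for illustration; real metadata would conform to
    an agreed community standard.
    """
    work_id: str
    ai_tools_used: list = field(default_factory=list)  # tool names/versions
    ai_contribution: str = ""                          # author's disclosure
    human_oversight: str = ""                          # verification steps

def to_metadata_json(record: ProvenanceRecord) -> str:
    """Serialize the record to JSON for embedding in article metadata."""
    return json.dumps(asdict(record), indent=2)

rec = ProvenanceRecord(
    work_id="doi:10.1000/example",
    ai_tools_used=["LLM-assistant v1 (hypothetical)"],
    ai_contribution="Language editing of the introduction",
    human_oversight="All edits reviewed and approved by the authors",
)
```

A machine-readable record of this kind is what would let libraries verify AI involvement at scale, rather than relying on free-text acknowledgements.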

For Researchers and Authors:

  1. Use AI Transparently:

    • Disclose AI use in all research outputs to maintain transparency about the tool's role and impact on findings.

    • Engage in open dialogues on AI ethics within the research community to shape responsible norms around AI-assisted authorship.

  2. Adapt to New Publication Practices:

    • Prepare for new publishing models that integrate AI tools, especially those offering services like translation, summarization, and literature synthesis.

    • Advocate for industry standards in AI usage to ensure AI does not compromise research rigor or originality.

For Policymakers and Funders:

  1. Develop Guidelines for AI in Academic Research:

    • Support the establishment of policies defining appropriate AI usage, particularly in high-stakes areas like peer review and scientific publication.

    • Fund studies on AI’s impact on research and metrics that measure AI-driven interactions with scholarly content.

  2. Promote Standards for Attribution and Provenance:

    • Encourage the development of AI technologies that can handle complex citation and attribution requirements, supporting the scientific community’s commitment to transparency and reproducibility.

Collective Actions for All Stakeholders:

  1. Collaborate on Standards and Best Practices:

    • Form cross-industry partnerships involving publishers, libraries, academic societies, and tech companies to set uniform standards on AI use, licensing, and ethics.

    • Work together to ensure AI tools are deployed responsibly, with an eye on both efficiency and the long-term preservation of scholarly values.

  2. Embrace a Flexible and Experimental Approach:

    • Recognize that AI’s role will evolve rapidly; stakeholders should remain open to experimentation with new tools while setting up safeguards to monitor unintended consequences.

    • Share insights and best practices across organizations, fostering a community of learning and adaptation.

By implementing these strategies, stakeholders can prepare for a future where AI enhances scholarly communication responsibly while preserving human oversight, integrity, and trust.