Wiley’s ExplanAItions 2025 preview reveals a research community racing to adopt AI but pausing to recalibrate its expectations.

The gap between enthusiasm and infrastructure, capability and credibility, remains wide. But the desire to use AI responsibly and effectively is unmistakable.

ExplanAItions 2025 – Charting AI’s Evolution in Research: Surging Use, Sobered Expectations, and a Call for Support

by ChatGPT-4o

The ExplanAItions 2025 preview report, published by Wiley, offers a striking snapshot of how researchers worldwide are integrating AI into their workflows. Drawing on the views of over 2,400 researchers across disciplines and regions, the report documents an inflection point: dramatic growth in adoption tempered by a more realistic appraisal of AI's capabilities. This essay unpacks the most surprising, controversial, and valuable findings from the study and concludes with targeted recommendations for the scientific community, AI developers, and regulators.

📈 Most Surprising Findings

  1. AI Adoption Skyrockets to 84%
    The proportion of researchers using AI in any aspect of their work soared from 57% in 2024 to 84% in 2025. More notably, AI usage for research- and publication-specific tasks jumped from 45% to 62%.

  2. Researchers Prefer ChatGPT Over Scientific AI Tools
    Despite the availability of domain-specific AI tools, 80% of researchers reported using general-purpose AI tools like ChatGPT, while only 25% had tried specialized tools built for research.

  3. Disproportionate Access and Awareness
    On average, only 11% of researchers had even heard of any given specialized AI tool. Even among those using AI, most rely on a patchwork of free tools, personal subscriptions, or organizational access, with significant gaps in institutional support.

  4. High Future Expectations Despite Current Limitations
    While the share of use cases in which researchers believe AI already exceeds human performance dropped sharply, from 53% to under 30%, an overwhelming 83% of researchers still expect AI to become widespread in their field within the next two years.

  5. 57% of Researchers Would Let Agentic AI Act Autonomously
    More than half of researchers would be willing to let an AI agent act on their behalf for specific research use cases, signaling a high level of openness to automation in scholarly processes.

⚠️ Most Controversial Findings

  1. Reality Check on Hype: AI ≠ Human Replacement
    The study reveals a substantial decline in researchers’ belief that AI currently exceeds human capabilities across most research tasks. This contradicts popular narratives of AI’s near-omniscience and highlights how slowly task-specific reliability is improving.

  2. Rising Concern About Hallucinations and Privacy
    Firsthand use of AI appears to have heightened concerns, not diminished them. Worry about hallucinations rose from 51% to 64%, and privacy concerns jumped from 47% to 58%—indicating that real-world use is revealing more flaws than anticipated.

  3. Lack of Guidelines and Institutional Support
    A majority of researchers (57%) still lack clear guidelines or training from their organizations. Only 41% feel adequately supported by their institutions—putting the burden of responsible AI use on individuals, not systems.

  4. Researchers Expect Editors and Peer Reviewers to Disclose AI Use
    Nearly three-quarters of researchers want peer reviewers and editors to disclose how and which AI tools were used in the review process. This could prompt policy changes and greater accountability around transparency in peer review.

  5. Corporate Researchers Have an AI Advantage
    Researchers in the corporate sector report fewer barriers, greater access to tools, and more confidence in AI’s capabilities—revealing a two-tiered system that risks leaving academic institutions behind.

💡 Most Valuable Findings

  1. AI Improves Efficiency, Quality, and Brainstorming
    Despite tempered expectations, 85% of AI-using researchers report improved efficiency, and over 70% report boosts in both quantity and quality of work. AI is also widely seen as valuable for ideation and brainstorming, even if critical thinking and task focus benefit less.

  2. Early Career and APAC Researchers Are Driving Change
    Younger researchers and those based in Asia-Pacific—especially China—are among the most enthusiastic adopters and anticipate even greater future integration of AI into their workflows.

  3. Strong Demand for Publisher Support
    73% of researchers believe publishers should provide guidance on how to use AI in the research and publishing process. This opens the door for publishers to position themselves as enablers of responsible AI use rather than passive distributors of content.

  4. Disclosures Are a Trust Enabler
    Researchers overwhelmingly support clear disclosures of AI use—not just by authors, but also by reviewers and editors. Transparency is increasingly seen as a pillar of legitimacy in the AI-powered research landscape.

Recommendations

For the Scientific Community:

  • Promote AI Literacy: Academic institutions should embed AI literacy and critical thinking around algorithmic tools into researcher training programs.

  • Establish Disclosure Norms: Clear, consistent rules for disclosing AI use—especially in drafting, methodology, and peer review—should be codified in journal policies.

  • Close the Access Gap: Libraries and departments must ensure fair access to high-quality, domain-specific AI tools, particularly for under-resourced researchers.

For AI Makers:

  • Develop Purpose-Built Tools: There’s an unmet need for AI tools tailored to scientific disciplines. Makers should co-design these tools with researchers, ensuring transparency, accuracy, and traceability.

  • Improve Trust Features: Address hallucination risks and security concerns through robust testing, watermarking, provenance tracking, and privacy-preserving models.

  • Expand Accessibility and Documentation: Freemium models with open documentation and institutional licenses will boost adoption and reduce reliance on general-purpose tools.

For Regulators:

  • Mandate AI Disclosure in Scholarly Outputs: Require journals and funding agencies to enforce transparent AI disclosures in all aspects of research and peer review.

  • Fund Research-Specific AI Infrastructure: Public funding bodies should support the development and maintenance of trustworthy, open-source AI tools for research.

  • Support Standards for Auditability: Promote technical and legal frameworks for AI auditability, particularly for agentic tools proposed for autonomous research tasks.

🧭 Conclusion

Wiley’s ExplanAItions 2025 preview reveals a research community racing to adopt AI but pausing to recalibrate its expectations. The gap between enthusiasm and infrastructure, capability and credibility, remains wide. But the desire to use AI responsibly and effectively is unmistakable.

Now is the moment for institutions, publishers, developers, and regulators to step in—not to slow down innovation, but to scaffold it. Responsible AI in research isn’t just about minimizing risks; it’s about maximizing potential through transparency, guidance, and collaboration.