The UK library sector is cautiously optimistic—keen to harness AI’s efficiency and insights while fiercely guarding against its biases, ethical pitfalls, and ecological harms.

Their thoughtful engagement with AI—if well-supported—can help ensure that this disruptive technology ultimately serves the public good.

AI and the UK Library Profession — Navigating Innovation, Ethics, and Uncertainty

by ChatGPT-4o

Introduction

The CILIP report “AI and the UK Library Profession: Survey Results” (August 2025), authored by Andrew M. Cox, presents an illuminating snapshot of how AI is perceived, adopted, and debated within the UK library and information profession. Drawing from a survey of 162 information professionals across various sectors—including higher education (HE), health, and public libraries—the report sheds light on both the practical uses and philosophical tensions surrounding AI's integration in libraries.

The study’s value lies not just in mapping current attitudes and applications of AI, but also in identifying the broader systemic, ethical, and professional challenges that librarians face in this rapidly changing technological landscape. Below, we examine the report’s most surprising, controversial, and valuable findings, followed by a set of recommendations for key stakeholders.

Surprising Findings

  1. AI Use More Common in Older Age Groups

Contrary to expectations, the 25–34 age group had the highest percentage of non-users of AI (42%). This disrupts common narratives that younger professionals are inherently more tech-savvy or AI-friendly.

  2. Generative AI Seen Primarily as a Productivity Tool

While 83% of respondents cited time-saving benefits, only 16% saw AI as a cost-saving tool, and just 25% felt it lowered the barrier to entry for tasks. This reveals a focus on personal productivity rather than structural transformation.

  3. Lack of Job Displacement Fear

The topic of job loss or role redundancy—often prominent in broader AI discourse—was rarely mentioned. This may reflect either optimism or a gap in critical anticipation of AI’s long-term impact on employment.

  4. AI Literacy Training Favored Over Direct AI Services

Rather than embedding AI in core library services, most institutions focus on educating users about AI, treating it as a literacy issue. This suggests a meta-level engagement with AI rather than hands-on operational deployment.

Controversial Findings

  1. Deep Skepticism About AI’s Ethics and Legitimacy

A striking 91% of respondents identified ethical concerns—especially bias, privacy, opacity, and legality—as primary barriers to adoption. Some called AI “actively damaging” to the information ecosystem due to hallucinations and misinformation.

  2. Environmental Impact Seen as a Dealbreaker

Several respondents voiced outrage over AI's ecological footprint, with one calling out ChatGPT for its alleged water consumption: “One bottle of water per query”. This reframes AI not just as a technical tool, but as a planetary concern.

  3. AI Models Viewed as Ethically Corrupt Due to Training Data

Respondents strongly criticized generative AI tools for having been trained on copyrighted academic material without consent or compensation, a practice some saw as fundamentally at odds with librarianship's ethical standards.

  4. Library IT and Policy Inhibiting Innovation

Respondents in health and public sectors described IT departments and institutional policies as gatekeepers who blocked access to AI tools—sometimes due to data privacy concerns, other times due to outdated guidelines.

Valuable Insights

  1. Policy Development Still in Early Stages

Only 34% of libraries have an AI policy, and another 25% are developing one. HE institutions are more proactive, but public and health libraries lag significantly.

  2. AI’s Integration Is Uneven Across Sectors

HE libraries lead in applications such as cataloguing, metadata generation, and chatbots. Public libraries focus on enquiry support and digital literacy; health libraries are experimenting with literature screening and decision support.

  3. Top Opportunities Identified

Respondents saw key service opportunities in:

  • Process efficiencies (42%)

  • Data skills to support AI (41%)

  • IP and copyright literacy (36%)

  4. Strong Demand for Ethical Guidance and Training

There is overwhelming demand for short courses, ethical guidance, and practical toolkits. CILIP is expected to lead not just in training, but in setting moral and professional standards.

  5. Stakeholder Expectations for Regulation

38% of respondents called on the UK government to regulate AI, particularly with respect to copyright, data protection, misinformation, and environmental impacts.

Recommendations for Stakeholders

📚 For Library Leaders and Professionals

  • Shift from Exploratory to Strategic: Move beyond ad hoc experimentation toward a strategic framework for AI adoption tailored to sector-specific needs.

  • Bridge the Skills Gap: Invest in upskilling programs—especially for public and health library staff—focused on AI literacy, prompt engineering, and critical evaluation.

  • Embed Ethics in Practice: Develop internal guidelines aligned with CILIP’s ethical standards to evaluate when and how to use AI responsibly.

  • Participate in AI Policy Shaping: Library professionals should take part in institutional and national conversations about AI governance, IP, and data justice.

🏛️ For CILIP

  • Develop Sector-Specific Playbooks: Offer detailed use-case guides for HE, public, and health libraries, highlighting safe, effective, and ethical applications.

  • Create an Ethics-Centered AI Framework: Define acceptable uses of generative AI, aligned with transparency, attribution, and sustainability goals.

  • Launch a “Green AI” Awareness Campaign: Educate members about the environmental costs of AI and promote alternatives or efficiencies.

  • Advocate for Fair AI Licensing: Collaborate with publishers, authors, and tech platforms to push for transparent and equitable use of training data.

🏛️ For UK Government and Regulators

  • Enact Strong Copyright and Attribution Protections: Ensure that generative AI tools used in libraries respect licensing terms and acknowledge training data origins.

  • Support Sustainable AI Practices: Mandate environmental transparency and sustainability metrics for AI service providers.

  • Fund AI Literacy and Digital Inclusion: Target underserved public libraries and schools to ensure equitable access to devices, tools, and training.

  • Ensure AI Does Not Undermine Information Integrity: Require platform accountability for AI-generated content and its potential to degrade public knowledge.

🏢 For AI Makers and Vendors

  • Design for Transparency: Make training data sources, algorithmic decision processes, and hallucination risks visible and manageable for institutional users.

  • Co-Develop with Libraries: Work with library professionals to design domain-specific models and tools that align with professional values and information integrity.

  • Offer Affordable, Tiered Licensing: Ensure public-sector libraries and schools can afford safe and compliant AI tools without compromising access.

Conclusion

This report reveals a professional community that is deeply engaged, intellectually critical, and ethically driven in its approach to AI. The UK library sector is neither blindly enthusiastic nor dismissive. Instead, it is cautiously optimistic—keen to harness AI’s efficiency and insights while fiercely guarding against its biases, ethical pitfalls, and ecological harms.

In an age where information ecosystems are being rapidly reshaped, librarians remain crucial stewards of trust, transparency, and truth. Their thoughtful engagement with AI—if well-supported—can help ensure that this disruptive technology ultimately serves the public good.