The Ipsos AI Monitor 2025 captures a world in flux: awed by AI’s potential yet wary of its social consequences. This paradox shapes how people evaluate brands, institutions, and the information they consume.
The future will reward those who can navigate both the wonder and the worry of AI with integrity, transparency, and a steadfast commitment to human-centered values.
Understanding Public Sentiment on Artificial Intelligence – An Analysis of the Ipsos AI Monitor 2025
by ChatGPT-4o
The Ipsos AI Monitor 2025, surveying over 23,000 adults across 30 countries, offers a nuanced and at times contradictory view of how people around the world perceive artificial intelligence (AI). The report captures the deep ambivalence many feel: optimism about AI’s promise, tempered by skepticism about its consequences. This tension — aptly described as "the Wonder and the Worry of AI" — is at the heart of the global AI discourse. Below is an overview of the report’s key messages and its most surprising, controversial, and valuable findings, followed by best practices scholarly publishers should consider in this shifting landscape.
Key Messages
Global Ambivalence: Hope Meets Hesitation
A slim majority (52%) of global respondents are excited about AI-powered products and services, but nearly as many (53%) are nervous. In markets such as the US, UK, and Canada — the "Anglosphere" — nervousness tends to outweigh excitement. Conversely, enthusiasm is stronger in Southeast Asian countries such as Indonesia and Thailand.
Economic outlooks shape attitudes: countries more optimistic about AI’s effect on the economy are more excited about it overall.
AI Has Already Changed Lives – and Will Do More
52% say AI has already changed their lives in the past 3–5 years. Two-thirds expect it will do so again in the next 3–5 years. Tasks such as online search, ad targeting, and even restaurant order-taking are widely seen as soon-to-be AI domains.
Trust in Regulation and Corporate Use Is Shaky
Just 31% of Americans trust their government to regulate AI responsibly. Globally, only 48% trust companies using AI to keep their data safe.
Despite this, 79% believe AI use should be disclosed by companies — signaling a demand for transparency even amid uncertainty.
The Preference for Humans in Creative Domains Persists
Whether it’s news articles, photojournalism, or movies, most people prefer human-generated content. For example, 71% prefer humans to create photojournalistic content, and 69% favor humans for news stories.
Yet expectations are growing that AI will soon dominate many of these domains, including advertising, job screening, and political ad generation.
Job Market Anxiety Coexists with Personal Optimism
Only 31% think AI will improve the national job market, while 35% think it will make the job market worse. However, 38% believe AI will improve their own job situation — a curious optimism that may mask deeper concerns about societal shifts.
AI Is Trusted More Than Humans — Sometimes
A surprising 54% of respondents trust AI not to discriminate — more than the 45% who trust their fellow humans. This is especially true among Gen Z and Millennials.
However, confidence drops when it comes to trusting AI with user data or with tasks like writing news stories.
Most Surprising, Controversial, and Valuable Findings
Surprising: In a reversal of traditional thinking, more people trust AI than humans to avoid bias. Younger generations (Gen Z and Millennials) especially exhibit this counterintuitive trust in machines over people.
Controversial: AI-generated political ads are expected to become a norm — and 73% of people think they’re likely to happen. Yet they are also among the most discomforting uses of AI, along with writing news articles and creating disinformation. These uses threaten democratic trust and information integrity.
Valuable: The public clearly distinguishes between different applications of AI. For instance, 79% see AI-driven online search as likely, yet far fewer accept AI in journalism or content creation. This signals an opportunity for brands and publishers to tailor their AI use transparently to avoid backlash.
Best Practices and Recommendations for Scholarly Publishers
Given the report’s findings, scholarly publishers face both challenges and strategic opportunities. Here are actionable insights:
Emphasize Human Curation and Authorship
With the strong global preference for human-generated content, publishers should highlight the role of authors, editors, and peer reviewers in creating trustworthy, accurate material — especially in scientific or journalistic contexts.
Implement Transparent AI Disclosure Policies
Align with the 79% of respondents who want AI use disclosed. Publishers should clearly communicate when AI is used — whether for summaries, translations, or accessibility — and when human judgment remains central.
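As a purely illustrative sketch of what such a policy could look like in practice (the field names, values, and DOI below are invented for this example, not an existing metadata standard), a publisher might attach a machine-readable AI-use record to each article:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical AI-use disclosure record; every field name here is an
# assumption for illustration, not part of any published standard.
@dataclass
class AIDisclosure:
    article_doi: str
    ai_assisted_tasks: list[str]   # e.g. ["translation", "plain-language summary"]
    human_reviewed: bool           # human judgment remains central
    notes: str = ""                # free-text description of the tools used

record = AIDisclosure(
    article_doi="10.1234/example.5678",   # invented DOI
    ai_assisted_tasks=["translation", "plain-language summary"],
    human_reviewed=True,
    notes="Generative model drafted the lay summary; staff edited and approved it.",
)

# Serialize for embedding in article metadata or an API response.
print(json.dumps(asdict(record), indent=2))
```

Exposing a record like this wherever the article appears would make the disclosure expectation auditable rather than merely aspirational.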
Focus on Trust and Data Security
Only 48% trust companies to keep their data safe. Scholarly publishers must ensure robust data protection and be proactive in communicating their safeguards to institutions, authors, and readers.
Avoid Full Automation in High-Stakes Communication
The public is especially uncomfortable with AI-generated political messaging, news, and reviews. Scholarly publishers should be cautious about deploying generative AI in article abstracts, editorials, or critical commentary — areas where human nuance matters.
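One minimal pattern for operationalizing that caution, sketched here with invented section names and a deliberately simple rule, is a hard human-review gate: AI-drafted text in high-stakes sections never reaches publication without an editor's sign-off.

```python
# Human-in-the-loop publication gate; the section names and the rule itself
# are illustrative assumptions, not a prescribed editorial workflow.
HIGH_STAKES_SECTIONS = {"abstract", "editorial", "commentary", "review"}

def may_publish(section: str, ai_generated: bool, editor_approved: bool) -> bool:
    """Block AI-drafted high-stakes content unless a human editor signed off."""
    if ai_generated and section in HIGH_STAKES_SECTIONS:
        return editor_approved
    return True

# An AI-drafted abstract is held until an editor approves it.
assert not may_publish("abstract", ai_generated=True, editor_approved=False)
assert may_publish("abstract", ai_generated=True, editor_approved=True)
```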
Explore AI for Infrastructure, Not Content
AI is widely accepted for tasks like search (79% see this as likely), customer support, and efficiency improvement. Publishers should invest in AI to enhance back-end workflows — search, indexing, metadata enrichment — rather than replace the content itself.
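As one hedged sketch of that kind of back-end investment (using scikit-learn's TF-IDF tooling; the toy corpus and query are invented for this example), a publisher could improve discovery by ranking abstracts against a reader's query, with AI touching retrieval but never generating content:

```python
# Back-end discovery sketch: TF-IDF search over abstracts, no content generation.
# Requires scikit-learn; the corpus below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Machine learning methods for protein structure prediction.",
    "A survey of peer review practices in open access journals.",
    "Metadata enrichment pipelines for scholarly discovery platforms.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)           # index the corpus

query_vec = vectorizer.transform(["improve metadata for discovery"])
scores = cosine_similarity(query_vec, doc_matrix).ravel()  # rank by similarity
best = scores.argmax()
print(f"Top match (score {scores[best]:.2f}): {abstracts[best]}")
```

TF-IDF is deliberately modest here; the same separation of concerns holds if embedding models later replace it, since retrieval and ranking still leave the published content untouched.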
Prepare for Workforce Adaptation
AI is seen as a personal productivity enhancer but a threat to the broader job market. Scholarly publishers must reskill staff for AI-assisted workflows, while reaffirming their value in maintaining research integrity and editorial oversight.
Align with Positive Economic Narratives
Countries optimistic about AI’s economic impact are more enthusiastic overall. Publishers should position their AI adoption as part of a broader value-creation mission: improving scientific communication, accelerating discovery, and enhancing global access to knowledge.
Participate in Regulatory Dialogues
With global trust in government AI regulation at 54% — and much lower in some key markets — scholarly publishers can play a leadership role. They should advocate for responsible AI governance, especially around research ethics, content licensing, and provenance tracking.
Support Public Education on AI Literacy
While 67% say they understand AI, only 52% know what products and services use it. Publishers can contribute by embedding explainers, metadata, or visualizations in their digital platforms to demystify AI tools used in search, discovery, and analytics.
Plan for Resilient Reputation Management
The report warns of potential backlash. Publishers should monitor AI sentiment closely, track platform trust metrics, and prepare communications strategies in case of reputational risks tied to AI use (e.g. hallucinated citations, misuse of research outputs).
Conclusion
The Ipsos AI Monitor 2025 captures a world in flux: awed by AI’s potential yet wary of its social consequences. This paradox shapes how people evaluate brands, institutions, and even the information they consume. For scholarly publishers — stewards of knowledge and trusted content — the implications are profound. To remain credible and competitive, they must balance innovation with ethics, human insight with machine efficiency, and bold experimentation with public accountability. The future will reward those who can navigate both the wonder and the worry of AI with integrity, transparency, and a steadfast commitment to human-centered values.
