Pascal's Chatbot Q&As

By late 2024, nearly 1 in 4 corporate press releases, almost 1 in 5 financial consumer complaints, and 14% of UN press releases were at least partially written by AI.

Recently founded companies were far more likely to use AI in job postings (15% vs. under 5% in older firms). Complaints written in areas with lower educational attainment had higher AI usage rates (around 20%).

The Widespread Adoption of AI-Assisted Writing — Surprising Trends, Controversies, and Lessons

by ChatGPT-4o

Introduction

In less than three years since the release of ChatGPT in late 2022, generative AI has reshaped written communication across multiple domains. The report analyzed here provides one of the most comprehensive examinations of this phenomenon to date: a large-scale study of over 1.5 million texts spanning consumer complaints, corporate press releases, job postings, and UN communications. It paints a nuanced picture of adoption: rapid, wide-ranging, and stabilizing at surprisingly high levels, with implications for trust, productivity, and democratic discourse.

Most Surprising Findings

  1. Speed and breadth of adoption: By late 2024, nearly 1 in 4 corporate press releases, almost 1 in 5 financial consumer complaints, and 14% of UN press releases were at least partially written by AI. This is breathtakingly rapid normalization across sectors that typically move at very different paces.

  2. Higher adoption in small and young firms: Contrary to expectations that big corporations with resources would lead, the study shows small, recently founded companies were far more likely to use AI in job postings—up to 15% compared to under 5% in older firms. This reflects agility, cost pressures, and cultural openness to experimentation.

  3. Education paradox: Consumer complaints written in areas with lower educational attainment had higher AI usage rates (around 20% vs. 17% in better-educated regions). This suggests LLMs may be lowering barriers to formal written advocacy, giving disadvantaged communities new tools to engage with institutions.

  4. The UN as an early adopter: That the United Nations—an institution known for formality and careful messaging—used AI in 14% of press releases by 2024 is striking. In some regions, such as Latin America and the Caribbean, adoption was closer to 20%. The fact that such a global body is already relying on AI for sensitive communications raises profound questions about authenticity and diplomacy.

Most Controversial Insights

  1. Stabilization of adoption: Unlike the expected steady “S-curve” of tech diffusion, AI writing use plateaued quickly around 2023–2024. Is this due to mistrust, regulatory frictions, or saturation? It challenges the narrative of inevitable exponential growth and suggests barriers—such as reputational risk or “AI fatigue”—may already be emerging.

  2. Trust and authenticity risks: The report notes that AI-written communication is often perceived as less trustworthy or authentic. For businesses, governments, and NGOs, credibility is currency, so heavy reliance on AI could backfire if the public suspects automation is replacing sincerity.

  3. Homogenization of communication: Especially in job postings, AI use risks making companies indistinguishable. Research even shows AI-generated posts may yield more applications but fewer hires—a paradox where efficiency undermines outcomes.

  4. Opacity of measurement: Even a study of this scale concedes that true adoption rates are likely higher than reported, because AI-written texts that are heavily edited or produced by more advanced models are nearly impossible to detect. This undermines transparency and makes policymaking harder.

Most Valuable Contributions

  1. First large-scale quantitative evidence: Unlike anecdotal accounts, this research systematically maps adoption across millions of documents. It shifts the debate from speculation (“are people really using AI for serious writing?”) to evidence (“they already are, at significant scale”).

  2. Cross-sector comparability: By studying consumers, companies, and global institutions together, the work highlights the universality of adoption and creates a shared empirical foundation for regulators, businesses, and civil society.

  3. Democratization potential: The finding that lower-education and small-organization groups lean heavily on AI underscores its potential as an equalizing tool. If managed well, this could amplify marginalized voices rather than silence them.

  4. Framework for detection and monitoring: The authors’ methodology—population-level statistical modeling—provides a scalable way to track AI-written content. Even with its limitations, it offers regulators and watchdogs a critical instrument for oversight.
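The population-level approach can be illustrated with a minimal sketch. Rather than classifying individual documents, it treats the aggregate word frequencies of a corpus as a mixture of human-written and AI-written text and estimates the mixture weight by maximum likelihood. The vocabulary and token probabilities below are invented for illustration only; in a real analysis they would be estimated from large labeled corpora (e.g., pre- and post-ChatGPT documents), and the study's actual model is richer than this toy version.

```python
import numpy as np

# Hypothetical token probabilities under human- vs AI-written text.
# These numbers are made up for illustration; real values would be
# fitted from labeled pre- and post-LLM corpora.
vocab = ["delve", "moreover", "the", "said", "robust"]
p_human = np.array([0.001, 0.01, 0.5, 0.3, 0.189])
p_ai    = np.array([0.02,  0.05, 0.45, 0.2, 0.28])

def estimate_alpha(token_counts, p_human, p_ai):
    """Maximum-likelihood estimate of the population-level fraction
    of AI-generated text, via grid search over the mixture weight."""
    alphas = np.linspace(0.0, 1.0, 1001)
    # Mixture distribution for each candidate alpha: shape (1001, vocab)
    mix = np.outer(alphas, p_ai) + np.outer(1 - alphas, p_human)
    # Multinomial log-likelihood of the observed corpus counts
    ll = token_counts @ np.log(mix).T
    return alphas[np.argmax(ll)]

# Simulate a corpus in which 20% of text is AI-written, then recover
# that fraction from aggregate counts alone.
true_alpha = 0.2
counts = np.round((true_alpha * p_ai + (1 - true_alpha) * p_human) * 1_000_000)
print(estimate_alpha(counts, p_human, p_ai))
```

The key design point is that no single document is ever labeled: only the corpus-wide shift in word usage is measured, which is why the method scales to millions of texts but cannot say which particular press release was AI-assisted.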

Recommendations

For Society as a Whole

  • Maintain critical literacy: Citizens must develop the skills to recognize, question, and critically evaluate AI-generated text, just as earlier generations learned to handle propaganda or advertising.

  • Balance empowerment with caution: For disadvantaged groups, LLMs can be powerful amplifiers. But users must be aware of risks of errors, biases, and over-reliance on generic prose.

For Governments and Regulators

  1. Transparency requirements: Mandate disclosure when AI is used in sensitive communications (corporate filings, consumer-facing statements, public policy). This does not mean banning use, but ensuring readers know what role AI played.

  2. Authenticity standards: For sectors like healthcare, finance, and diplomacy, stricter limits may be necessary to preserve credibility and reduce risks of misinformation.

  3. Equity-focused policy: Encourage access to AI writing tools for underserved groups, but ensure safeguards against exploitative uses (e.g., low-quality job posts flooding markets).

  4. Ongoing monitoring: Invest in neutral, independent auditing tools—building on frameworks like those used in this study—to track AI adoption at scale. Public dashboards could keep institutions accountable.

  5. Incentivize responsible use: Offer certification or labeling schemes for organizations that combine AI productivity with human oversight and accuracy checks.

Conclusion

The evidence from the study confirms that large language models have already become deeply embedded in the fabric of modern communication. What was once speculative is now reality: AI is not just assisting niche tasks but shaping corporate strategy, consumer advocacy, hiring practices, and even international diplomacy. The challenge ahead is to harness the democratizing potential of these tools without eroding trust, authenticity, and diversity of voice. The societal question is no longer whether AI will write for us—but whether we, as humans, will retain the ability to recognize, govern, and guide that writing responsibly.