• Pascal's Chatbot Q&As
  • Posts
  • The following analysis details the twenty-five primary frictions governing the success of AI scaling, followed by an examination of the systemic consequences of this institutional impasse.

The following analysis details the twenty-five primary frictions governing the success of AI scaling, followed by an examination of the systemic consequences of this institutional impasse.

The future will not be decided by the intelligence of the models, but by the resilience and adaptability of the societies they inhabit.

Institutional Lag and the Great Decoupling: A Global Analysis of AI Adoption Frictions

by Gemini 3.0, Deep Research. Warning, LLMs may hallucinate!

The trajectory of artificial intelligence adoption in 2024 and 2025 reveals a profound systemic phenomenon termed the Great Decoupling, wherein the exponential advancement of model capabilities has fundamentally outpaced the linear adaptation of institutional, legal, and psychological frameworks. While frontier models demonstrate PhD-level performance in isolated benchmarks, the translation of this intelligence into enterprise-wide value remains stalled. Current research indicates that despite approximately $40 billion in annual enterprise investment, only 5% of AI initiatives are currently producing measurable returns on investment.1 This discrepancy is not a temporary lag in deployment but a structural misalignment between the probabilistic nature of machine intelligence and the deterministic requirements of global commerce and governance.

The following analysis details the twenty-five primary frictions governing the success of AI scaling, followed by an examination of the systemic consequences of this institutional impasse.

The Taxonomy of Friction: Top 25 Challenges to Global AI Adoption

1. The Governance Gap and the Proliferation of Shadow AI

The most immediate and pervasive friction to institutional adoption is the rise of Shadow AI, the unauthorized use of artificial intelligence tools by employees without formal approval or oversight.3 By mid-2025, an estimated 71% of knowledge workers were utilizing AI tools outside of organizational governance frameworks.5 This invisible adoption represents a fundamental collapse of traditional IT management. Research suggests that while 97% of AI-related security breaches involve systems lacking proper access controls, 63% of breached organizations possessed no formal AI governance policy at the time of the incident.6 The financial friction is quantifiable: breaches involving high levels of Shadow AI add an average of $670,000 to the total cost of a data breach.8

2. Unclear Provenance and the Fuel Rights Crisis

Scaling AI requires “fuel” in the form of training data, yet the rights to train, fine-tune, and retrieve content remain legally contested and opaque.10 The inability of models to provide “receipts” for their weights creates an auditability deficit that prevents adoption in knowledge-intensive sectors like finance and healthcare.10 Litigation in 2025, such as the case brought by major media publishers against Cohere, underscores the risk of “unlawful reproduction” where AI outputs are deemed verbatim copies or substitutive summaries of copyrighted works.11

3. The Productivity J-Curve and the Adjustment Cost Paradox

The implementation of general-purpose technologies typically induces an initial stagnation or decline in measured productivity before long-term gains materialize, a phenomenon known as the J-curve.12 In manufacturing, AI adoption has been shown to initially reduce labor productivity by as much as 1.33 percentage points, with some causal estimates suggesting losses of up to 44% in the short term due to the high cost of process re-engineering and workforce training.13 Organizations often abandon AI initiatives in the “trough” of the J-curve, failing to realize that gains are conditional on weathering these significant initial losses.14

4. Asymmetric Liability and the Fault-Based Regulatory Vacuum

Liability for AI-driven errors remains a significant deterrent to adoption, as the potential downside of a failure—such as a medical misdiagnosis or a financial crash—rests entirely with the adopting organization rather than the technology provider.10 The withdrawal of the EU AI Liability Directive in early 2025 has left a “regulatory vacuum” regarding fault-based liability.15 While strict liability applies to defective products under the revised Product Liability Directive, harms arising from negligent conduct within the AI supply chain remain governed by fragmented national tort laws, increasing the burden of proof for victims and the risk profile for enterprises.16

5. Environmental Resistance and the Infrastructure Backlash

The physical requirements of AI scale—massive energy consumption and water for cooling—have triggered a bipartisan backlash in the physical world.17 In 2025, local opposition in the United States led to the cancellation of at least 25 data center projects, representing 4.7 gigawatts of lost capacity.17 Water consumption is the primary driver of this resistance, with a medium-sized data center “drinking” up to 110 million gallons per year, equivalent to 1,000 households.19 This “physical world” friction is now a more immediate bottleneck to scale than algorithmic development.17

6. Cognitive Offloading and Human Skill Atrophy

Psychological research in 2025 has identified a significant negative correlation between frequent AI usage and critical thinking abilities, mediated by “cognitive offloading”.20 As users delegate reasoning and memory tasks to technology, their internal cognitive skills may atrophy.22 Studies utilizing the Halpern Critical Thinking Assessment show that younger participants, who exhibit higher AI dependence, score lower in critical thinking than older cohorts.20 This suggests a future risk of “human enfeeblement,” where the workforce is no longer capable of independent verification or handling complex tasks when AI access is removed.24

7. Hardware Scarcity and Geopolitical Export Controls

The global distribution of AI compute is increasingly constrained by strategic geopolitics, specifically GPU export controls.26 The January 2025 AI Diffusion Rule introduced comprehensive restrictions on the export of GPUs and the deployment of AI workloads involving Chinese entities.26 Estimates suggest that if current controls persist, the United States will maintain a 31x advantage in supercomputing capacity over China by 2026.27 This “compute divide” creates a two-speed world where adoption is physically impossible for organizations in restricted jurisdictions.27

8. Systemic Failure Cascades in Integrated Infrastructure

As AI is integrated into critical infrastructure, the risk of “failure cascades”—where an error in one node propagates at machine speed throughout the entire network—increases.29 Multi-agent systems exhibit emergent behaviors that are impossible to predict from individual agent analysis, creating vulnerabilities to “computational cascades”.30 A failure in a major LLM provider, such as the outages recorded in April 2024, can immediately affect a global user base, yet institutional response times remain tied to human cycles.31

9. Epistemic Power Concentration and Scientific Monocultures

The adoption of AI in scientific research risks the creation of “epistemic monocultures,” where a small number of AI platforms effectively determine which ideas get explored.33 Because these systems are optimized on existing literature, they may favor incremental “normal science” and suppress fundamental novelties that are subversive to current paradigms.33 This “value lock-in” makes the scientific ecosystem efficient under stable conditions but catastrophically vulnerable to unexpected challenges.33

10. M&A Due Diligence and Inherited Model Risk

In the surge of AI-driven mergers and acquisitions in 2025, buyers are discovering that traditional due diligence is insufficient to uncover “hidden liabilities”.35 These include “toxic” training data, missing consents, and algorithmic biases that could expose the acquirer to massive litigation, such as the $1.5 billion potential liability seen in the Bartz v. Anthropic case.37 Nearly 90% of companies reportedly fail to conduct proper cyber due diligence in M&A deals, creating “strategic organizational blind spots”.7

11. Middle Management Resistance and Status Threat

The introduction of AI often encounters “stiff resistance” from middle managers who perceive it as a threat to their authority, headcount, and budget.10 Research by Gartner predicts that through 2026, 20% of organizations will use AI to flatten their structures, potentially eliminating more than half of current middle management positions.39 Consequently, managers may sabotage AI initiatives or performatively adopt them while maintaining legacy structures to preserve their professional sovereignty.10

12. The “Commingling Problem” in Public-Private Datasets

In sectors where data is pooled across different organizations, “commingling” creates a governance nightmare.10 Once datasets are mixed, they become subject to a stew of different licenses, privacy regimes, and security classifications.10 Retracting or segregating “dirty” or unlicensed data from a trained model is technically difficult and often requires the destruction of the entire model archive, a cost that most organizations cannot bear.37

13. Regulatory Fragmentation and Interoperability Drag

While the technology is global, governance remains local, leading to a “trust gap” that taxes global growth.41 Fragmented development between the EU AI Act, U.S. executive orders, and Brazilian frameworks creates compliance drag for multinational organizations.41 Without a “shared venue” to coordinate norms, companies face the choice of bespoke deployments for every jurisdiction—a process that is expensive, slow, and inherently brittle.10

14. Data Center Water Scarcity and Regional Stress

In addition to energy, the “thirst” of AI is creating acute regional stress.18 In South Carolina, data centers are projected to account for up to 70% of all new energy usage, while in Virginia, residents have voted out council members who supported Amazon’s data center expansion.19 This environmental backlash is not limited to the United States; similar resistance is emerging in Latin America and Europe, creating a global “NIMBY” (Not In My Backyard) flashpoint for AI infrastructure.18

15. The Human-in-the-Loop Scaling Myth

Organizations often claim to keep “humans in the loop” to mitigate AI risks, but as AI output volume grows, this becomes a “rubber stamp” rather than a meaningful control.10 If the volume of automated tasks exceeds the human capacity for review, oversight becomes performative.10 In high-stakes sectors like combat or finance, this can lead to “operationally indistinguishable” errors that cascade faster than a human can intervene.44

16. Integration with Legacy IT Debt

The “digital core” of many enterprises remains built on rigid legacy infrastructure that is often incompatible with autonomous AI agents.45 Overcoming this requires not just “plugging in” AI, but a wholesale process of platform modernization and API-driven re-engineering.46 Nearly 60% of AI leaders cite integration with legacy systems as their primary hurdle to scaling agentic AI.46

17. Organizational AI Literacy Deficit

Most institutions lack a shared mental model of what AI models actually do and how they fail.10 This literacy gap leads to either reckless over-trust or paralyzed risk aversion.10 In 2025, only 28% of employees in organizations implementing AI strongly agree their manager supports its use, reflecting a fundamental lack of leadership readiness.2

18. Security Model Abuse: Prompt Injection and Agent Hijacking

The attack surface of the enterprise has expanded to include prompt injection, jailbreaking, and training data poisoning.47 Researchers in 2025 documented one-click remote exploits and authentication bypasses that allow for “agent hijacking,” where an AI assistant becomes a “double agent” for a malicious actor.31 These vulnerabilities outpace traditional security playbooks, leaving CISOs in a state of “cyber risk dilemma”.48

19. Asymmetric Evaluation: Measuring Theater vs. Value

Organizations often adopt “AI theater” metrics—such as usage rates or time saved per prompt—instead of workflow outcomes.10 When leadership cannot see credible impact on the profit-and-loss statement, AI rollouts get reversed or lose funding.1 Currently, 95% of corporate generative AI pilots fail to deliver measurable financial returns because they focus on the “glitter” of the technology rather than business strategy.1

20. Cultural Legitimacy and Machine Authority Resistance

In relational domains like education, healthcare, and justice, societies often resist machine authority.10 If institutions cannot explain how an AI decision was made or how it can be appealed, adoption is capped by a lack of social trust.10 Public sentiment supports automation for “repetitive” tasks but resists its use in “sensitive” or moral decision-making roles.49

21. Workforce Backlash and the Entry-Level Stagnation

AI is reshaping labor markets by automating entry-level tasks, leading to a “hollowing out” of the junior workforce.51 For workers aged 22-25 in AI-exposed roles, employment growth has been stagnant or declining, while senior employment remains stable.12 This “early-career thinning” threatens the long-term sustainability of professional pipelines and compounds generational inequality.51

22. Unequal Value Capture and the Social Bargain Failure

The benefits of AI in 2025 appear to be flowing disproportionately from labor to capital.52 While AI-intensive firms like the “Magnificent Seven” reach record valuations, typical workers face a “productivity-pay gap” where gains do not translate into wage growth.52 Without a credible “social bargain” that ensures broad-based benefits, organizations face chronic churn, labor conflict, and political backlash.10

23. National Security Chilling and Ecosystem Fragmentation

Treating AI as strategic national infrastructure chills cross-border research and data sharing.10 When AI development is viewed through the lens of an “arms race,” safety protocols are often sacrificed for speed.25 This fragments global innovation into geopolitical blocs, where authoritarian regimes adopt open-source platforms like DeepSeek to bypass Western barriers, further eroding global alignment.28

24. Procurement and Vendor Governance Lag

Enterprise procurement and vendor governance frameworks move slowly, while AI vendors iterate weekly.10 This mismatch prevent “fast and safe” scale, as due diligence, security assessments, and legal reviews cannot keep pace with the speed of model updates.10 Consequently, organizations often choose between uncompetitive caution or ungoverned risk.

25. The Attention Economy and Institutional Capacity

Institutions only have so much capacity for change. AI must compete for budget and leadership focus with cybersecurity, digital modernization, and geopolitical shifts.10 If AI is framed as “one more transformation,” it loses the necessary institutional “oxygen” required to survive the difficult J-curve adjustment phase.10

Comparative Data Analysis of Adoption Barriers

The following tables synthesize the statistical realities of AI adoption as recorded in 2024 and 2025.

Table 1: Financial and Operational Impacts of AI Friction (2025)

Table 2: The Security and Shadow AI Landscape

Table 3: Physical Infrastructure and Environmental Frictions

Deep Insights: The Institutional and Psychological Undercurrents

The frictions identified above do not exist in isolation; they are symptoms of a deeper “Institutional Lag.” By examining the second and third-order effects of these data points, we can discern the underlying trends that will define the success or failure of AI scale.

The J-Curve and the “Digital Maturity” Filter

The most profound economic insight from 2025 research is that AI is a “capital-deepening” technology that rewards digital maturity and punishes legacy.13 The massive productivity losses (up to 44%) seen in some manufacturing firms suggest that AI is not an incremental improvement but a “structural shock”.14

Firms that have already completed a digital transformation have a “much easier ride” because their data is already structured for predictive models.13 Conversely, older firms that try to “force” generative AI into legacy processes without adaptation are the primary drivers of the 95% project failure rate.1 This creates a “Matthew Effect” in the economy: those who are already technologically rich get more productive, while those who are poor (technologically) lose what little competitive advantage they had.

Cognitive Offloading as a Systemic Competency Risk

The shift from “System 2” (deliberate) to “System 1” (automatic) thinking through AI dependency is not merely an individual psychological quirk; it is a systemic risk to institutional decision-making.24 If a generation of professionals enters the market using AI for the very tasks that build their internal “pattern recognition,” the organization loses its ability to verify the machine’s work.24

The “Google Effect” (forgetting information that can be searched) has evolved into the “AI Effect” (outsourcing the reasoning itself).21 In high-stakes fields like medicine or finance, this creates a “knowledge vacuum” where practitioners may follow an AI’s advice even when it contradicts their own (atrophied) reasoning.56 The long-term consequence is a workforce that is “operationally brittle,” capable of high output but unable to troubleshoot the system when it fails.24

The Shadow AI “Productivity Mirage”

Shadow AI creates a “productivity mirage” where individuals appear more efficient, but the organization becomes more vulnerable.4 Because this usage is hidden, the organization cannot capture the learnings or scale the benefits across the enterprise.4 Furthermore, the lack of access controls in 97% of AI-related breaches suggests that organizations are trading security for convenience.6 The $670,000 cost premium for Shadow AI breaches is a “tax” on this unmanaged innovation.8

Environmental Legitimacy as a Hard Constraint

The environmental backlash represents the arrival of “physical world reality” to a technology that was previously seen as ethereal.17 The cancellation of 25 data center projects in a single year demonstrates that “political permission” is now as important as “technological feasibility”.17 When data centers consume as much water as a town of 50,000 people, local opposition becomes a bipartisan force.19 This suggests that future AI scale will be geographically constrained to regions that can provide “legitimate” green energy and non-potable water for cooling.17

Concluding Analysis: Potential Consequences and Scenarios

The resolution or persistence of these twenty-five frictions will define the global economic landscape for the next decade.

Scenario 1: The Fractured Plateau (Persistent Friction)

If the Great Decoupling is not addressed through robust governance and a new social bargain, the world faces a “Fractured Plateau” of uneven adoption. In this scenario:

  • Economic Instability: The productivity gains of AI are confined to a 6% elite, while the rest of the economy stagnates, leading to extreme wealth concentration and social unrest.52

  • Systemic Fragility: A reliance on opaque, probabilistic systems without human verification leads to “high-scale failure cascades” in finance and infrastructure.25

  • Institutional Retreat: Cautious organizations, fearing liability and IP loss, retreat from AI, while less-accountable actors push it into everything, leading to a collapse of public trust.10

  • Generational Atrophy: A workforce of “passive consumers” lacks the critical thinking skills to manage the very systems they depend on, creating a society that is technologically advanced but cognitively enfeebled.21

Scenario 2: The Managed Transition (Resolved Friction)

If institutions can successfully “rewire” themselves to align with AI capabilities, a state of “Superagency” becomes possible.57 In this scenario:

  • Democratized Knowledge: AI acts as a “supertool” that empowers individuals to solve complex problems, regardless of their background, fostering a more inclusive and productive global economy.57

  • Human-Centric Automation: Machines perform repetitive tasks while humans focus on creativity, emotional intelligence, and ethical judgment, revitalizing the meaning of work.49

  • Robust Governance: Standardized provenance, audit regimes, and a “World Council for Cooperative Intelligence” ensure that AI systems are safe, compatible, and credible.41

  • Productivity Realization: After the initial J-curve dip, the economy experiences a significant uplift in growth, with AI helping to optimize complex systems like climate adaptation and supply chains.10

The core lesson of the “remote-work backlash” analogy remains: feasibility does not imply adoption, and adoption does not imply legitimacy.10 AI at global commercial scale requires institutions to trade legibility and control for speed and probabilistic output—a trade they will resist unless incentives are fundamentally redesigned. The future will not be decided by the intelligence of the models, but by the resilience and adaptability of the societies they inhabit.1

Works cited

  1. Why 95% of Corporate AI Projects Fail: Lessons from MIT’s 2025 Study - ComplexDiscovery, accessed February 14, 2026, https://complexdiscovery.com/why-95-of-corporate-ai-projects-fail-lessons-from-mits-2025-study/

  2. Manager Support Drives Employee AI Adoption - Gallup.com, accessed February 14, 2026, https://www.gallup.com/workplace/694682/manager-support-drives-employee-adoption.aspx

  3. The Rise of Shadow AI: Auditing Unauthorized AI Tools in the Enterprise - ISACA, accessed February 14, 2026, https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-rise-of-shadow-ai-auditing-unauthorized-ai-tools-in-the-enterprise

  4. Shadow AI and the Future of Work: What Knowledge Workers Need to Know in 2026 : r/it, accessed February 14, 2026, https://www.reddit.com/r/it/comments/1qn2f9w/shadow_ai_and_the_future_of_work_what_knowledge/

  5. 2025 State of Shadow AI Report - GitHub, accessed February 14, 2026, https://raw.githubusercontent.com/jacobdjwilson/awesome-annual-security-reports/main/Annual%20Security%20Reports/2025/Reco-Shadow-AI-Report-2025.pdf

  6. Cost of a Data Breach Report 2025 The AI Oversight Gap - Baker Donelson, accessed February 14, 2026, https://www.bakerdonelson.com/webfiles/Publications/20250822_Cost-of-a-Data-Breach-Report-2025.pdf

  7. The AI Oversight Gap: IBM’s 2025 Data Breach Report Reveals Hidden Costs of Ungoverned AI | Jones Walker LLP, accessed February 14, 2026, https://www.joneswalker.com/en/insights/blogs/ai-law-blog/the-ai-oversight-gap-ibms-2025-data-breach-report-reveals-hidden-costs-of-ungov.html?id=102l0sf

  8. AI & Cloud Security Breaches: 2025 Year in Review - Reco AI, accessed February 14, 2026, https://www.reco.ai/blog/ai-and-cloud-security-breaches-2025

  9. Shadow AI: The emerging security threat in IBM’s 2025 Cost of a Data Breach Report, accessed February 14, 2026, https://www.nudgesecurity.com/post/shadow-ai-the-emerging-security-threat-in-ibms-2025-cost-of-a-data-breach-report

  10. The 25 Frictions That Will Decide Whether AI Scales Worldwide.docx

  11. AI’s War in the Courtroom: Copyright Disputes Spike in 2025 - Best Law Firms, accessed February 14, 2026, https://www.bestlawfirms.com/articles/ai-war-in-the-courtroom-copyright-disputes-spike-in-2025/7186

  12. AI, Productivity, and Labor Markets: A Review of the Empirical Evidence, accessed February 14, 2026, https://laweconcenter.org/resources/ai-productivity-and-labor-markets-a-review-of-the-empirical-evidence/

  13. The ‘productivity paradox’ of AI adoption in manufacturing firms - MIT Sloan, accessed February 14, 2026, https://mitsloan.mit.edu/ideas-made-to-matter/productivity-paradox-ai-adoption-manufacturing-firms

  14. The Rise of Industrial AI in America: Microfoundations of the Productivity J-curve(s) - American Economic Association, accessed February 14, 2026, https://www.aeaweb.org/conference/2026/program/paper/Z5DzGsQy

  15. The Artificial Intelligence Liability Directive, accessed February 14, 2026, https://www.ai-liability-directive.com/

  16. AI as product vs. AI as service: Unpacking the liability divide in EU safety legislation | IAPP, accessed February 14, 2026, https://iapp.org/news/a/ai-as-product-vs-ai-as-service-unpacking-the-liability-divide-in-eu-safety-legislation

  17. Scoop: Local Pushback, Canceled Data Centers Surged in 2025 - Heatmap News, accessed February 14, 2026, https://heatmap.news/politics/data-center-cancellations-2025

  18. More than 200 environmental groups demand halt to new US datacenters - The Guardian, accessed February 14, 2026, https://www.theguardian.com/us-news/2025/dec/08/us-data-centers

  19. Communities Push Back Against AI Data Center Expansion - Project Censored, accessed February 14, 2026, https://www.projectcensored.org/communities-against-ai-data-center/

  20. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, accessed February 14, 2026, https://www.mdpi.com/2075-4698/15/1/6

  21. AI Weakens Critical Thinking. This Is How to Rebuild It | Psychology Today, accessed February 14, 2026, https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202505/ai-weakens-critical-thinking-and-how-to-rebuild-it

  22. accessed February 14, 2026, https://www.mdpi.com/2075-4698/15/1/6#:~:text=The%20long%2Dterm%20reliance%20on,term%20memory%20and%20cognitive%20health.

  23. AI’s cognitive implications: the decline of our thinking skills? - IE, accessed February 14, 2026, https://www.ie.edu/center-for-health-and-well-being/blog/ais-cognitive-implications-the-decline-of-our-thinking-skills/

  24. Early research suggests AI tools may impair critical cognitive skills instead of building them, accessed February 14, 2026, https://www.milwaukeeindependent.com/syndicated/early-research-suggests-ai-tools-may-impair-critical-cognitive-skills-instead-building/

  25. AI Risks that Could Lead to Catastrophe - Center for AI Safety (CAIS), accessed February 14, 2026, https://safe.ai/ai-risk

  26. Navigating GPU Export Controls and AI Use Restrictions in Data Center Operations, accessed February 14, 2026, https://www.gtlaw.com/en/insights/2025/12/navigating-gpu-export-controls-and-ai-use-restrictions-in-data-center-operations

  27. Should the US Sell Blackwell Chips to China? - IFP, accessed February 14, 2026, https://ifp.org/the-b30a-decision/

  28. Global AI Adoption in 2025 – AI Economy Institute - Microsoft, accessed February 14, 2026, https://www.microsoft.com/en-us/corporate-responsibility/topics/ai-economy-institute/reports/global-ai-adoption-2025/

  29. Advisory Scientific Committee No 16 / December 2025 - European ..., accessed February 14, 2026, https://www.esrb.europa.eu/pub/pdf/asc/esrb.ascreport202512_AIandsystemicrisk.en.pdf

  30. AI Agentic Ecosystem’s Adaptive Governance with DuPont Accounting - Medium, accessed February 14, 2026, https://medium.com/@oracle_43885/ai-agentic-ecosystems-adaptive-governance-with-dupont-accounting-1d3b6ed7f64f

  31. Assuring Intelligence: Why Trust Infrastructure Is the United States’ AI Advantage, accessed February 14, 2026, https://www.cfr.org/articles/assuring-intelligence-why-trust-infrastructure-is-the-united-states-ai-advantage

  32. An Empirical Characterization of Outages and Incidents in Public Services for Large Language Models - arXiv, accessed February 14, 2026, https://arxiv.org/html/2501.12469v1

  33. The Epistemic Risks of AI-Only Science - Kevin’s Homepage, accessed February 14, 2026, https://kjablonka.com/blog/posts/ai_scientists/

  34. Rethinking Intelligence Power and Epistemic Authority in the Age of Superhuman AI, accessed February 14, 2026, https://www.sciencepublishinggroup.com/article/10.11648/j.ijsts.20251304.14

  35. M&A in 2024 and Trends for 2025 | Morrison Foerster, accessed February 14, 2026, https://www.mofo.com/resources/insights/250109-m-a-in-2024-and-trends-for-2025

  36. AI Due Diligence in M&A: The Hidden Risks That Could Tank Your Deal, accessed February 14, 2026, https://sentrytechsolutions.com/blog/ai-due-diligence-in-ma-the-hidden-risks-that-could-tank-your-deal

  37. “AI Related M&A Risks: Acquiring Hidden Liabilities from AI Models” - Shumaker, Loop & Kendrick, LLP, accessed February 14, 2026, https://www.shumaker.com/insight/ai-related-ma-risks-acquiring-hidden-liabilities-from-ai-models/

  38. Understanding Resistance to Organizational AI Adoption Dominic Frank Regent University Roundtable: Artificial Intelligence, accessed February 14, 2026, https://www.regent.edu/wp-content/uploads/2025/01/Regent-Research-Roundtables-2025-Artificial-Intelligence-Frank.pdf

  39. Is there still value in the role of managers? - Deloitte, accessed February 14, 2026, https://www.deloitte.com/us/en/insights/topics/talent/human-capital-trends/2025/future-of-the-middle-manager.html

  40. Organizational Resistance to Artificial Intelligence - reposiTUm, accessed February 14, 2026, https://repositum.tuwien.at/bitstream/20.500.12708/224256/1/Aziz%20Baker%20Hoshyar%20-%202025%20-%20Organizational%20Resistance%20to%20Artificial%20Intelligence.pdf

  41. Building trust in AI through a new global governance framework, accessed February 14, 2026, https://www.weforum.org/stories/2025/11/trust-ai-global-governance/

  42. EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act, accessed February 14, 2026, https://artificialintelligenceact.eu/

  43. $64 billion of data center projects have been blocked or delayed amid local opposition, accessed February 14, 2026, https://www.datacenterwatch.org/report

  44. EXCLUSIVE REPORT - How to Keep Generative AI from Crashing in Combat: Strategic, Technical and Operational Safeguards for Military Resilience - https://debuglies.com, accessed February 14, 2026, https://debuglies.com/2025/08/26/exclusive-report-how-to-keep-generative-ai-from-crashing-in-combat-strategic-technical-and-operational-safeguards-for-military-resilience/

  45. Why Most Agentic Architectures Will Fail - Information Week, accessed February 14, 2026, https://www.informationweek.com/machine-learning-ai/why-most-agentic-architectures-will-fail

  46. AI trends 2025: Adoption barriers and updated predictions - Deloitte, accessed February 14, 2026, https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/blogs/pulse-check-series-latest-ai-developments/ai-adoption-challenges-ai-trends.html

  47. Exec Concerns With AI Adoption Aug 23 v4 - SSRN, accessed February 14, 2026, https://papers.ssrn.com/sol3/Delivery.cfm/5405290.pdf?abstractid=5405290&mirid=1

  48. Cyber Pulse: An AI Security Report | Security Insider - Microsoft, accessed February 14, 2026, https://www.microsoft.com/en-us/security/security-insider/emerging-trends/cyber-pulse-ai-security-report

  49. The Annual AI Governance Report 2025: Steering the Future of AI - ITU, accessed February 14, 2026, https://www.itu.int/epublications/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai

  50. Artificial Intelligence - Stanford Emerging Technology Review, accessed February 14, 2026, https://setr.stanford.edu/technology/artificial-intelligence/2025

  51. Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence - Stanford Digital Economy Lab, accessed February 14, 2026, https://digitaleconomy.stanford.edu/app/uploads/2025/11/CanariesintheCoalMine_Nov25.pdf

  52. SUMMARY - FII Institute, accessed February 14, 2026, https://fii-institute.org/wp-content/uploads/2026/01/8072-R3-Macroeconomics-Report1-004.pdf

  53. The State of AI in 2024-2025: What McKinsey’s Latest Report Reveals About Enterprise Adoption - PUNKU.AI Blog, accessed February 14, 2026, https://www.punku.ai/blog/state-of-ai-2024-enterprise-adoption

  54. BARRIERS TO AI ADOPTION | TI People, accessed February 14, 2026, https://www.ti-people.com/wp-content/uploads/2025/05/Barriers-to-AI-Adoption.pdf

  55. Deep Dive: Export Controls and the AI Race | Contrary Research, accessed February 14, 2026, https://research.contrary.com/report/drawing-geopolitical-boundaries

  56. Protecting Human Cognition in the Age of AI - arXiv, accessed February 14, 2026, https://arxiv.org/html/2502.12447v1

  57. AI in the workplace: A report for 2025 | McKinsey, accessed February 14, 2026, https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

  58. How AI is Reshaping Work and Human Psychology - Loyola University Chicago, accessed February 14, 2026, https://www.luc.edu/quinlan/whyquinlan/centersandlabs/labforappliedartificialintelligence/research/2025/3rdquarter2025/howaiisreshapingworkandhumanpsychology/