
10 of 13 AI companies disclose none of the key environmental indicators, including energy usage, water consumption, or carbon emissions.

This is shocking given the documented strain that hyperscale datacenters place on local grids and water systems.

Transparency on the Brink — The Troubling Signals from the 2025 Foundation Model Transparency Index

by ChatGPT-5.1

The 2025 Foundation Model Transparency Index (FMTI) offers one of the clearest—and starkest—diagnostics of the state of transparency in the AI industry. What makes this edition especially striking is not only the data it presents, but the story it tells about systemic regression, widening gaps between companies, and the growing tension between public commitments to “open AI” and the reality of sealed black-box development. Several findings stand out as surprising, controversial, or uniquely valuable for policymakers, rights-holders, and society.

1. The Most Surprising Findings

A historic decline in transparency—during a period of unprecedented AI adoption

The Index reveals that average transparency scores have collapsed from 58/100 in 2024 to just 40/100 in 2025. This fall is particularly counterintuitive because AI systems are now embedded in search, productivity tools, entertainment, and creativity platforms used by billions. Logically, growing impact should correlate with increased accountability. Instead, the opposite is happening—a signal that market incentives strongly favour secrecy over openness.

The complete opacity around environmental impact

Perhaps the most startling finding is that 10 of 13 companies disclose none of the key environmental indicators, including energy usage, water consumption, or carbon emissions. This is shocking given the documented strain that hyperscale datacenters place on local grids and water systems. The public debate around sustainability has intensified, yet leading AI firms are withholding the basic data needed for environmental governance.

Open-source does not mean transparent

The report makes an empirically grounded point that will surprise many: major open-weights developers, including Meta, DeepSeek, and Alibaba, are “quite opaque,” scoring near the bottom of the Index despite releasing model weights. This challenges a widespread assumption in technical and policy circles—that openness of model artifacts automatically confers clarity about corporate behaviour. It does not. The decision to publish weights tells us nothing about training data provenance, safety processes, compute usage, or downstream monitoring.

2. The Most Controversial Findings

Meta and OpenAI’s dramatic collapse: from top performers to bottom-tier transparency

Meta and OpenAI—arguably the two most influential Western AI developers—fell from the top of the Index in 2023 to the bottom in 2025, scoring 31/100 and 35/100 respectively. These declines reflect missing or delayed technical reports, the absence of documentation for flagship models (e.g., no published report for Meta's Llama 4), and unfulfilled public transparency commitments. This reversal raises contentious questions about whether competitive pressures, increased commercialisation, or litigation risks are driving formerly transparent firms into secrecy.

The deepening gap between IBM and everybody else

IBM scored 95/100, the highest in the Index’s history—far above the industry average. IBM uniquely provides replicable training data descriptions and grants access for external auditors. This sets an uncomfortable precedent: if one major company can disclose so much without undermining competitiveness, why can’t others? IBM’s transparency therefore becomes a benchmark that implicitly exposes other firms’ opacity as a choice, not a necessity.

The Index shows that transparency is driven by corporate preference—not by regulation

The report states that transparency outcomes “are primarily determined by the extent to which individual companies choose to prioritize transparency,” not by industry incentives or regulatory pressure. This is a controversial conclusion because it undercuts arguments by several AI companies that transparency is restricted by external factors (national security, IP, safety risks, etc.). In practice, transparency appears to be voluntary—and declining rapidly.

3. The Most Valuable Findings

Four areas of systemic opacity define the future regulatory agenda

The Index identifies four domains where transparency is consistently lacking across almost all companies:

  1. Training data provenance

  2. Training compute and energy use

  3. Downstream use and monitoring

  4. Societal impact

These are not merely gaps; they are structural blind spots that undermine copyright governance, model accountability, risk assessment, and sustainability oversight. Their persistence for three consecutive years signals where regulation is most urgently needed.

A rare mapping of transparency across 15 dimensions

The table on page 5 provides granular visibility into where each company does or does not disclose information—across data acquisition, model capabilities, risk mitigations, auditing, economic impact, and more. This is immensely valuable for lawmakers, researchers, and rights-holders because it transforms an abstract debate (“AI is opaque”) into an evidence-based analysis of which firms disclose what, and where systemic failures lie.

Clarifying the difference between openness and transparency

The distinction is one of the most practically important contributions of the report. It dispels the myth that releasing model weights is sufficient for accountability, thereby preventing policymakers from conflating the two concepts. The implication is profound: open-source AI may still be ungovernable if upstream opacity persists.

Conclusion: A Warning Signal and a Policy Blueprint

The 2025 Foundation Model Transparency Index is more than a scorecard—it is a warning. Transparency is declining at the very moment when foundation models are becoming infrastructure for communication, creativity, scientific discovery, and political discourse. The most powerful AI companies are voluntarily reducing visibility into how their systems are built and deployed. Without regulatory frameworks, market dynamics alone will not reverse this trend.

But the Index is also a blueprint: it identifies the precise dimensions where transparency must be mandated, provides comparative evidence that disclosure is feasible (IBM proves it), and exposes the gaps that matter most for societal oversight. For lawmakers, regulators, rights-holders, and civil society, this report is an indispensable tool in shaping the next phase of AI governance.