• Pascal's Chatbot Q&As

GAO Report: Generative AI—like ChatGPT or image creators—can change how we work and live, but it uses a lot of energy and water and raises serious risks for people and society.

A single AI model can evaporate 700,000 liters of fresh water—about 25% of an Olympic pool. Companies often don’t report how much energy or water they use—citing competitive concerns.

Asking ChatGPT-4o: Please analyze the report “Generative AI's Environmental and Human Effects” and tell me what it says in easy to understand language. Also, list the most surprising, controversial and valuable statements made in the report. Conclude by listing recommendations for regulators, AI makers and businesses that use AI-based technologies.

ChatGPT-4o’s answer:
Here’s a simplified summary and key takeaways from the April 2025 GAO report, “Generative AI’s Environmental and Human Effects” (GAO-25-107172):

🌍 What the Report Says

Generative AI—like ChatGPT or image creators—can change how we work and live, but it uses a lot of energy and water and raises serious risks for people and society. The GAO analyzed its impact and found:

🛠 Environmental Effects

  • Training AI takes lots of electricity and water. Some models can use thousands of megawatt-hours—enough to power hundreds of homes—and evaporate hundreds of thousands of liters of water.

  • Data centers supporting AI are growing fast. They already use about 4% of U.S. electricity; this could rise to 6–12% by 2028.

  • Not enough is known. Companies don’t share detailed info, making it hard to measure or manage environmental harm.

  • Building and disposing of AI hardware (like GPUs) also has a big carbon footprint, often overlooked.

👥 Human Effects

The report identifies five main risks to people:

  1. Unsafe outputs – like hallucinated facts or harmful advice.

  2. Lack of data privacy – personal info may be leaked from training data.

  3. Cybersecurity threats – AI can be hacked or misused for cyberattacks.

  4. Bias and unfair outcomes – models can reinforce harmful stereotypes.

  5. No clear accountability – it’s hard to tell who’s responsible when things go wrong.

AI might also:

  • Replace entry-level jobs, making it harder for people to start careers.

  • Be misused in public services, leading to errors or discrimination.

  • Widen social inequalities if only some people benefit.

❗ Most Surprising, Controversial, and Valuable Statements

😮 Surprising

  • A single AI model can evaporate 700,000 liters of fresh water—about 25% of an Olympic pool.

  • Using AI for a search may take 10× more energy than a regular Google search.
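The pool comparison is easy to sanity-check. A minimal sketch, assuming a standard Olympic pool (50 m × 25 m, 2 m deep, i.e. 2.5 million liters):

```python
# Check the "about 25% of an Olympic pool" comparison.
# Assumed pool dimensions: 50 m x 25 m x 2 m deep (standard Olympic size).
pool_liters = 50 * 25 * 2 * 1000  # 1 cubic meter = 1,000 liters
water_evaporated = 700_000        # liters, per the GAO report

share = water_evaporated / pool_liters
print(f"{share:.0%}")  # prints "28%", i.e. roughly a quarter of the pool
```

So 700,000 liters works out to about 28% of the pool's volume, consistent with the report's "about 25%" framing.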

⚠️ Controversial

  • Companies often don’t report how much energy or water they use—citing competitive concerns.

  • Developers admit their models are unreliable but still release them for public use.

💡 Valuable

  • Efficiency improvements (like better algorithms and cooling systems) may help, but could backfire by making AI cheaper and increasing demand (the “Jevons paradox”).

  • The report stresses the urgent need for better transparency and accountability frameworks.

✅ Recommendations for Regulators, AI Makers, and Businesses

For Regulators:

  • Improve transparency: Require AI developers to disclose energy, emissions, and water use.

  • Adopt and enforce AI frameworks: Use tools like GAO’s AI Accountability Framework or NIST’s Risk Management Framework.

  • Set standards: Work with industry to define best practices for environmental and human safety.

For AI Developers:

  • Build cleaner AI: Design models that need less energy and water; explore low-carbon power like nuclear.

  • Improve safety and fairness: Use more robust testing, red-teaming, and human feedback loops.

  • Be transparent: Release info about model training, data sources, and risks.

For Businesses Using AI:

  • Evaluate AI’s full impact: Consider environmental cost, bias, and accountability before deployment.

  • Use AI responsibly: Apply internal governance and employee training to avoid overreliance or misuse.

  • Share best practices: Collaborate across sectors to develop ethical and sustainable AI policies.