Bor Gregorcic’s study exposes a critical blind spot in the AI ecosystem: while models can generate videos that “look” real, they may not behave according to the laws of physics.

This has profound implications for AI development, education, and public trust. The paper is a call to action for responsible AI development and scientifically literate AI use.

Do AI-Generated Videos Obey Physics Laws? A Critical Review of Bor Gregorcic’s 2025 Study

by ChatGPT-4o

Introduction

As artificial intelligence tools increasingly permeate visual media, a new frontier emerges: AI-generated video. While these outputs may look realistic to the untrained eye, do they actually obey the laws of physics? This question forms the basis of Bor Gregorcic’s 2025 paper “Do AI-Generated Videos Obey Physics Laws?”, published in Physics Education. The study describes a fascinating, low-cost classroom activity that compares real-world physics videos with AI-generated counterparts produced by Google’s Veo 2 model, and it highlights serious limitations in AI’s ability to replicate basic physics. The implications stretch far beyond education—touching on media trust, disinformation, AI governance, and model development.

Key Findings and Observations

1. Surprising Finding: AI’s Failure to Simulate Air Resistance

Despite being trained on enormous video datasets, the AI failed to simulate a core physical principle: differential falling speeds due to air resistance. In the real video, a rubber ball fell faster than a lightweight cupcake liner. In contrast, the AI-generated video showed both objects hitting the ground almost simultaneously—a basic violation of Newtonian physics.

📌 Why it matters: This suggests that current video-generation models are not inferring generalized physics laws, but instead mimicking surface-level visual correlations from their training data.
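
To make the contrast concrete, the expected behavior can be reproduced with a few lines of numerical integration. The sketch below is illustrative only: the masses, drag coefficients, and 1 m drop height are rough assumptions made for this review, not values taken from the paper, but they show why a dense ball should land noticeably before a lightweight cupcake liner.

```python
def fall_time(mass_kg, drag_coeff, height_m=1.0, g=9.81, dt=1e-4):
    """Integrate a vertical fall with quadratic air drag: m*dv/dt = m*g - c*v**2."""
    y, v, t = height_m, 0.0, 0.0
    while y > 0.0:
        a = g - (drag_coeff / mass_kg) * v * v  # net downward acceleration
        v += a * dt
        y -= v * dt
        t += dt
    return t

# Rough, assumed values: a small rubber ball vs. a paper cupcake liner, dropped from 1 m.
print(f"ball:  {fall_time(mass_kg=0.050, drag_coeff=8e-4):.2f} s")    # ~0.45 s
print(f"liner: {fall_time(mass_kg=0.001, drag_coeff=4.5e-3):.2f} s")  # ~0.78 s
```

With these assumed values the ball lands in roughly 0.45 s while the liner takes closer to 0.8 s, a gap large enough that the simultaneous landing in the AI-generated clip is a qualitative error rather than a subtle one.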

2. Controversial Insight: AI Models Can Be Visually Accurate but Physically Wrong

The AI-generated video may visually resemble real footage to casual viewers but is physically inconsistent in multiple ways:

  • Sudden and erratic horizontal velocity changes.

  • Unexplained sideways drifts.

  • Jerky motion and frame stalling mid-fall.

  • Slower overall fall time, possibly due to miscalculating gravitational acceleration or generating a pseudo slow-motion effect (a quick quantitative check follows below).

📌 Why it matters: In an era where AI videos are increasingly used in media and communication, viewers may trust what they see—even when the content violates natural laws. This highlights a risk of misinformation and loss of epistemic trust in visual evidence.
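
The “slower overall fall time” observation can likewise be turned into a quick sanity check: for a drop from height h with negligible air drag, the fall time is t = √(2h/g), so a clip that takes noticeably longer is effectively depicting weaker gravity or pseudo slow motion. The drop height and measured fall time below are hypothetical placeholders, not measurements from the study.

```python
import math

g = 9.81            # m/s^2
h = 1.0             # assumed drop height in meters

t_expected = math.sqrt(2 * h / g)     # ~0.45 s if air drag is negligible
t_observed = 0.70                     # hypothetical fall time read frame by frame off an AI clip

g_implied = 2 * h / t_observed ** 2   # the gravitational acceleration the clip effectively depicts
print(f"expected fall time: {t_expected:.2f} s")
print(f"implied g in clip:  {g_implied:.1f} m/s^2")   # ~4.1 m/s^2, far from 9.8
```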

3. Valuable Insight: Video Analysis as AI Forensics Tool

Gregorcic advocates using physics video analysis tools (e.g., Tracker Online) to teach students critical thinking and AI literacy. By analyzing discrepancies between AI-generated and real footage, learners practice a form of digital forensics, assessing authenticity through scientific principles.

📌 Why it matters: This approach empowers students and educators to spot deep fakes and disinformation, making science education a frontline defense against AI misuse.
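
In practice, this kind of forensic check often comes down to fitting tracked position-time data and reading off the acceleration. The sketch below assumes the (t, y) samples have already been exported from a video-analysis tool such as Tracker; the sample values are fabricated for illustration and correspond to genuine free fall.

```python
import numpy as np

# Hypothetical (t, y) samples, e.g. as exported from a video-analysis tool.
t = np.array([0.00, 0.10, 0.20, 0.30, 0.40])       # s
y = np.array([1.000, 0.951, 0.804, 0.559, 0.216])  # m (fabricated to follow y = 1 - 4.9*t**2)

# Fit y(t) = y0 + v0*t + 0.5*a*t**2 and read off the acceleration.
half_a, v0, y0 = np.polyfit(t, y, 2)
a_fit = 2 * half_a

print(f"fitted acceleration: {a_fit:.1f} m/s^2")   # about -9.8 for genuine free fall
```

For real footage of a dense object, the fitted acceleration should come out near -9.8 m/s²; a markedly different value, or motion that fits no smooth parabola at all, is a red flag worth investigating.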

4. Valuable Framework: ‘World Models’ vs. LLMs

The paper draws a clear distinction between large language models (LLMs) like ChatGPT and “world models” trained to predict physical phenomena. While LLMs predict the next word in a sentence, world models aim to predict real-world dynamics over time.

📌 Why it matters: Understanding this difference is essential for AI developers. Realistic video simulation requires more than mimicking appearances—it demands integration of physical laws, sensor data, and embodied reasoning.
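
A toy contrast may help make the distinction tangible. The names and types below are purely illustrative and do not reflect any real system’s interface; the point is simply what each kind of model is asked to predict.

```python
from dataclasses import dataclass

def next_token(tokens: list[str]) -> str:
    """LLM-style prediction: a sequence of symbols in, the most likely next symbol out."""
    return "<next-word>"  # stand-in for sampling from a learned distribution over words

@dataclass
class State:
    height_m: float
    speed_m_s: float      # downward speed

def next_state(s: State, dt: float = 1 / 30, g: float = 9.81) -> State:
    """World-model-style prediction: a physical state in, the state one time step later out."""
    return State(height_m=s.height_m - s.speed_m_s * dt, speed_m_s=s.speed_m_s + g * dt)

print(next_token(["the", "ball", "then"]))

# A world model can be rolled forward in time, producing a trajectory that physics can check.
s = State(height_m=1.0, speed_m_s=0.0)
for _ in range(3):
    s = next_state(s)
print(s)
```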

Why This Topic Is Important

AI-generated videos are no longer a novelty—they're increasingly used in:

  • Education

  • Social media

  • Advertising

  • Political campaigns

  • Criminal trials

As Gregorcic notes, the rise of deepfakes and AI-generated disinformation makes it vital that society develops mechanisms to assess whether video content is real or fabricated. The inability of today’s models to follow physics laws—even in trivial falling motion—suggests current AI is far from achieving embodied or physically grounded intelligence.

The danger is that as AI models improve in surface realism, their failures become harder to spot, especially when people lack scientific literacy. Therefore, embedding physics-informed AI literacy into education, journalism, and forensic investigation becomes a critical safeguard.

Recommendations

For AI Makers:

  1. Integrate Physics Engines or Constraints into Video Models
    Train models not just on visual similarity but also on physical accuracy, using real-world simulations and sensor data (a toy sketch of such a constraint follows this list).

  2. Collaborate with Physicists and Cognitive Scientists
    Adopt interdisciplinary approaches to design models that better approximate the behavior of the real world.

  3. Tag AI Outputs with Reliability Metadata
    Include "confidence scores" or warning labels indicating the model’s likely accuracy in reproducing physical dynamics.

  4. Prevent Model Overfitting on Contextual Bias
    Ensure models don’t rely on surface features (e.g. object color, background) to predict motion but instead learn causal relationships.
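
As a sketch of what recommendation 1 could look like in practice, the function below scores a generated falling-object clip for physics consistency. It is a hypothetical toy penalty written for this review, not anything used by Veo 2 or described in the paper: it compares the frame-to-frame acceleration of a tracked object against g and could, in principle, be added to a model’s usual visual-similarity loss.

```python
import numpy as np

def free_fall_penalty(heights, fps=30.0, g=9.81):
    """Hypothetical physics-consistency penalty for a generated falling-object clip.

    `heights` holds the tracked object's height (m) in each frame. The frame-to-frame
    acceleration is compared against -g, and the mean squared error is returned; a
    training loop could, in principle, add this to its usual visual-similarity loss.
    """
    h = np.asarray(heights, dtype=float)
    dt = 1.0 / fps
    accel = np.diff(h, n=2) / dt ** 2        # discrete second derivative of height
    return float(np.mean((accel + g) ** 2))  # zero when the clip depicts true free fall

# A clip that falls too slowly (pseudo slow motion) is penalized more than a correct one.
t = np.arange(0, 0.45, 1 / 30)
print(free_fall_penalty(1.0 - 0.5 * 9.81 * t ** 2))  # ~0    (physically consistent)
print(free_fall_penalty(1.0 - 0.5 * 4.0 * t ** 2))   # large (gravity far too weak)
```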

For AI Users (Educators, Journalists, Policymakers, Public):

  1. Develop Critical AI Literacy Skills
    Understand that not all AI-generated visuals are scientifically plausible. Visual realism ≠ truth.

  2. Use Tools Like Tracker Online
    Learn how to analyze video motion, velocity, and acceleration using open-source tools to spot inconsistencies.

  3. Treat AI-Generated Videos as Suspect by Default
    Especially in legal, journalistic, or scientific contexts, assume AI content requires independent verification.

  4. Incorporate AI Forensics into Curricula
    Teach students how to evaluate visual evidence through physics—building skepticism into education as a safeguard.

Conclusion

Bor Gregorcic’s study exposes a critical blind spot in the AI ecosystem: while models can generate videos that “look” real, they may not behave according to the laws of physics. This has profound implications for AI development, education, and public trust. The paper is more than a physics lesson—it is a call to action for responsible AI development and scientifically literate AI use. If left unaddressed, the mismatch between appearance and physical truth may lead to a world where falsehoods are not just spoken—but vividly animated.