Creativity in diffusion models doesn’t come from any deliberate design to be “imaginative”—it stems from imperfections in the process these models use to construct images from noise.
Just because an AI produces something that looks new doesn’t mean it understands what it's doing. It’s not “inspired,” it’s just following rules that happen to lead to novel results.
Demystifying AI’s Creativity — What Recent Discoveries Mean for Responsible AI Development
by ChatGPT-4o
Artificial intelligence (AI) has dazzled the public with its apparent ability to “create”—producing artworks, poetry, or music that feels novel and even imaginative. But where does this “creativity” come from? A recent study, as reported by WIRED, helps answer this question, and the findings have big implications for developers, ethicists, and anyone invested in building AI responsibly.
1. The Puzzle of AI Creativity
When AI image generators like DALL·E or Stable Diffusion create strange, surreal images (people with extra fingers, dreamlike landscapes), it’s tempting to call this “creativity.” Yet these systems are trained to replicate data, not invent new ideas. They start from a fixed corpus of training material (millions of existing images) and are supposed to imitate what they’ve seen. So how do they end up inventing something new?
That was the paradox scientists sought to solve: If AI is only mimicking training data, why is it producing outputs that seem original?
2. Discovery: Creativity as an Accident of the Machine
The key breakthrough came when two researchers—Mason Kamb and Surya Ganguli—figured out that the AI’s “creativity” isn’t a mystery or magic. It’s an inevitable side effect of how image-generating AIs are built. Their study, presented at the International Conference on Machine Learning 2025, revealed that creativity in diffusion models doesn’t come from any deliberate design to be “imaginative”—it stems from imperfections in the process these models use to construct images from noise.
Imagine taking a clear photo and turning it into static, like TV snow. Then imagine trying to reconstruct the photo from that static. AI image generators do something similar: they start with noise and “denoise” it step-by-step to recreate a picture. But during this process, they pay attention only to small patches of an image, not the big picture. These patches are then stitched together based on local rules, without full knowledge of the final image's structure. This results in something that looks new—sometimes surprisingly beautiful, sometimes bizarre.
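To make the locality point concrete, here is a minimal Python sketch (NumPy only). It is not a real diffusion sampler, and the function name, step count, and patch size are invented for illustration; it simply shows what it means for every update to be computed from a small neighborhood rather than from the whole image.

```python
import numpy as np

def denoise_locally(noisy, steps=50, patch=3):
    """Repeatedly nudge each pixel toward an estimate computed from its
    small local neighborhood only. A stand-in for the local estimates
    diffusion models compute, not an actual diffusion sampler."""
    img = noisy.copy()
    pad = patch // 2
    h, w = img.shape
    for _ in range(steps):
        padded = np.pad(img, pad, mode="reflect")
        local_estimate = np.zeros_like(img)
        for i in range(h):
            for j in range(w):
                # No pixel ever sees the whole image, only its patch.
                local_estimate[i, j] = padded[i:i + patch, j:j + patch].mean()
        img = 0.9 * img + 0.1 * local_estimate  # small step toward the estimate
    return img

rng = np.random.default_rng(0)
result = denoise_locally(rng.normal(size=(32, 32)))  # start from pure noise
```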
3. Turing Patterns and Human Development
To explain how such local actions can create complex results, the researchers looked to biology. When human embryos grow, individual cells don’t know the full body plan; instead, they follow local chemical signals to decide what to become, a process that sometimes fails, as when extra fingers form. Alan Turing described how such purely local chemical interactions can generate global structure, producing what are now called “Turing patterns.” The same principle applies to AI: local decision-making, governed by simple shared rules, produces unexpected (and sometimes creative) results.
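This emergence of global order from local rules is easy to demonstrate. The sketch below simulates the Gray-Scott reaction-diffusion system, a standard model of Turing patterns; the parameter values are common textbook choices, not figures from the study. Each grid cell updates using only its four neighbors, yet coherent spots and stripes form across the whole grid.

```python
import numpy as np

def gray_scott(n=100, steps=5000, F=0.037, k=0.06, Du=0.16, Dv=0.08):
    """Gray-Scott reaction-diffusion on an n-by-n grid with periodic
    boundaries. Every update is strictly local, yet global spot and
    stripe patterns emerge: Turing's insight in miniature."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    m = n // 2
    # Seed a small square of the second chemical in the center.
    U[m - 5:m + 5, m - 5:m + 5] = 0.25
    V[m - 5:m + 5, m - 5:m + 5] = 0.50

    def laplacian(Z):
        # Each cell interacts with its four neighbors only.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V  # local reaction term
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return V

pattern = gray_scott()  # inspect with e.g. matplotlib.pyplot.imshow(pattern)
```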
4. The Equivariant Local Score (ELS) Machine
To test their theory, the researchers built a mathematical simulation called the ELS machine. This wasn’t a traditional AI model trained on data. Instead, it was a system that mimicked the local decision-making and spatial rules used in diffusion models. Astonishingly, its outputs matched those of real AI models about 90% of the time, suggesting that this form of “creativity” isn’t the result of high-level thinking but of local rule-following. The illusion of originality, then, is a structural by-product of how these models function.
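The actual ELS machine is a mathematical construction laid out in the paper; the sketch below is only a loose, hypothetical analogue of its two ingredients, locality and translational equivariance (the same rule applied at every position). It predicts each pixel of a noisy image by a kernel-weighted average over training patches, so the output is assembled from local fragments of the training data with no global plan. All names and parameters are invented for this illustration.

```python
import numpy as np

def patch_kernel_denoiser(noisy, train_imgs, patch=5, sigma=0.5):
    """Hypothetical toy analogue (not the authors' ELS machine):
    predict each pixel of a noisy image from training patches alone.
    The rule is local (only a patch-sized window is consulted) and
    translation-invariant (the same rule applies at every pixel)."""
    pad = patch // 2
    patches, centers = [], []
    for img in train_imgs:
        padded = np.pad(img, pad, mode="reflect")
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                patches.append(padded[i:i + patch, j:j + patch].ravel())
                centers.append(img[i, j])
    patches = np.asarray(patches)   # all local contexts seen in training
    centers = np.asarray(centers)   # the clean pixel at each context's center

    out = np.zeros_like(noisy)
    padded = np.pad(noisy, pad, mode="reflect")
    h, w = noisy.shape
    for i in range(h):
        for j in range(w):
            q = padded[i:i + patch, j:j + patch].ravel()
            d2 = ((patches - q) ** 2).sum(axis=1)
            weights = np.exp(-d2 / (2 * sigma ** 2))
            # Kernel-weighted vote among training patches: purely local,
            # so stitched-together outputs can be novel (or grotesque).
            out[i, j] = weights @ centers / (weights.sum() + 1e-12)
    return out

rng = np.random.default_rng(1)
train = [rng.random((12, 12)) for _ in range(3)]   # toy "dataset"
noisy = train[0] + 0.3 * rng.normal(size=(12, 12))
estimate = patch_kernel_denoiser(noisy, train)
```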
5. Implications for Responsible AI
This finding has several important consequences:
- Creativity ≠ Understanding: Just because an AI produces something that looks new doesn’t mean it understands what it’s doing. It isn’t “inspired”; it’s following rules that happen to lead to novel results. This helps ground public and policy conversations about the true nature of AI intelligence.
- Transparency in AI Systems: By mathematically explaining where creativity comes from, researchers make AI systems more understandable and less like a black box. This is crucial for ethics, accountability, and safe deployment.
- Avoiding Overhype: Many tech companies market AI as creative or even sentient. This research reminds us that AI’s outputs can be impressive without implying consciousness or intent. Developers and marketers should resist exaggerating what AI can actually do.
- Risk Mitigation: Unintended creativity can lead to mistakes, like grotesque images or inappropriate content. Understanding how these outputs emerge allows developers to build safeguards and corrections into future systems.
- Bridging Human and Machine Thinking: Interestingly, this research also suggests that human creativity may work in similar ways. We, too, combine fragments of past experience and follow local “rules” (like grammar or cultural norms) when generating something new. This opens the door to deeper interdisciplinary research into how creativity functions across biological and artificial systems.
6. Recommendations for AI Developers and Ethicists
- Don’t treat AI outputs as inherently intelligent or creative; build in checks for hallucinations and errors.
- Push for explainable AI that can be reverse-engineered and audited, like the ELS machine.
- Educate users (especially in creative industries and education) about the mechanical origins of “creative” AI outputs.
- Avoid anthropomorphizing AI in public-facing narratives. Misleading users about AI capabilities can distort public trust.
- Incorporate structural awareness into AI design: improving how AI models track global image structure, not just local patches, may reduce grotesque outputs and improve safety.
Conclusion
What seems like magic in generative AI is often just math, structure, and quirks of design. This new research demystifies one of the most compelling aspects of AI—its apparent creativity—and shows that it arises not from deep thinking, but from the simple rules that guide its internal processes. For responsible AI developers, ethicists, and users, this means we can better understand, anticipate, and govern the outputs of generative models. And in doing so, we take a vital step toward making AI not just powerful, but trustworthy.
