Formulating a thermodynamic limit for trained neural diffusion networks

Formulate a rigorous thermodynamic limit for trained neural diffusion networks that specifies how architectural features—such as locality, receptive field size, sparsity, and equivariance—scale with system size, in order to make precise the effective field-theoretic description of critical behavior in large models.

Background

To connect observed crossover phenomena in realistic architectures to genuine critical behavior and universality classes, one needs a precise large‑system framework analogous to the thermodynamic limit in statistical physics. The authors hypothesize that trained diffusion models may exhibit soft sectors constrained by locality and equivariance that admit effective field-theoretic descriptions, but a formal limit is required to ground such claims.

The paper explicitly identifies defining this limit—including how architectural features scale with image size—as an open and technically challenging problem, one that is central to establishing when true phase transitions might emerge in trained networks and how to compare them across architectures.
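To make the notion of "architectural features scaling with system size" concrete, the following is a minimal illustrative sketch (not from the paper; the function names and the linear receptive-field formula are assumptions for a stack of stride-1 convolutions). It contrasts two hypothetical scaling prescriptions for network depth as image size grows, which lead to qualitatively different large-system limits:

```python
# Illustrative assumption: a score network built from `depth` stacked
# stride-1 convolutions with kernel size `kernel`, so the receptive
# field grows linearly with depth.
def receptive_field(depth: int, kernel: int) -> int:
    """Receptive field (in pixels) of stacked stride-1 convolutions."""
    return 1 + depth * (kernel - 1)

def covered_fraction(image_size: int, depth: int, kernel: int = 3) -> float:
    """Fraction of the image seen by a single output unit."""
    return min(receptive_field(depth, kernel), image_size) / image_size

# Two hypothetical prescriptions as image size L -> infinity:
# (a) fixed depth: the network remains strictly local (fraction -> 0);
# (b) depth proportional to L: the receptive field stays extensive.
for L in (32, 128, 512):
    fixed = covered_fraction(L, depth=8)          # (a) fixed architecture
    extensive = covered_fraction(L, depth=L // 2)  # (b) depth grows with L
    print(L, round(fixed, 3), round(extensive, 3))
```

Under prescription (a) the covered fraction vanishes as the image grows, while under (b) it remains of order one; a rigorous thermodynamic limit would have to fix such a prescription (and analogous ones for sparsity and equivariance constraints) before critical behavior could be compared across architectures.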

References

Making this precise requires a careful formulation of the "thermodynamic limit" for trained neural networks, including how architectural features scale with system size, an open and technically challenging problem.

How Out-of-Equilibrium Phase Transitions can Seed Pattern Formation in Trained Diffusion Models  (2603.20092 - Ambrogioni, 20 Mar 2026) in Section: Out-of-Equilibrium Critical Phenomena in Real Deep Architectures