Activity–degree scaling as the cause of poor size generalization in nODEs
Prove that, for neural ordinary differential equations with Barzel–Barabási-form vector fields trained on small graphs, the growth of node-state magnitude with node degree in the underlying dynamical system drives predictions at high-degree nodes of larger, degree-heterogeneous graphs into regions of state space not covered by the training data, thereby degrading predictive performance.
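The claimed mechanism (activity growing with degree, so hubs on larger graphs leave the training range) can be illustrated numerically. Below is a minimal sketch, assuming NumPy, using one concrete member of the Barzel–Barabási family (regulatory-style dynamics dx_i/dt = -x_i + Σ_j A_ij x_j/(1+x_j)); the helpers `ba_graph` and `steady_state` are hypothetical names introduced here, not taken from the paper:

```python
import numpy as np

def ba_graph(n, m=2, seed=0):
    """Adjacency matrix of a preferential-attachment graph (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    repeated = []                      # attachment pool, nodes repeated by degree
    targets = list(range(m))
    for v in range(m, n):
        for t in set(targets):
            A[v, t] = A[t, v] = 1.0
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = list(rng.choice(repeated, size=m))
    return A

def steady_state(A, T=50.0, dt=0.01):
    """Euler-integrate dx_i/dt = -x_i + sum_j A_ij x_j/(1+x_j) toward its fixed point."""
    x = np.ones(A.shape[0])
    for _ in range(int(T / dt)):
        x = x + dt * (-x + A @ (x / (1.0 + x)))
    return x

# "Training" graph is small; "test" graph is larger and has higher-degree hubs.
A_small, A_large = ba_graph(30), ba_graph(300)
x_small, x_large = steady_state(A_small), steady_state(A_large)

print("max activity, small graph:", x_small.max())
print("max activity, large graph:", x_large.max())
print("degree-activity correlation on large graph:",
      np.corrcoef(A_large.sum(axis=1), x_large)[0, 1])
```

Because each neighbor's contribution x_j/(1+x_j) is bounded but positive, a node's steady-state activity grows roughly with its degree, so hubs on the larger graph settle at magnitudes never visited on the small training graph: precisely the out-of-distribution regime the statement describes.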
References
We conjecture that the increase in activity with node degree pushes the model into regions of state space that were not observed during training when making predictions at hubs, thereby impairing predictive performance.
— When do neural ordinary differential equations generalize on complex networks?
(2602.08980 - Laber et al., 9 Feb 2026) in Discussion (first paragraph)