Theory for the neural-network self-test estimator

Establish rigorous convergence guarantees and finite-sample error bounds for the neural-network estimator that minimizes the trajectory-free self-test loss for learning the interaction potential Φ and external potential V from unlabeled snapshots, accounting for the non-convex optimization landscape and the implicit regularization induced by stochastic gradient methods.
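
To fix ideas, the sketch below shows the kind of estimator the problem statement refers to: small networks parameterizing Φ and V, trained with a first-order method on a quadratic, snapshot-based residual that uses no particle labels or trajectory correspondences. It is a minimal illustration under stated assumptions, not the paper's implementation: it works in one dimension, uses the noiseless (transport) weak form of the mean-field continuity equation, substitutes a fixed dictionary of test functions for the paper's self-test construction, and all names (Phi, V, drift, self_test_style_loss) and the random placeholder data are hypothetical.

    import torch
    import torch.nn as nn

    # Small MLPs standing in for the interaction potential Phi and external potential V (1D).
    class MLP(nn.Module):
        def __init__(self, width=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(1, width), nn.Tanh(),
                nn.Linear(width, width), nn.Tanh(),
                nn.Linear(width, 1),
            )
        def forward(self, x):
            return self.net(x.unsqueeze(-1)).squeeze(-1)

    Phi, V = MLP(), MLP()

    def grad(f, x):
        # Derivative of a scalar-per-sample network w.r.t. its input, kept differentiable
        # in the network parameters so the loss can be backpropagated.
        x = x.detach().requires_grad_(True)
        return torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]

    # Fixed test functions and their derivatives (an assumption; the paper's "self-test"
    # functions are built from the estimator itself, which is not reproduced here).
    test_fns = [
        (lambda x: torch.sin(x), lambda x: torch.cos(x)),
        (lambda x: torch.cos(x), lambda x: -torch.sin(x)),
        (lambda x: x,            lambda x: torch.ones_like(x)),
        (lambda x: x ** 2,       lambda x: 2 * x),
    ]

    def drift(x, sample):
        # Estimated drift -V'(x_i) - (1/N) sum_j Phi'(x_i - x_j), built from the
        # empirical measure of `sample`; no particle labels are needed.
        dV = grad(V, x)
        diffs = x.unsqueeze(1) - sample.unsqueeze(0)
        dPhi = grad(Phi, diffs.reshape(-1)).reshape(diffs.shape)
        return -dV - dPhi.mean(dim=1)

    def self_test_style_loss(snapshots, dt):
        # Quadratic weak-form residual over consecutive snapshot pairs.
        # snapshots: tensor of shape (K, N), K unlabeled configurations of N particles.
        loss = 0.0
        for k in range(snapshots.shape[0] - 1):
            xk, xk1 = snapshots[k], snapshots[k + 1]
            b = drift(xk, xk)
            for psi, dpsi in test_fns:
                # Transport weak form: d/dt <psi, mu_t> = <psi' * b, mu_t> (diffusion omitted).
                lhs = (psi(xk1).mean() - psi(xk).mean()) / dt
                rhs = (dpsi(xk) * b).mean()
                loss = loss + (lhs - rhs) ** 2
        return loss

    # Toy usage with random placeholder data; real snapshots would come from the particle system.
    opt = torch.optim.Adam(list(Phi.parameters()) + list(V.parameters()), lr=1e-3)
    snaps = torch.randn(5, 64)  # K = 5 snapshots of N = 64 particles
    for step in range(200):
        opt.zero_grad()
        loss = self_test_style_loss(snaps, dt=0.01)
        loss.backward()
        opt.step()

The point of the sketch is structural: the loss is a smooth, quadratic functional of the networks' gradients, so the non-convexity at issue comes entirely from the network parameterization and the stochastic optimizer, which is exactly what the proposed theory has to handle.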

Background

The paper provides nonasymptotic error bounds for parametric (basis-function) estimators minimizing the quadratic self-test loss, showing O((Δt)^α + M^{-1/2}) rates. However, the practical and flexible neural-network version of the estimator introduces non-convex optimization and implicit regularization effects that are not covered by the current analysis.

A theoretical treatment would require establishing identifiability, coercivity, and statistical concentration in the neural-network function class, together with optimization stability and generalization guarantees that reflect the implicit bias of first-order methods.
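
One standard way to organize such a treatment, stated here only as a template and not as a result of the paper, is an oracle-type decomposition of the excess self-test risk E(θ̂) of the optimizer's output θ̂ over the network class; the symbols Θ, F, R_M, and ε_opt below are introduced for this sketch:

    \[
      \mathcal{E}(\hat\theta)
      \;\lesssim\;
      \underbrace{\inf_{\theta \in \Theta} \mathcal{E}(\theta)}_{\text{network approximation}}
      \;+\;
      \underbrace{C\,(\Delta t)^{\alpha}}_{\text{time discretization}}
      \;+\;
      \underbrace{\mathfrak{R}_{M}(\mathcal{F})}_{\text{concentration / generalization}}
      \;+\;
      \underbrace{\varepsilon_{\mathrm{opt}}}_{\text{optimization error of SGD}} .
    \]

Coercivity and identifiability are then what convert a small excess risk into an error bound on (Φ, V) themselves, while the parametric analysis already supplies the (Δt)^α and M^{-1/2}-type ingredients that the network class and the implicit bias of the optimizer would have to reproduce.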

References

The nonparametric neural network estimator is more challenging to analyze due to the non-convexity of the loss landscape and the implicit regularization effects of the optimization, which we leave for future work.

Learning interacting particle systems from unlabeled data (2604.02581 - Wei et al., 2 Apr 2026), Section 4 (Error bounds for the parametric estimator), opening paragraph