Theory for the neural-network self-test estimator
Establish rigorous convergence guarantees and finite-sample error bounds for the neural-network estimator that minimizes the trajectory-free self-test loss, used to learn the interaction potential Φ and the external potential V from unlabeled snapshots. The analysis should account for the non-convex optimization landscape and for the implicit regularization induced by stochastic gradient methods.
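The optimization problem in question can be made concrete with a toy sketch. Everything below is an assumption for illustration: the 1D synthetic snapshots, the Gaussian random-feature surrogates for Φ′ and V′, and the stand-in objective (fitting the implied mean-field drift to that of a quadratic confinement, −x) are not the paper's actual self-test loss, which is trajectory-free and built from snapshot statistics. The sketch only shows the shape of the estimator (a parameterized drift trained by SGD on unlabeled snapshots) whose non-convex training dynamics the proposed theory would need to analyze.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration only. The parameterization, the synthetic snapshots, and
# the stand-in residual below are assumptions; the paper's trajectory-free
# self-test loss is not reproduced here.

N, d_feat = 8, 16
snapshots = rng.normal(size=(100, N))   # synthetic 1D particle snapshots (assumption)
centers = rng.normal(size=d_feat)       # Gaussian random-feature centers

def psi(x):
    """Gaussian features: psi(x)[..., k] = exp(-(x - c_k)^2 / 2)."""
    return np.exp(-0.5 * (np.asarray(x)[..., None] - centers) ** 2)

# Linear-in-features surrogates for the derivatives Phi' and V':
#   Phi'(r) ~ psi(r) @ w_phi,   V'(x) ~ psi(x) @ w_v
w_phi = np.zeros(d_feat)
w_v = np.zeros(d_feat)

def drift(x, w_phi, w_v):
    """Mean-field drift implied by the learned potentials."""
    diffs = x[:, None] - x[None, :]     # pairwise differences x_i - x_j
    return (psi(diffs) @ w_phi).mean(axis=1) + psi(x) @ w_v

def loss(x, w_phi, w_v):
    # Stand-in objective: match the drift of a quadratic confinement, -x.
    # (Assumption for illustration; NOT the paper's self-test loss.)
    r = drift(x, w_phi, w_v) + x
    return 0.5 * np.mean(r ** 2)

loss_before = np.mean([loss(x, w_phi, w_v) for x in snapshots])

# Plain SGD over random snapshots, with analytic gradients of the
# (linear-in-weights) residual r = drift(x) + x.
lr = 1e-2
for step in range(300):
    x = snapshots[rng.integers(len(snapshots))]
    r = drift(x, w_phi, w_v) + x
    diffs = x[:, None] - x[None, :]
    g_phi = psi(diffs).mean(axis=1).T @ r / len(x)
    g_v = psi(x).T @ r / len(x)
    w_phi -= lr * g_phi
    w_v -= lr * g_v

loss_after = np.mean([loss(x, w_phi, w_v) for x in snapshots])
print(f"loss before/after SGD: {loss_before:.4f} -> {loss_after:.4f}")
```

With a genuinely nonlinear network in place of the linear random features, the same loop has a non-convex landscape, and the step-size and noise of SGD act as the implicit regularizer that the open problem asks to quantify.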
References
The nonparametric neural network estimator is more challenging to analyze due to the non-convexity of the loss landscape and the implicit regularization effects of the optimization, which we leave for future work.
— Learning interacting particle systems from unlabeled data
(2604.02581 - Wei et al., 2 Apr 2026) in Section 4 (Error bounds for the parametric estimator), opening paragraph