Convergence properties of the supervised learning estimator

Establish the convergence properties of the estimator of the generative hyperparameters θ_t obtained by minimizing a Monte Carlo estimate of the negative log-likelihood in the proposed stochastic graph neural network.
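For concreteness, one possible formalization of this estimator is sketched below; only θ_t is taken from the source, while the sample count M, the architecture realizations A^(m), their distribution q, and the data (x, y) are illustrative assumptions.

    % Illustrative formalization; M, A^{(m)}, q, and (x, y) are assumptions.
    \[
      \widehat{L}_M(\theta_t) \;=\; -\frac{1}{M} \sum_{m=1}^{M}
        \log p\!\left(y \mid x, A^{(m)}, \theta_t\right),
      \qquad A^{(m)} \sim q(A \mid \theta_t),
      \qquad \widehat{\theta}_t \in \operatorname*{arg\,min}_{\theta_t \in \Theta}
        \widehat{L}_M(\theta_t).
    \]

Under this reading, the open problem concerns the behavior of the estimator both as the number of Monte Carlo samples M grows and as the optimizer iterates, e.g., whether it is consistent for a minimizer of the exact negative log-likelihood L(θ_t) = E[−log p(y | x, A, θ_t)].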

Background

The paper implements projected gradient descent with Adam updates to minimize a Monte Carlo estimate of the negative log-likelihood over random architecture realizations. While numerical evidence and a preliminary analysis are provided, the theoretical convergence behavior of the resulting estimator remains unresolved.
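The following minimal sketch illustrates that training loop: projected Adam applied to a Monte Carlo gradient estimate, with a dropout-style binary mask standing in for the random architecture realizations. The toy Gaussian model, the l2-ball constraint set, and all hyperparameters are illustrative assumptions, not the paper's setup.

    # Minimal sketch (assumptions: toy Gaussian likelihood, dropout-style masks
    # as the random architecture realizations, l2-ball constraint set).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 4))                    # toy inputs
    theta_true = np.array([0.5, -1.0, 2.0, 0.3])
    y = X @ theta_true + 0.1 * rng.normal(size=256)  # toy targets

    def mc_nll_grad(theta, n_samples=8):
        """Monte Carlo gradient of the NLL, averaged over random realizations."""
        g = np.zeros_like(theta)
        for _ in range(n_samples):
            mask = (rng.random(theta.shape) < 0.8).astype(float)  # random realization
            resid = X @ (theta * mask) - y
            g += mask * (X.T @ resid) / len(y)       # gradient of the Gaussian NLL
        return g / n_samples

    def project(theta, radius=5.0):
        """Euclidean projection onto an l2 ball (the assumed constraint set)."""
        norm = np.linalg.norm(theta)
        return theta if norm <= radius else theta * (radius / norm)

    theta = np.zeros(4)                              # projected Adam iterates
    m, v = np.zeros(4), np.zeros(4)
    beta1, beta2, lr, eps = 0.9, 0.999, 1e-2, 1e-8
    for t in range(1, 2001):
        g = mc_nll_grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        theta = project(theta - lr * m_hat / (np.sqrt(v_hat) + eps))

    print(theta)  # noisy iterates; whether and where they converge is the open question

Note that each step uses a fresh random gradient estimate (new architecture samples) and a non-smooth projection on top of Adam's adaptive step sizes, which is one reason the convergence question is delicate.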

This open problem asks for rigorous convergence guarantees for the estimator in the proposed training setup: conditions under which the iterates converge, and a characterization of the limit (for example, whether it is a stationary point, a local minimizer, or the true hyperparameters). Such guarantees are central to the statistical soundness of the learning procedure.
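For orientation, typical sufficient conditions from the stochastic approximation literature, under which projected stochastic gradient iterates converge to stationary points, read as follows; these are standard textbook assumptions, not results claimed by the paper.

    % Standard stochastic-approximation conditions (illustrative, not from the
    % paper): unbiased gradients, bounded variance, and Robbins--Monro step
    % sizes \alpha_t on a compact convex set \Theta.
    \[
      \mathbb{E}[g_t \mid \theta_t] = \nabla L(\theta_t), \qquad
      \mathbb{E}\big[\|g_t - \nabla L(\theta_t)\|^2 \mid \theta_t\big] \le \sigma^2, \qquad
      \sum_{t \ge 1} \alpha_t = \infty, \quad \sum_{t \ge 1} \alpha_t^2 < \infty.
    \]

Verifying, or suitably replacing, such conditions for the Monte Carlo negative log-likelihood with architecture resampling, and in particular for Adam's adaptive, non-monotone step sizes, is part of what the open problem demands.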

References

Several deeper questions, such as the full characterization of the induced function class, identifiability of the generative hyperparameters, and convergence properties of the supervised learning estimator, remain open. These are mathematically subtle and require further development.