Identifiability of the generative hyperparameters

Determine the identifiability of the generative hyperparameter vector θ_t = (θ, β) that governs the latent-field-driven architecture and the neuron biases of the stochastic graph neural network, when θ_t is learned from supervised input–output data via negative log-likelihood minimization.
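
In the standard formalization (our notation; the paper may state an equivalent condition), identifiability up to an equivalence relation ∼ asks that equality of the induced conditional output laws force equivalence of the hyperparameters:

    p(y | x; θ_t) = p(y | x; θ_t′) for all inputs x and outputs y  ⟹  θ_t ∼ θ_t′,

where p(y | x; θ_t) is the conditional law obtained by marginalizing over the latent Gaussian field, the Poisson neuron placement, the sparse connectivity, and the random weights.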

Background

The training objective of the paper is to infer the generative hyperparameters that control the latent Gaussian random field, the placement of neurons via a Poisson process, the sparse connectivity pattern, and the random weights. Despite preliminary analysis, the paper explicitly states that the identifiability of this hyperparameter vector under the proposed likelihood-based approach remains unresolved.
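
To fix ideas, the sketch below shows one way such a generative pipeline and its likelihood could look in code. It is a minimal illustration under stated assumptions, not the paper's construction: the hyperparameter layout (field lengthscale and variance, Poisson intensity, weight and bias scales), the squared-exponential field covariance, the logistic edge-probability link, the tanh message-passing readout, and the Gaussian observation noise are all hypothetical choices, and every function name is ours.

    import numpy as np

    def sample_network(theta_t, domain=1.0, rng=None):
        # theta_t = (ell, s2, lam, sw, sb): field lengthscale, field variance,
        # Poisson intensity, weight scale, bias scale -- an assumed layout.
        ell, s2, lam, sw, sb = theta_t
        rng = np.random.default_rng() if rng is None else rng
        # Neuron placement: Poisson process on [0, domain].
        n = max(int(rng.poisson(lam * domain)), 1)
        pos = rng.uniform(0.0, domain, size=n)
        # Latent Gaussian random field at the neuron positions
        # (squared-exponential covariance, an assumption).
        diff = pos[:, None] - pos[None, :]
        K = s2 * np.exp(-0.5 * (diff / ell) ** 2) + 1e-8 * np.eye(n)
        field = rng.multivariate_normal(np.zeros(n), K)
        # Sparse connectivity: edge probability decays with field dissimilarity.
        p_edge = 1.0 / (1.0 + np.exp(np.abs(field[:, None] - field[None, :])))
        A = (rng.uniform(size=(n, n)) < p_edge).astype(float)
        np.fill_diagonal(A, 0.0)
        W = rng.normal(0.0, sw, size=(n, n)) * A  # random sparse weights
        b = rng.normal(0.0, sb, size=n)           # random neuron biases
        return W, b

    def forward(W, b, x, steps=2):
        # A hypothetical readout: broadcast the scalar input, run a few
        # message-passing steps, and average the final activations.
        h = np.full(b.shape, x, dtype=float)
        for _ in range(steps):
            h = np.tanh(W @ h + b)
        return float(h.mean())

    def mc_nll(theta_t, X, Y, n_samples=200, noise_std=0.1, rng=None):
        # Monte Carlo estimate of -sum_i log E[N(y_i | f(x_i), noise_std^2)],
        # averaging the per-sample Gaussian likelihood over random networks.
        rng = np.random.default_rng(0) if rng is None else rng
        nll = 0.0
        for x, y in zip(X, Y):
            mus = np.array([forward(*sample_network(theta_t, rng=rng), x)
                            for _ in range(n_samples)])
            liks = (np.exp(-0.5 * ((y - mus) / noise_std) ** 2)
                    / (noise_std * np.sqrt(2 * np.pi)))
            nll -= np.log(liks.mean() + 1e-300)
        return nll

Even this toy version makes the difficulty visible: the likelihood involves an intractable marginalization over random architectures, so both estimation and any identifiability argument must contend with a noisy, simulation-based objective.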

This open problem focuses on whether, and under what conditions, θ_t can be uniquely recovered (up to equivalence) from observed input–output data under the proposed stochastic architecture, a key requirement for reliable inference and model interpretability.
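
One empirical probe of local identifiability, a generic diagnostic rather than anything taken from the paper, is to profile the estimated negative log-likelihood along individual hyperparameter coordinates of the sketch above: directions along which the profile stays flat are candidates for non-identifiable combinations. A hypothetical usage, continuing the sketch:

    rng = np.random.default_rng(1)
    theta_true = (0.3, 1.0, 30.0, 0.5, 0.2)  # hypothetical ground truth
    X = rng.uniform(-1.0, 1.0, size=20)
    Y = np.array([forward(*sample_network(theta_true, rng=rng), x) for x in X])
    Y = Y + rng.normal(0.0, 0.1, size=Y.shape)  # observation noise

    # Profile the NLL along the weight-scale coordinate, holding the rest fixed.
    for sw in (0.25, 0.5, 1.0):
        theta = (0.3, 1.0, 30.0, sw, 0.2)
        print(f"sw={sw:.2f}  NLL~{mc_nll(theta, X, Y):.2f}")

A flat profile here would not prove non-identifiability (Monte Carlo noise and finite data both blur the picture), but a clearly curved one is at least consistent with local identifiability of that coordinate.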

References

Several deeper questions, such as the full characterization of the induced function class, identifiability of the generative hyperparameters, and convergence properties of the supervised learning estimator, remain open. These are mathematically subtle and require further development.