Extensions beyond the single-neuron and two-latent setting
Extend the theoretical analysis developed for single-neuron autoencoders and the two-latent spiked cumulant model to multi-neuron architectures, to richer latent-dependence structures, and to other self-supervised objectives, such as contrastive or masked-prediction losses.
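As a concrete point of departure, the single-neuron setting to be extended can be sketched numerically. The snippet below is a hypothetical illustration, not the paper's model: it substitutes a simple Gaussian spiked-covariance data model (a Rademacher latent `z` times a planted direction `u`, plus noise) for the spiked cumulant model, and trains a tied-weight single-neuron ReLU autoencoder by projected gradient descent, tracking how the weight aligns with the planted direction. All variable names and hyperparameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the single-neuron setting: data with one planted
# direction u (a spiked-covariance proxy, NOT the paper's spiked cumulant
# model) and a tied-weight ReLU autoencoder x_hat = relu(x . w) * w.
d, n, snr = 200, 4000, 3.0

u = rng.standard_normal(d)
u /= np.linalg.norm(u)
z = rng.choice([-1.0, 1.0], size=n)              # Rademacher latent
X = np.sqrt(snr) * z[:, None] * u[None, :] + rng.standard_normal((n, d))

w = rng.standard_normal(d)
w /= np.linalg.norm(w)
lr = 0.1

for _ in range(200):
    h = np.maximum(X @ w, 0.0)                   # hidden activation, shape (n,)
    resid = h[:, None] * w[None, :] - X          # reconstruction residual
    # Gradient of the mean squared reconstruction loss w.r.t. the tied weight w
    grad = (resid * h[:, None]).mean(axis=0)
    grad += ((X @ w > 0.0) * (resid @ w)) @ X / n
    w -= lr * grad
    w /= np.linalg.norm(w)                       # keep w on the unit sphere

overlap = abs(w @ u)                             # alignment with the spike
print(f"overlap with planted direction: {overlap:.3f}")
```

A multi-neuron extension of this sketch would replace the vector `w` by a weight matrix and confront the permutation and rotation symmetries among neurons, which is one reason the generalization remains open.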
References
"Extending the analysis to multi-neuron architectures, richer dependence structures, and other self-supervised objectives remains open." — A solvable high-dimensional model where nonlinear autoencoders learn structure invisible to PCA while test loss misaligns with generalization (2602.10680 - Mendes et al., 11 Feb 2026), in Conclusion.