Reduce natural-gradient update cost using randomized matmul techniques
Develop randomized numerical linear algebra methods that approximate the matmul-only natural-gradient update for the auxiliary matrix T (parameterised via its Cholesky factor L) in the R-SVGP framework. The goal is to reduce the update's computational cost below the O(M^3) scaling in the number of inducing points M while preserving stability and convergence during training.
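The paper only conjectures this direction, so no concrete algorithm is given. As an illustration of the cited technique, the sketch below implements the standard sampling-based randomized matrix multiplication of Drineas et al.: A @ B is approximated by sampling c column/row outer products with probabilities proportional to their norms, giving an unbiased estimator at O(mcp) cost instead of O(mnp). The function name, interface, and its use in place of exact M x M products in a natural-gradient step are assumptions for illustration, not the paper's method.

```python
import numpy as np

def randomized_matmul(A, B, c, rng=None):
    """Unbiased sampling-based approximation of A @ B.

    Samples c column/row index pairs with probabilities proportional
    to ||A[:, k]|| * ||B[k, :]|| (the variance-minimising choice in
    Drineas et al.'s BasicMatrixMultiplication) and rescales so the
    estimator is unbiased.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    # Near-optimal sampling probabilities for this estimator.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=c, p=p)
    # Rescale each sampled outer product by 1 / (c * p_k) for unbiasedness.
    scale = 1.0 / (c * p[idx])
    return (A[:, idx] * scale) @ B[idx, :]
```

In the R-SVGP setting one could, hypothetically, substitute such an approximation for the exact M x M matmuls in the natural-gradient update of L; the open question flagged by the task is whether the resulting stochastic error still permits stable convergence.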
References
Furthermore, we conjecture that randomized matmul techniques \citep{drineas2016randnla} could further reduce NG cost.
— Inverse-Free Sparse Variational Gaussian Processes
(2604.00697 - Cortinovis et al., 1 Apr 2026) in Other Practical Considerations – Bound Optimisation paragraph, Section 3