Reduce natural-gradient update cost using randomized matmul techniques

Develop randomized numerical linear algebra methods that approximate the matmul-only natural-gradient update for the auxiliary matrix T (parameterised via its Cholesky factor L) in the R-SVGP framework. The goal is to reduce the update's computational cost below cubic in the number of inducing points M while preserving stability and convergence during training.

Background

Although the R-SVGP bound can be evaluated using only matrix multiplications at quadratic cost in M (when Hutchinson trace estimators are used), its optimisation remains cubic in M because of the full matrix–matrix products required by the natural-gradient (NG) updates for T.
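As a concrete illustration of the quadratic-cost bound evaluation, below is a minimal NumPy sketch of a Hutchinson trace estimator. It estimates tr(A) from matrix–vector products alone, using the identity E[z^T A z] = tr(A) for Rademacher probe vectors z. The function name and interface are illustrative, not taken from the paper's code.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=64, rng=None):
    """Estimate tr(A) for an n x n matrix accessed only through
    matvec(z) = A @ z, using Rademacher probes (E[z z^T] = I)."""
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(num_probes):
        # Each entry of z is +1 or -1 with equal probability.
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ matvec(z)
    return est / num_probes
```

With k probes the estimator is unbiased and its variance shrinks as O(1/k); crucially, only matvecs with A are needed, so the trace terms in the bound cost O(M^2) per probe rather than O(M^3).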

The authors explicitly conjecture that randomized matmul techniques from randomized numerical linear algebra could further reduce the NG update cost, suggesting a direction to make the entire procedure more efficient on modern accelerators.
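The paper does not specify which randomized matmul scheme is meant. One standard candidate from the RandNLA literature cited (\citep{drineas2016randnla}) is importance-sampled column/row sampling: approximate A @ B by sampling c column–row outer products with probabilities proportional to the product of their norms. The sketch below is a hypothetical illustration of that technique, not the authors' implementation.

```python
import numpy as np

def randomized_matmul(A, B, c, rng=None):
    """Unbiased estimate of A @ B from c sampled column/row pairs.

    Pair k is drawn with probability p_k proportional to
    ||A[:, k]|| * ||B[k, :]|| (the variance-minimising choice),
    and each sampled outer product is rescaled by 1 / (c * p_k).
    """
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=c, p=p)
    scale = 1.0 / (c * p[idx])          # importance-sampling weights
    # (m x c) weighted columns times (c x q) sampled rows: O(m*c*q) cost.
    return (A[:, idx] * scale) @ B[idx, :]
```

Replacing the exact M x M matrix products inside the NG update with such sketches would cut the per-step cost from O(M^3) to O(M^2 c) for c << M samples; whether the resulting stochastic NG update stays stable and convergent is exactly the open question raised here.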

References

Furthermore, we conjecture that randomized matmul techniques \citep{drineas2016randnla} could further reduce NG cost.

Inverse-Free Sparse Variational Gaussian Processes (2604.00697 - Cortinovis et al., 1 Apr 2026) in Other Practical Considerations – Bound Optimisation paragraph, Section 3