Cayley-Free Two-Step Algorithm
- The paper introduces the Cayley-free two-step algorithm that bypasses costly Cayley transforms by using explicit algebraic updates for singular vector corrections.
- The method attains local cubic root convergence under standard analyticity and nonsingularity conditions, simplifying the iterative process for ISVP.
- Empirical results demonstrate 10–15% lower CPU times compared to traditional methods, particularly benefiting large-scale matrix problems.
The Cayley-free two-step algorithm constitutes a class of iterative methods for the inverse singular value problem (ISVP) that achieve high-order convergence without recourse to Cayley transformations. This approach eliminates the necessity to solve $2(m+n)$ large linear systems typically involved in updating approximate singular vectors at each iteration within previous two-step frameworks. The method is structured around explicitly computable first-order corrections to left and right singular vectors, leveraging only analytic operations and dense matrix updates. Under standard analyticity and nonsingularity conditions for the Jacobian at a target solution, the Cayley-free two-step algorithm attains local cubic root convergence with reduced computational overhead relative to competing techniques (Fan et al., 31 Jan 2026).
1. Formulation of the Inverse Singular-Value Problem (ISVP)
Given real matrices $A_0, A_1, \dots, A_n \in \mathbb{R}^{m \times n}$ ($m \ge n$) and a target spectrum $\sigma^* = (\sigma_1^*, \dots, \sigma_n^*)$ with distinct strictly positive entries $\sigma_1^* > \sigma_2^* > \cdots > \sigma_n^* > 0$, define the affine matrix mapping
$$A(\mathbf{c}) = A_0 + \sum_{k=1}^{n} c_k A_k, \qquad \mathbf{c} = (c_1, \dots, c_n)^{\top} \in \mathbb{R}^n.$$
Let $\sigma_1(\mathbf{c}) \ge \cdots \ge \sigma_n(\mathbf{c})$ denote the singular values of $A(\mathbf{c})$. The ISVP asks for $\mathbf{c}^* \in \mathbb{R}^n$ such that
$$\sigma_i(\mathbf{c}^*) = \sigma_i^*, \qquad i = 1, \dots, n,$$
which is equivalent to solving the nonlinear system $f(\mathbf{c}) = 0$, where $f : \mathbb{R}^n \to \mathbb{R}^n$ is given by
$$f_i(\mathbf{c}) = \sigma_i(\mathbf{c}) - \sigma_i^*, \qquad i = 1, \dots, n.$$
The mapping $f$ is analytic wherever the singular values of $A(\mathbf{c})$ are distinct, and its Jacobian is
$$[f'(\mathbf{c})]_{ij} = \mathbf{u}_i(\mathbf{c})^{\top} A_j\, \mathbf{v}_i(\mathbf{c}), \qquad 1 \le i, j \le n,$$
where $\mathbf{u}_i(\mathbf{c})$ and $\mathbf{v}_i(\mathbf{c})$ are the $i$-th left and right singular vectors of $A(\mathbf{c})$.
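Under the standard formulation above, the residual and Jacobian are straightforward to evaluate numerically. The following self-contained sketch (NumPy; all names illustrative, not from the paper) checks the Jacobian formula $[f'(\mathbf{c})]_{ij} = \mathbf{u}_i^{\top} A_j \mathbf{v}_i$ against central finite differences on a small random instance:

```python
import numpy as np

# Finite-difference check of the ISVP Jacobian formula
# [f'(c)]_{ij} = u_i^T A_j v_i (standard formulation; names illustrative).
rng = np.random.default_rng(1)
m, n = 6, 4
As = [rng.standard_normal((m, n)) for _ in range(n + 1)]  # A_0, ..., A_n
c = rng.standard_normal(n)

def sigma(c):
    """Singular values of A(c) = A_0 + sum_k c_k A_k, in decreasing order."""
    A = As[0] + sum(ci * Ai for ci, Ai in zip(c, As[1:]))
    return np.linalg.svd(A, compute_uv=False)

# Analytic Jacobian from the singular vectors of A(c).
A = As[0] + sum(ci * Ai for ci, Ai in zip(c, As[1:]))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
J = np.array([[U[:, i] @ As[j + 1] @ Vt[i, :] for j in range(n)]
              for i in range(n)])

# Central finite differences of the singular values w.r.t. c.
h = 1e-6
J_fd = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = h
    J_fd[:, j] = (sigma(c + e) - sigma(c - e)) / (2 * h)

assert np.allclose(J, J_fd, atol=1e-4)
```

The agreement holds generically because random instances have distinct singular values, which is exactly the regime in which $f$ is analytic.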
2. Description of the Cayley-Free Two-Step Iterative Scheme
The Cayley-free two-step algorithm performs a Chebyshev-corrected two-step iteration, bypassing Cayley transform–based updates:
- Predictor: an intermediate iterate is formed from the current iterate and residual;
- Correction: the intermediate iterate is refined using a second residual evaluation;
- Chebyshev matrix update: the approximate inverse Jacobian is refreshed using matrix products alone,
where $B_k$ approximates the inverse Jacobian $f'(\mathbf{c}^k)^{-1}$, and the residual and Jacobian evaluations use (potentially inexact) updated singular-vector approximations. Initialization requires a starting value $\mathbf{c}^0$, an initial approximation $B_0 \approx f'(\mathbf{c}^0)^{-1}$, and a thin SVD $A(\mathbf{c}^0) = U_0 \Sigma_0 V_0^{\top}$.
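The paper's exact update formulas are not reproduced in this summary; as an indicative sketch, the classical two-step Ulm–Chebyshev template on which such schemes are commonly built reads (the paper's specific variant may differ):
$$\begin{aligned}
\mathbf{y}^k &= \mathbf{c}^k - B_k f(\mathbf{c}^k) && \text{(predictor)},\\
\mathbf{c}^{k+1} &= \mathbf{y}^k - B_k f(\mathbf{y}^k) && \text{(correction)},\\
B_{k+1} &= 2B_k - B_k\, f'(\mathbf{c}^{k+1})\, B_k && \text{(Chebyshev matrix update)}.
\end{aligned}$$
The last line is a hyperpower step that drives $B_k$ toward the inverse Jacobian using only matrix–matrix products, which is what makes a solve-free outer iteration possible.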
At each outer iteration, rather than solving $\mathcal{O}(m+n)$ linear systems, closed-form first-order corrections to the left and right singular-vector factors are derived. Specifically, matrix "skew-block" corrections for each factor are assembled from explicit formulas involving blockwise combinations of the current singular-vector approximations, the singular values, and the residual matrix, with analogous formulas for the different index sets. All steps to update the iterate, the factors, and the inverse-Jacobian approximation involve only dense algebraic operations: no linear system solves or matrix exponentials.
The procedure for a single iteration comprises:
- Forming the predictor iterate from the current iterate and residual;
- Computing the residual matrix and the associated skew-block corrections for the left and right factors;
- Updating the singular-vector factors via postmultiplication by the explicit correction factors;
- Calculating residuals and taking the Chebyshev step;
- Repeating analogous corrections (second step) for improved vector approximations.
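As a concrete (simplified) illustration, the outer iteration can be sketched in Python. This stand-in recomputes a full SVD at each step instead of applying the paper's closed-form Cayley-free corrections, and follows the generic two-step Ulm–Chebyshev template rather than the exact variant in the paper; all names are illustrative:

```python
import numpy as np

def make_isvp(m, n, rng):
    """Random affine family A(c) = A_0 + sum_k c_k A_k with a known solution
    c_star realizing the target spectrum (illustrative test-problem setup)."""
    As = [rng.standard_normal((m, n)) for _ in range(n + 1)]
    c_star = rng.standard_normal(n)
    A = As[0] + sum(ci * Ai for ci, Ai in zip(c_star, As[1:]))
    sigma_star = np.linalg.svd(A, compute_uv=False)
    return As, sigma_star, c_star

def residual_and_jacobian(As, c, sigma_star):
    """f(c) = sigma(A(c)) - sigma*, with Jacobian J_ij = u_i^T A_j v_i."""
    A = As[0] + sum(ci * Ai for ci, Ai in zip(c, As[1:]))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    n = len(sigma_star)
    J = np.array([[U[:, i] @ As[j + 1] @ Vt[i, :] for j in range(n)]
                  for i in range(n)])
    return s - sigma_star, J

def two_step_ulm(As, sigma_star, c0, iters=10, tol=1e-12):
    """Generic two-step Ulm iteration with a Chebyshev inverse update.
    Full SVDs here stand in for the paper's closed-form corrections."""
    c = c0.copy()
    f, J = residual_and_jacobian(As, c, sigma_star)
    B = np.linalg.inv(J)                       # B_0 ~ f'(c^0)^{-1}
    for _ in range(iters):
        y = c - B @ f                          # predictor
        fy, _ = residual_and_jacobian(As, y, sigma_star)
        c = y - B @ fy                         # correction
        f, J = residual_and_jacobian(As, c, sigma_star)
        B = 2 * B - B @ J @ B                  # Chebyshev matrix update
        if np.linalg.norm(f) < tol:
            break
    return c, np.linalg.norm(f)
```

Starting close to a known solution, the iteration drives the residual to roundoff within a handful of outer steps, consistent with the 2–4 iterations reported in the experiments below.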
3. Theoretical Conditions and Analytic Properties
Analysis assumes:
- The target spectrum features strictly decreasing positive entries ($\sigma_1^* > \sigma_2^* > \cdots > \sigma_n^* > 0$);
- The function $f$ is analytic near the solution, and the Jacobian $f'(\mathbf{c}^*)$ is nonsingular.
These conditions guarantee that in a neighborhood of the solution $\mathbf{c}^*$, all steps are well-defined and the algorithm is locally convergent.
4. Convergence Analysis and Root-Cubic Rate
Perturbative and matrix-equation estimates establish the convergence rate: for suitable constants and all sufficiently large $k$, the iteration error obeys a recursion that contracts at third order. Thus the scheme attains root (R-order) convergence of order 3, i.e., local cubic root convergence. An explicit error bound, with constants depending on the problem data, quantifies the basin of attraction; the precise inequalities are given in (Fan et al., 31 Jan 2026).
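Root convergence of order 3 is meant in the usual R-order sense: writing $e_k = \|\mathbf{c}^k - \mathbf{c}^*\|$ for the iteration error, the bounds imply
$$\limsup_{k \to \infty} e_k^{1/3^k} < 1, \qquad \text{equivalently} \qquad e_k \le C\, q^{3^k} \ \text{ for some } C > 0,\ q \in (0, 1),$$
so the number of correct digits roughly triples per outer iteration once the iterate enters the basin of attraction.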
5. Computational Efficiency and Comparison
Traditional Cayley-based two-step schemes require solving $2(m+n)$ linear systems of size $m$ or $n$ at every outer iteration, a per-iteration cost that dominates for large matrices. The Cayley-free two-step algorithm eliminates these costs: all singular-vector updates occur via inner products and dense algebraic manipulations, with no linear solves or matrix exponentials required. Only the Chebyshev update of the inverse-Jacobian approximation $B_k$ requires matrix–matrix products of $n \times n$ matrices, an $\mathcal{O}(n^3)$ operation.
Storage requirements are correspondingly reduced: there is no need to retain or factorize skew-symmetric matrices for Cayley operations; only the iterate $\mathbf{c}^k$, the thin singular-vector factors $U_k \in \mathbb{R}^{m \times n}$ and $V_k \in \mathbb{R}^{n \times n}$, and the inverse-Jacobian approximation $B_k \in \mathbb{R}^{n \times n}$ are required.
6. Numerical Experiments and Empirical Results
Numerical studies were conducted on test matrices generated randomly in three representative cases:
- (a) $m = 100$, $n = 60$;
- (b) $m = 300$, $n = 120$;
- (c) $m = 600$, $n = 300$.
For each experiment, a random exact solution $\mathbf{c}^*$ yields the target spectrum $\sigma^*$; the initial guess $\mathbf{c}^0$ has entries drawn uniformly from an interval around the corresponding entries of $\mathbf{c}^*$. The iterative procedure halts once the outer residual falls below a prescribed tolerance or a maximum number of iterations is reached.
The comparative results between the proposed Cayley-free two-step, the Ulm-Cayley method, and a two-step inexact Newton (TIN) scheme are as follows (10 random trials, averaged):
| Case ($m$, $n$) | Ulm-Cayley (CPU sec, # iters) | Cayley-free (CPU sec, # iters) | Two-step TIN (CPU sec, # iters) |
|---|---|---|---|
| (a) 100, 60 | 0.47, 3.20 | 0.36, 3.20 | 0.48, 3.20 |
| (b) 300,120 | 8.51, 3.10 | 7.52, 3.10 | 8.72, 3.10 |
| (c) 600,300 | 266.6, 2.50 | 241.0, 2.50 | 271.7, 2.50 |
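The headline 10–15% figure can be checked directly against the table; the short computation below (timings copied from the table) prints the per-case relative CPU reduction of the Cayley-free scheme over each baseline:

```python
# Relative CPU-time reduction of the Cayley-free scheme, computed from the
# averaged timings in the table above (seconds).
cpu = {                      # case: (Ulm-Cayley, Cayley-free, two-step TIN)
    "(a) 100, 60": (0.47, 0.36, 0.48),
    "(b) 300,120": (8.51, 7.52, 8.72),
    "(c) 600,300": (266.6, 241.0, 271.7),
}
for case, (ulm, free, tin) in cpu.items():
    vs_ulm = 100 * (1 - free / ulm)
    vs_tin = 100 * (1 - free / tin)
    print(f"{case}: {vs_ulm:.1f}% vs Ulm-Cayley, {vs_tin:.1f}% vs TIN")
```

The relative saving ranges from roughly 10% on the largest case to over 20% on the smallest, while the absolute saving grows from fractions of a second to tens of seconds.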
The Cayley-free scheme exhibits the same iteration count (roughly 2–4 outer steps) as the previous methods but consistently lower CPU time, on the order of 10–15% for the larger cases, with the absolute savings growing with $m$ and $n$ owing to the elimination of Cayley-related solves.
The Cayley-free two-step algorithm achieves ISVP solutions with cubic root local convergence while requiring only algebraic updates for singular vector approximations. By dispensing with the use of Cayley transforms, the method substantially reduces both computation and storage, particularly for larger-scale problems, without compromising on theoretical convergence guarantees or empirical performance (Fan et al., 31 Jan 2026).