Quaternion Nuclear Norm Regularization
- Quaternion nuclear norm regularization is a low-rank modeling technique for hypercomplex (quaternion) matrices, preserving inter-channel correlations.
- It spans the convex quaternion nuclear norm (QNN) and nonconvex surrogates such as QNOF and QNMF, improving signal recovery in multi-channel data like color images.
- Advanced optimization algorithms, including ADMM and proximal methods based on QSVD, ensure efficient convergence in tasks like denoising and inpainting.
Quaternion nuclear norm regularization extends low-rank regularization to hypercomplex-valued (specifically, quaternionic) matrices. By exploiting the quaternion structure, these methods provide principled frameworks for modeling and recovering multi-channel signals—especially color images and videos—where exploiting inter-channel coupling is critical. Modern quaternion nuclear norm regularization encompasses convex relaxations (e.g., the quaternion nuclear norm), nonconvex enhancements (e.g., truncated, weighted, and ratio-based norms), and robust optimization algorithms enabling effective image inpainting, denoising, robust PCA, and classification.
1. Quaternion Nuclear and Frobenius Norms
A quaternion matrix is typically encoded as $\mathbf{Q} = Q_0 + Q_1\,i + Q_2\,j + Q_3\,k \in \mathbb{H}^{M \times N}$, where each $Q_t \in \mathbb{R}^{M \times N}$. For such matrices, the quaternion singular value decomposition (QSVD) gives $\mathbf{Q} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{H}$ with quaternion-unitary factors $\mathbf{U}, \mathbf{V}$ and real, nonnegative singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq 0$ on the diagonal of $\boldsymbol{\Sigma}$. The quaternion nuclear norm (QNN) and Frobenius norm are defined as
$$\|\mathbf{Q}\|_* = \sum_i \sigma_i(\mathbf{Q}), \qquad \|\mathbf{Q}\|_F = \Big(\sum_i \sigma_i^2(\mathbf{Q})\Big)^{1/2}.$$
These norms retain key properties, namely unitary invariance under left/right quaternion rotations and homogeneity under real scaling, making them natural extensions of their real-valued counterparts (Yang et al., 2021, Huang et al., 2024). The QNN is the tightest convex surrogate for the rank (its convex envelope on the spectral-norm unit ball), but it uniformly shrinks all singular values, often over-penalizing the dominant signal components.
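The QSVD is commonly computed through the complex adjoint of the quaternion matrix, whose ordinary SVD carries each quaternion singular value with multiplicity two. Below is a minimal numpy sketch of this route and of evaluating the two norms from the resulting spectrum; the function names are illustrative, not taken from the cited papers.

```python
import numpy as np

def quaternion_adjoint(Q0, Q1, Q2, Q3):
    """Complex adjoint of Q = Q0 + Q1*i + Q2*j + Q3*k.

    Writing Q = A + B*j with A = Q0 + Q1*i and B = Q2 + Q3*i, the
    adjoint [[A, B], [-conj(B), conj(A)]] is a 2M x 2N complex matrix
    whose singular values are those of Q, each repeated twice.
    """
    A, B = Q0 + 1j * Q1, Q2 + 1j * Q3
    return np.block([[A, B], [-B.conj(), A.conj()]])

def qsvd_singular_values(Q0, Q1, Q2, Q3):
    """Quaternion singular values via the complex adjoint's SVD."""
    s = np.linalg.svd(quaternion_adjoint(Q0, Q1, Q2, Q3), compute_uv=False)
    return s[::2]  # drop the duplicate of each pair

# Evaluate the QNN and the Frobenius norm from the spectrum.
rng = np.random.default_rng(0)
Q = [rng.standard_normal((8, 6)) for _ in range(4)]  # channels Q0..Q3
sigma = qsvd_singular_values(*Q)
qnn = sigma.sum()                    # ||Q||_* = sum_i sigma_i
qfro = np.sqrt((sigma ** 2).sum())   # ||Q||_F = (sum_i sigma_i^2)^(1/2)
```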
2. Nonconvex Surrogates: QNOF, QNMF, Truncated, and Weighted Norms
To bridge the gap between convex relaxations and true rank, several nonconvex surrogates have emerged:
- Quaternion Nuclear Norm Over Frobenius (QNOF):
$$\|\mathbf{Q}\|_{N/F} = \frac{\|\mathbf{Q}\|_*}{\|\mathbf{Q}\|_F} = \frac{\|\boldsymbol{\sigma}(\mathbf{Q})\|_1}{\|\boldsymbol{\sigma}(\mathbf{Q})\|_2}.$$
This is a scale-invariant, parameter-free, nonconvex proxy that tightly approximates the rank (since $\|\boldsymbol{\sigma}\|_1/\|\boldsymbol{\sigma}\|_2 \leq \sqrt{\mathrm{rank}(\mathbf{Q})}$, with equality when the nonzero singular values are equal). QNOF satisfies $1 \leq \|\mathbf{Q}\|_{N/F} \leq \sqrt{\mathrm{rank}(\mathbf{Q})}$ and exhibits both scaling and unitary invariance. Its optimization reduces to an $\ell_1/\ell_2$ penalty on the singular values (Guo et al., 30 Apr 2025).
- Quaternion Nuclear Norm Minus Frobenius Norm (QNMF):
$$\|\mathbf{Q}\|_* - \alpha\,\|\mathbf{Q}\|_F, \qquad \alpha > 0.$$
This hybrid penalty sharply suppresses small singular values while reducing shrinkage of the dominant ones. Its closed-form proximal mapping yields exact global solutions in the denoising setting (Guo et al., 2024).
- Truncated and Weighted Norms: Truncated versions ($\|\mathbf{Q}\|_{r,*} = \sum_{i>r}\sigma_i(\mathbf{Q})$) restrict shrinkage to the tail of the singular spectrum (Yang et al., 2021, Yang et al., 2022). Weighted nuclear norms (e.g., QWNNM) and weighted Schatten $p$-norms (QWSNM) introduce elementwise weights or nonconvex exponents, further sharpening the rank approximation (Zhang et al., 2023, Miao et al., 2022).
- Bilinear and Factor Norms: Bilinear-factor surrogates such as the quaternion double nuclear norm (Q-DNN) and the quaternion Frobenius/nuclear norm (Q-FNN) offer efficient nonconvex Schatten-$p$ proxies, directly optimizing the factor representations to avoid large-scale QSVD (Miao et al., 2020). All of the surrogates above act on the singular-value vector; see the sketch after this list.
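For intuition, the surrogates above are simple functions of the singular-value vector. A minimal sketch, assuming sigma is obtained as in the Section 1 snippet (names and weight/exponent choices are illustrative):

```python
import numpy as np

def qnn(sigma):
    """Convex nuclear norm: sum of singular values."""
    return sigma.sum()

def qnof(sigma):
    """Nuclear-over-Frobenius ratio: scale-invariant, lies in [1, sqrt(rank)]."""
    return sigma.sum() / np.linalg.norm(sigma)

def qnmf(sigma, alpha=1.0):
    """Nuclear-minus-Frobenius: suppresses small singular values more sharply."""
    return sigma.sum() - alpha * np.linalg.norm(sigma)

def truncated_qnn(sigma, r):
    """Truncated nuclear norm: penalize only the tail beyond the top r."""
    return np.sort(sigma)[::-1][r:].sum()

def weighted_schatten(sigma, w, p=1.0):
    """Weighted Schatten-p penalty: sum_i w_i * sigma_i**p."""
    return (w * sigma ** p).sum()
```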
3. Optimization Algorithms: Proximal and ADMM Frameworks
Most quaternion nuclear norm regularized models, including nonconvex extensions, are solved via ADMM frameworks. Key algorithmic features include:
- Proximal Operator via QSVD: The central step is computation of the QSVD, followed by shrinking the singular values via the corresponding thresholding or nonconvex operation (a minimal QNN instance is sketched after this list). For QNOF and QNMF, the optimal singular-value update solves either a root-finding problem or a closed-form piecewise system (Guo et al., 30 Apr 2025, Guo et al., 2024).
- Auxiliary Splits and Augmented Lagrangian: To decouple complex nonconvex objectives, models use auxiliary variables (splits) and Lagrange multipliers, alternating updates for primal and dual variables and increasing penalty parameters to guarantee convergence (Guo et al., 30 Apr 2025, Huang et al., 2024, Yang et al., 2021).
- Weighted and Structured Steps: Weighted and truncated methods build their weights from observed data patterns and from row/entry structure, accelerating convergence under missing or irregular observation patterns (Yang et al., 2021, Miao et al., 2022).
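As a concrete instance of the proximal step, the convex QNN case reduces to soft-thresholding the quaternion singular values. A minimal sketch via the complex adjoint (illustrative names; nonconvex variants replace the max(s - tau, 0) rule with their own shrinkage):

```python
import numpy as np

def qsvt(Q0, Q1, Q2, Q3, tau):
    """Quaternion singular value thresholding: prox of tau * ||.||_*."""
    M, N = Q0.shape
    A, B = Q0 + 1j * Q1, Q2 + 1j * Q3
    chi = np.block([[A, B], [-B.conj(), A.conj()]])  # complex adjoint
    U, s, Vh = np.linalg.svd(chi, full_matrices=False)
    # Soft-threshold the spectrum; spectral functions preserve the
    # adjoint structure, so the result maps back to a quaternion matrix.
    chi_hat = (U * np.maximum(s - tau, 0.0)) @ Vh
    A_hat, B_hat = chi_hat[:M, :N], chi_hat[:M, N:]
    return A_hat.real, A_hat.imag, B_hat.real, B_hat.imag
```

Inside a completion ADMM, this operator supplies the low-rank primal update; the observed entries are re-imposed by a separate projection step.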
The table below summarizes the main quaternion regularizers and their optimization properties; a sketch of the generalized soft-thresholding step used by the weighted variants follows the table:
| Regularizer | Expression | Optimization per iteration |
|---|---|---|
| QNN | $\sum_i \sigma_i(\mathbf{Q})$ | QSVD + soft-thresholding (Huang et al., 2024) |
| QNOF | $\lVert\mathbf{Q}\rVert_* / \lVert\mathbf{Q}\rVert_F$ | QSVD + $\ell_1/\ell_2$ shrinkage (Guo et al., 30 Apr 2025) |
| QNMF | $\lVert\mathbf{Q}\rVert_* - \alpha\lVert\mathbf{Q}\rVert_F$ | QSVD + piecewise shrinkage (Guo et al., 2024) |
| QTNN | $\sum_{i>r} \sigma_i(\mathbf{Q})$ | QSVD + low-rank factorization (Yang et al., 2021) |
| QWNNM/QWSNM | $\sum_i w_i \sigma_i(\mathbf{Q})$ / $\sum_i w_i \sigma_i^p(\mathbf{Q})$ | QSVD + generalized soft-thresholding (Zhang et al., 2023) |
| Q-DNN/Q-FNN | bilinear factorization surrogates | small-matrix QSVD (Miao et al., 2020) |
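The generalized soft-thresholding named in the QWNNM/QWSNM row is typically the GST scheme of Zuo et al., applied per singular value with its own weight; here is a sketch under that assumption (the cited solvers may differ in details):

```python
import numpy as np

def gst(s, w, p, iters=10):
    """Generalized soft-thresholding: argmin_x 0.5*(x - s)**2 + w * x**p.

    Below the threshold tau the minimizer is 0; above it, fixed-point
    iteration on the stationarity condition x = s - w*p*x**(p-1).
    For p = 1 this reduces to the usual soft threshold at w.
    """
    tau = (2.0 * w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + w * p * (2.0 * w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if s <= tau:
        return 0.0
    x = s
    for _ in range(iters):
        x = s - w * p * x ** (p - 1.0)
    return x

# Per-singular-value weights, e.g. reweighting by inverse magnitude.
sigma = np.array([5.0, 3.0, 0.8, 0.2])
w = 1.0 / (sigma + 0.1)
shrunk = np.array([gst(si, wi, p=0.7) for si, wi in zip(sigma, w)])
```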
4. Applications in Imaging and Machine Learning
Quaternion nuclear norm regularization is foundational in modern color image and video restoration, classification, and completion:
- Matrix Completion and Inpainting: QNOF and truncated/weighted methods deliver superior PSNR and SSIM in color-image inpainting, especially under high missing rates or impulse noise, leveraging stronger rank approximation than convex nuclear norm models (Guo et al., 30 Apr 2025, Yang et al., 2022, Yang et al., 2021, Huang et al., 2024).
- Robust Principal Component Analysis: Joint QNOF/$\ell_1$ models achieve state-of-the-art denoising and impurity removal, robust to heavy-tailed and sparse errors (Guo et al., 30 Apr 2025, Guo et al., 2024); an ADMM sketch of the convex analogue closes this section.
- Classification: Low-rank support quaternion matrix machines (LSQMM) integrate QNN regularization with the hinge loss, yielding noise-robust classification that outperforms matrix-based SVMs (Chen et al., 9 Dec 2025).
- Color Video and Tensor Completion: Tensorial extensions (QTNN, QWNN, log-nuclear norms) promote both global low-rankness and sparse local detail in 3D/color video tensors, leveraging quaternion transforms (e.g., QTDCT) (Yang et al., 2022, Miao et al., 2022).
Empirically, QNOF recovers low-rank quaternion matrices to low relative error once sufficiently many entries are observed, and it delivers the best PSNR/SSIM on standard color-imaging benchmarks at missing rates of 50% and above (Guo et al., 30 Apr 2025). QNMF ranks first on average PSNR/SSIM in Gaussian/real-noise denoising, inpainting, and deblurring (Guo et al., 2024).
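To make the robust PCA pipeline concrete, here is a hedged ADMM sketch of the convex analogue (QNN on the low-rank part, a quaternion $\ell_1$ norm on the sparse part); it reuses qsvt() from the Section 3 snippet, and the QNOF/QNMF models of the cited papers swap in their nonconvex shrinkage:

```python
import numpy as np

def qabs(Q0, Q1, Q2, Q3):
    """Entrywise quaternion modulus."""
    return np.sqrt(Q0**2 + Q1**2 + Q2**2 + Q3**2)

def quaternion_rpca(X, lam=None, mu=1.0, iters=100):
    """Scaled-form ADMM for  min ||L||_* + lam*||S||_1  s.t.  X = L + S.

    X, L, S are quaternion matrices stored as 4-tuples/lists of real
    channel arrays; qsvt() is the thresholding operator from Section 3.
    """
    M, N = X[0].shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(M, N))
    L = [np.zeros((M, N)) for _ in range(4)]
    S = [np.zeros((M, N)) for _ in range(4)]
    Y = [np.zeros((M, N)) for _ in range(4)]  # scaled dual variable
    for _ in range(iters):
        # L-update: quaternion singular value thresholding.
        R = [x - s + y for x, s, y in zip(X, S, Y)]
        L = list(qsvt(*R, tau=1.0 / mu))
        # S-update: shrink the quaternion modulus (group soft-threshold).
        R = [x - l + y for x, l, y in zip(X, L, Y)]
        mag = np.maximum(qabs(*R), 1e-12)
        scale = np.maximum(1.0 - (lam / mu) / mag, 0.0)
        S = [scale * r for r in R]
        # Dual ascent on the splitting constraint X = L + S.
        Y = [y + x - l - s for y, x, l, s in zip(Y, X, L, S)]
    return L, S
```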
5. Theoretical Guarantees and Practical Behavior
- Convergence: Convex models (QNN) admit global convergence via standard ADMM arguments; nonconvex formulations (QNOF, QNMF, QWNNM/QWSNM) show fixed-point or subsequence convergence provided the penalty parameters increase and the primal variables remain bounded (Guo et al., 30 Apr 2025, Zhang et al., 2023, Yang et al., 2021).
- Rank Approximation Fidelity: Nonconvex surrogates (QNOF, QNMF, truncated/weighted norms, MCP/logarithmic norms) provide tighter and less biased proxies to the true rank function, evidenced both by theoretical approximation bounds and practical gains in reconstruction accuracy (Guo et al., 30 Apr 2025, Guo et al., 2024, Huang et al., 2024).
- Robustness and Efficiency: QNOF-based completion converges 2–3× faster than competing nonconvex methods, and LSQMM classification accuracy remains high even at substantial noise ratios (Guo et al., 30 Apr 2025, Chen et al., 9 Dec 2025).
6. Extensions: Weighted, Sparse, and Transformed Domains
Several extensions further increase modeling flexibility and efficacy:
- Weighted Norms: QWNN/QWSNM in tensor settings differentially penalize singular values of unfoldings, enabling more flexible low-rank structure capture. Weights can be data- or spectrum-dependent (Zhang et al., 2023, Miao et al., 2022).
- Sparse Regularization: Integrating sparse penalties (e.g., a quaternion $\ell_1$ norm in the QDCT/QTDCT domain) enables simultaneous low-rank and sparse recovery, which is critical for fine-detail restoration and denoising (Yang et al., 2022).
- Tensorial Generalizations: Quaternion nuclear norm regularizers generalize naturally to higher-order tensors (QTNN, QTLN, QTT-rank minimization), enabling globally and locally structured recovery in color video (Yang et al., 2022, Miao et al., 2022). A generic joint low-rank plus transform-sparse template is displayed after this list.
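As a generic template for these extensions (the cited works differ in the surrogate, the transform, and the tensor unfolding used), the joint low-rank plus transform-sparse completion model reads

$$\min_{\mathbf{L}}\ \|\mathbf{L}\|_{*} + \lambda \,\|\mathcal{T}(\mathbf{L})\|_{1} \quad \text{s.t.} \quad P_{\Omega}(\mathbf{L}) = P_{\Omega}(\mathbf{X}),$$

where $\mathcal{T}$ is a sparsifying quaternion transform (e.g., QDCT/QTDCT) and $P_{\Omega}$ projects onto the observed entries.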
7. Comparative Effectiveness and Principal Results
Extensive numerical results affirm the effectiveness of quaternion nuclear norm regularization and its extensions. QNOF consistently achieves the best or near-best performance in color-image matrix completion, robust PCA, and denoising under both standard and adverse corruptions (Guo et al., 30 Apr 2025). QNMF provides state-of-the-art results across denoising, deblurring, and inpainting benchmarks (Guo et al., 2024). Weighted and truncated variants offer further accuracy gains and computational benefits, especially as missing-data rates increase or under more challenging noise regimes (Yang et al., 2021, Miao et al., 2022, Yang et al., 2022).
In summary, quaternion nuclear norm regularization and its nonconvex, weighted, and tensorial extensions represent a systematic, theoretically grounded approach to low-rank modeling in hypercomplex domains where cross-channel coupling is essential. They yield scalable, convergent algorithms with demonstrably superior empirical performance in complex, high-dimensional imaging and machine learning tasks.