- The paper introduces a rank-adaptive HOOI algorithm that dynamically refines tensor ranks based on a prescribed error threshold.
- It employs constrained least squares and SVD-based updates to ensure monotonic convergence and local optimality during iterations.
- Empirical evaluations on synthetic data and MNIST demonstrate improved compression rates and reduced approximation errors compared to conventional methods.
A Rank-Adaptive Higher-Order Orthogonal Iteration Algorithm for Truncated Tucker Decomposition
Introduction
Higher-order tensor decomposition, and the Tucker decomposition in particular, has attracted substantial attention because it generalizes the classical singular value decomposition to higher-order tensors. In practice the method is usually applied in truncated form, which yields a low multilinear-rank approximation determined by a prescribed truncation parameter. Choosing this parameter a priori is difficult, however, and a poor choice leads to suboptimal truncations that degrade performance. The paper presents a rank-adaptive higher-order orthogonal iteration (HOOI) algorithm that computes truncated Tucker decompositions subject to a specified approximation-error threshold, with guarantees of monotonic convergence and local optimality. Unlike traditional algorithms, the method adjusts the multilinear rank dynamically during the iterative decomposition process.
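To make the object concrete, here is a minimal numpy sketch of a Tucker representation: a small core tensor multiplied along each mode by a factor matrix. All names and shapes here are illustrative, not taken from the paper.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode."""
    # Move the target mode to the front, multiply, then move it back.
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)
    t = t.reshape((matrix.shape[0],) + shape[1:])
    return np.moveaxis(t, 0, mode)

def tucker_reconstruct(core, factors):
    """Rebuild the full tensor from a Tucker core and its factor matrices."""
    out = core
    for mode, U in enumerate(factors):
        out = mode_n_product(out, U, mode)
    return out

# A 5x6x4 tensor with exact multilinear rank (2, 3, 2).
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 3, 2))
factors = [rng.standard_normal((5, 2)),
           rng.standard_normal((6, 3)),
           rng.standard_normal((4, 2))]
X = tucker_reconstruct(core, factors)
print(X.shape)  # (5, 6, 4)
```

Truncated Tucker decomposition runs this construction in reverse: given a full tensor, it seeks a small core and factor matrices whose reconstruction stays close to the original.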
Algorithm Overview
The proposed rank-adaptive HOOI algorithm recalibrates the truncation of each tensor mode iteratively so that a prescribed error constraint is met. The algorithm starts from an initial guess, derived either from a truncated HOSVD (t-HOSVD) or from a randomized approach, and then alternately refines the factor matrices and the core tensor. The key step is a constrained least squares problem solved within each iteration, through which the truncation is adjusted so that the approximation error remains within the prescribed tolerance.
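The alternating sweep at the heart of HOOI can be sketched as follows. This is a minimal fixed-rank version in numpy; the paper's contribution is to additionally adapt `ranks` within each sweep, which this sketch deliberately omits.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: the chosen mode indexes rows, the rest flatten to columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_dot(tensor, matrix, mode):
    """Multiply a tensor along `mode` by `matrix` (rows of `matrix` index the new mode)."""
    return np.moveaxis(np.tensordot(matrix, np.moveaxis(tensor, mode, 0), axes=1), 0, mode)

def hooi(X, ranks, n_sweeps=10):
    """Fixed-rank HOOI sweep; the rank-adaptive variant also shrinks `ranks` per sweep."""
    # Initialize with truncated HOSVD: leading left singular vectors of each unfolding.
    factors = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    for _ in range(n_sweeps):
        for n in range(X.ndim):
            # Project X onto the current subspaces of every mode except n ...
            Y = X
            for m in range(X.ndim):
                if m != n:
                    Y = mode_dot(Y, factors[m].T, m)
            # ... then refresh factor n from the leading left singular vectors.
            factors[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
    # Core tensor: X projected onto all factor subspaces.
    core = X
    for m in range(X.ndim):
        core = mode_dot(core, factors[m].T, m)
    return core, factors
```

For a tensor of exact multilinear rank equal to `ranks`, this loop recovers the tensor up to floating-point error; for general tensors it monotonically decreases the approximation error over sweeps.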
Algorithmically, the process evaluates the singular values of intermediate matricized tensor products and adjusts the rank to exclude negligible contributions while maintaining the error bound. A significant feature is the rank-reduction strategy driven by the paper's truncation-selection inequality, which ensures the adaptation is locally optimal at each iteration.
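The flavor of such a criterion can be illustrated as follows: given the singular values of an intermediate matricization and a squared-error budget, keep the fewest leading values whose discarded tail energy stays within the budget. This only shows the mechanism; the paper's exact inequality may differ.

```python
import numpy as np

def choose_rank(singular_values, sq_error_budget):
    """Smallest rank whose discarded tail energy fits the squared-error budget.

    Illustrative only: sketches the kind of singular-value criterion used
    for truncation selection, not the paper's exact inequality.
    """
    sv = np.asarray(singular_values, dtype=float)
    # tail[r] = sum of sigma_i^2 for i >= r (energy dropped if we keep rank r).
    tail = np.cumsum(sv[::-1] ** 2)[::-1]
    for r in range(len(sv) + 1):
        dropped = tail[r] if r < len(sv) else 0.0
        if dropped <= sq_error_budget:
            return r
    return len(sv)

sigma = np.array([10.0, 5.0, 1.0, 0.1])
# Dropping the last two values loses 1.0^2 + 0.1^2 = 1.01 <= 2.0, so rank 2 suffices.
print(choose_rank(sigma, sq_error_budget=2.0))  # 2
```

Because tail energies are computed once, selecting the rank costs only O(k) beyond the SVD itself, where k is the number of singular values.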
Implementation Details
Each HOOI step forms the mode-n matricization of the tensor and applies a truncated singular value decomposition (SVD) to update the factor matrix for that mode. The initialization phase, which strongly influences convergence speed and accuracy, can use either a sequentially truncated HOSVD basis or a set of random orthogonal matrices. Once initialized, rank adaptation proceeds by comparing residual Frobenius norms against the error threshold, reducing the rank until any further reduction would violate the constraint.
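The two initializations mentioned above can be sketched with a hypothetical helper (the function name and structure are illustrative, not the paper's code): the sequentially truncated HOSVD shrinks the working tensor mode by mode, while the randomized alternative draws orthonormal columns per mode.

```python
import numpy as np

def init_factors(X, ranks, method="sthosvd", seed=0):
    """Sketch of two HOOI initializations: st-HOSVD or random orthonormal factors."""
    rng = np.random.default_rng(seed)
    if method == "random":
        # QR of a Gaussian matrix gives orthonormal columns for each mode.
        return [np.linalg.qr(rng.standard_normal((X.shape[n], r)))[0]
                for n, r in enumerate(ranks)]
    # Sequentially truncated HOSVD: truncate each mode in turn,
    # shrinking the working tensor before processing later modes.
    factors, Y = [], X
    for n, r in enumerate(ranks):
        Yn = np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)
        U = np.linalg.svd(Yn, full_matrices=False)[0][:, :r]
        factors.append(U)
        Y = np.moveaxis(np.tensordot(U.T, np.moveaxis(Y, n, 0), axes=1), 0, n)
    return factors
```

The sequential variant is typically cheaper than a plain truncated HOSVD because each later SVD operates on an already-compressed tensor, which also tends to give the HOOI sweep a better starting point than random factors.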
The algorithm's complexity is dominated by the SVD computations performed on each mode-n matricization during rank adaptation. Practical implementations should consider efficiency enhancements such as parallel computing architectures or optimized numerical libraries to handle large-scale tensors effectively.
Experimental Results
Empirical evaluations highlight the advantages of the rank-adaptive HOOI across several applications: reconstruction of synthetic noisy low-rank tensors, compression of a regularized Coulomb kernel, and classification on the MNIST dataset. In all scenarios it achieves better rank efficiency and lower approximation error than classical and greedy adaptation strategies. Notably, the reported compression rates and computational overhead indicate that the dynamic adaptation incurs little extra processing time, making it competitive for real-world tensor applications.
Theoretical and Practical Implications
Theoretically, the method contributes a robust procedure for tensor decompositions where prior rank information is unknown or difficult to estimate. Its adaptability ensures computational resources are optimally utilized, potentially opening new avenues in tensor-based machine learning models and scientific computing. Practically, applications in compression and data classification underscore its utility, particularly where dimensionality reduction and accuracy are paramount, such as in image processing, signal recovery, and data analysis.
Conclusion and Future Directions
The study presents a significant enhancement to higher-order tensor decomposition: a rank-adaptive HOOI algorithm with rigorous theoretical backing and substantial empirical support. Future work may analyze its convergence properties more deeply, extend the adaptation strategy to broader tensor factorization frameworks, and integrate it with parallel processing techniques on high-performance computing infrastructures.