
More on low rank approximation of a matrix

Published 10 Jun 2019 in math.NA and cs.NA | arXiv:1906.04929v11

Abstract: We call a matrix algorithm superfast (aka running at sublinear cost) if it uses far fewer flops and memory cells than the input matrix has entries. Such algorithms are highly desirable, or even imperative, in computations for Big Data, which involve immense matrices and quite typically reduce to solving linear least squares problems and/or computing a low rank approximation (LRA) of an input matrix. The known algorithms for these problems are not superfast, but we propose a novel ACA-like superfast randomized iterative refinement of a crude LRA. Like ACA iterations, we output a CUR LRA, which is a particularly memory-efficient form of LRA. Unlike the ACA method, we use sampling probabilities to reduce our task to the recursive solution of a generalized Linear Least Squares Problem (generalized LLSP) and prove monotone convergence to a close LRA with high probability under mild assumptions on the initial LRA. For a crude initial LRA lying reasonably far from optimal, we have consistently observed significant improvement within two or three iterations in our numerical tests.
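The paper's refinement algorithm is not reproduced here, but a CUR LRA of the kind the abstract refers to can be illustrated with a minimal baseline sketch: approximate A as C·U·R, where C and R are uniformly sampled columns and rows of A and U is the pseudoinverse of their intersection. The function name, the uniform sampling, and the oversampling factor below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def cur_lra(A, r, oversample=2):
    """Crude CUR low-rank approximation by uniform random sampling
    (a generic baseline, not the paper's refined algorithm).
    Returns C, U, R with A ~= C @ U @ R."""
    m, n = A.shape
    k = min(r * oversample, m, n)
    cols = rng.choice(n, size=k, replace=False)   # sampled column indices
    rows = rng.choice(m, size=k, replace=False)   # sampled row indices
    C = A[:, cols]                                # sampled columns of A
    R = A[rows, :]                                # sampled rows of A
    U = np.linalg.pinv(A[np.ix_(rows, cols)])     # connecting core matrix
    return C, U, R

# Exact rank-5 test matrix: CUR recovers it whenever the sampled
# intersection submatrix has full rank 5 (almost surely here).
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
C, U, R = cur_lra(A, r=5)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

On an exactly rank-r input the sampled CUR is essentially exact; on noisy or nearly low-rank data such a crude CUR is only a starting point, which is precisely the situation where the paper's iterative refinement is meant to improve it.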

Citations (1)

Authors (2)
