Fast Hermitian Diagonalization with Nearly Optimal Precision

Published 19 Aug 2024 in math.NA and cs.NA | (2408.09880v1)

Abstract: Algorithms for numerical tasks in finite precision simultaneously seek to minimize the number of floating point operations performed and the number of bits of precision required by each floating point operation. This paper presents an algorithm for Hermitian diagonalization requiring only $\lg(1/\varepsilon)+O(\log(n)+\log\log(1/\varepsilon))$ bits of precision, where $n$ is the size of the input matrix and $\varepsilon$ is the target error. Furthermore, it runs in near matrix multiplication time. In the general setting, the first complete analysis of the stability of a near matrix multiplication time algorithm for diagonalization is that of Banks et al. [BGVKS20]. They exhibit an algorithm for diagonalizing an arbitrary matrix up to $\varepsilon$ backward error using only $O(\log^4(n/\varepsilon)\log(n))$ bits of precision. This work focuses on the Hermitian setting, where we determine a dramatically improved bound on the number of bits needed. In particular, the result is close to providing a practical bound. The exact bit count depends on the specific implementation of matrix multiplication and QR decomposition one wishes to use, but if one uses suitable $O(n^3)$-time implementations, then for $\varepsilon=10^{-15}$, $n=4000$, we show 92 bits of precision suffice (and 59 are necessary). By comparison, the analysis of [BGVKS20] with the same parameters does not even show that 682,916,525,000 bits suffice.
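As a rough sanity check on the stated bound, the terms of $\lg(1/\varepsilon)+O(\log(n)+\log\log(1/\varepsilon))$ can be evaluated at the quoted parameters. This is only an illustrative sketch: the constants hidden by the $O(\cdot)$ are implementation-dependent and not given in the abstract, so unit constants are assumed here.

```python
import math

# Parameters quoted in the abstract.
eps = 1e-15  # target error
n = 4000     # matrix dimension

# Dominant term of the precision bound: lg(1/eps), about 49.8 bits.
leading = math.log2(1 / eps)

# Lower-order terms log(n) + log log(1/eps), with unit constants
# assumed in place of the unspecified O(.) constants.
lower_order = math.log2(n) + math.log2(math.log2(1 / eps))

print(f"lg(1/eps)              ~ {leading:.1f} bits")
print(f"log n + log log(1/eps) ~ {lower_order:.1f} bits")
```

With unit constants this comes to roughly 67 bits, consistent in order of magnitude with the 59 necessary and 92 sufficient bits reported for concrete $O(n^3)$-time implementations.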
