
Krylov Graph Representation

Updated 14 January 2026
  • Krylov graph representation is a method that maps the Lanczos basis vectors from the Krylov subspace onto graph nodes with edges weighted by recurrence coefficients.
  • It unifies spectral theory, numerical linear algebra, and complex systems by enabling precise approximations of matrix functions, diffusion processes, and graph kernels.
  • Adaptive and block Krylov methods extend these representations for efficient graph learning, offering scalable polynomial filter design and analysis of operator complexity in quantum systems.

A Krylov graph representation is a structured mapping of the Krylov subspace associated with a matrix function or time-propagation operator onto a graph whose nodes correspond to Lanczos (Krylov) basis vectors, and whose edge and node weights encapsulate the structure of operator growth, energy diffusion, or filter response as dictated by the underlying generator. Krylov graph representations unify fundamental concepts in spectral theory, complex systems, numerical linear algebra, and learning on graphs, providing a geometric and combinatorial foundation for efficient algorithmic approximations, analysis of complexity, and embedding of non-Euclidean domains.

1. Definition and Mathematical Foundations

The fundamental object in a Krylov graph representation is the Krylov subspace $\mathcal{K}_m(A, b) = \mathrm{span}\{b,\, Ab,\, A^2 b,\, \dots,\, A^{m-1} b\}$, where $A$ is a prescribed matrix operator (commonly the graph Laplacian, adjacency matrix, or a Liouvillian) and $b$ is an initial reference vector or feature state. Each Lanczos step generates an orthonormal basis $\{v_1, \ldots, v_m\}$ together with tridiagonal recurrence coefficients (the Lanczos coefficients). In the Krylov graph, each basis vector (e.g., $|K_n\rangle$) is assigned to a graph node, edges correspond to actions of $A$ (or its tridiagonal form), and edge weights are the associated recurrence coefficients.
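The single-chain construction can be sketched directly from this definition. The following NumPy sketch (function name and return format are illustrative, assuming a real symmetric operator) runs the Lanczos recurrence and emits the Krylov-graph data: basis vectors as nodes, diagonal coefficients as node weights, and off-diagonal coefficients as edge weights.

```python
import numpy as np

def lanczos_graph(A, b, m):
    """Run m Lanczos steps on symmetric A and return the Krylov-graph data:
    basis vectors V (one column per node), diagonal coefficients a_n
    (node weights), and nearest-neighbor edges weighted by b_{n+1}."""
    n = len(b)
    V = np.zeros((n, m))
    a = np.zeros(m)          # diagonal recurrence coefficients
    bcoef = np.zeros(m - 1)  # off-diagonal coefficients = edge weights
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        if j > 0:
            w -= bcoef[j - 1] * V[:, j - 1]   # three-term recurrence
        if j < m - 1:
            bcoef[j] = np.linalg.norm(w)
            if bcoef[j] == 0:                  # invariant subspace reached
                break
            V[:, j + 1] = w / bcoef[j]
    # Chain graph: node n is linked to node n+1 with weight bcoef[n].
    edges = [(j, j + 1, bcoef[j]) for j in range(m - 1)]
    return V, a, edges
```

By construction, $V_m^\top A V_m$ is the tridiagonal matrix whose diagonal holds the node weights and whose off-diagonal holds the edge weights.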

For tensor-product settings (e.g., composite quantum systems), the higher-dimensional Krylov graph is defined as the Cartesian product of the single-subsystem graphs, yielding a lattice structure in which propagation aligns with combinatorial growth across diagonal shells (Murugan et al., 13 Jan 2026).

2. Graph-Theoretic Structure of Krylov Subspaces

Krylov subspace methods naturally induce weighted graph structures:

  • Nodes: Each basis element, indexed by $n$, corresponds to a distinct node.
  • Edges: Only nearest-neighbor ($n \leftrightarrow n \pm 1$) connections exist for tridiagonal recurrences, with weights equal to the Lanczos coefficients $b_{n+1}$.
  • Distances: Edge lengths may be assigned from a metric derived from circuit or operational complexity (e.g., Nielsen cost-function on SU(2)-orbits, leading to geodesic distances on a sphere) (Lv et al., 2023).

For time-dependent or piecewise dynamics, multilayer (“stacked-chain”) Krylov graphs are constructed, with zero-weight edges for transitions at quench points.

In the tensor-product context, the Krylov graph becomes a $d$-dimensional lattice (for $d$ subsystems), where the position $(n_1, \dots, n_d)$ indexes the product of subsystem basis states. Diagonal "shells" with total index $s = \sum_k n_k$ capture the combinatorics of path counting relevant to operator spreading (Murugan et al., 13 Jan 2026).
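The shell combinatorics can be made concrete: on an (unbounded) product lattice, the number of monotone nearest-neighbor paths from the origin to a given node is a multinomial coefficient, and summing over a shell gives $d^s$. A small illustrative sketch (function names are not from the source):

```python
from math import factorial, prod

def shell_path_count(position):
    """Monotone nearest-neighbor paths from the origin to lattice node
    (n_1, ..., n_d): the multinomial coefficient s! / (n_1! ... n_d!),
    with s = n_1 + ... + n_d."""
    s = sum(position)
    return factorial(s) // prod(factorial(n) for n in position)

def shell_multiplicity(s, d):
    """Total monotone paths of length s ending anywhere on shell s in a
    d-dimensional product lattice: each of the s steps picks one of d
    directions, giving d**s in total."""
    return d ** s
```

For $d = 2$ this reduces to the binomial path multiplicities mentioned above.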

3. Krylov Representations for Graph-Based Operators and Fractional Diffusion

Krylov representations are the canonical setting for iterative approximation of matrix functions on graphs. Functions of the graph Laplacian, such as those arising in diffusion ($f(L) = e^{-tL}$), fractional diffusion ($f(L) = e^{-tL^\alpha}$), or graph kernels ($K = \varphi(L)$), can be approximated by the action of these functions on Krylov-projected bases.
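For the symmetric case, the standard projection approximates $f(L)b \approx \|b\|\, V_m f(T_m) e_1$, where $T_m$ is the Lanczos tridiagonal matrix. A minimal NumPy sketch for the heat kernel (function name is illustrative; the breakdown tolerance is an implementation choice):

```python
import numpy as np

def expm_laplacian_krylov(L, b, t, m):
    """Approximate e^{-tL} b by Lanczos projection:
    e^{-tL} b ~= ||b|| * V_m exp(-t T_m) e_1, with T_m = V_m^T L V_m the
    tridiagonal recurrence matrix (assumes symmetric L)."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    V[:, 0] = b / beta
    k = m
    for j in range(m):
        w = L @ V[:, j]
        T[j, j] = V[:, j] @ w
        w -= T[j, j] * V[:, j]
        if j > 0:
            w -= T[j, j - 1] * V[:, j - 1]
        if j < m - 1:
            nrm = np.linalg.norm(w)
            if nrm < 1e-12 * beta:       # invariant subspace: stop early
                k = j + 1
                break
            T[j + 1, j] = T[j, j + 1] = nrm
            V[:, j + 1] = w / nrm
    V, T = V[:, :k], T[:k, :k]
    evals, U = np.linalg.eigh(T)          # small dense eigenproblem
    return beta * V @ (U @ (np.exp(-t * evals) * U[0, :]))
```

Only matrix-vector products with $L$ are needed; the exponential is evaluated on the small tridiagonal matrix.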

In fractional diffusion on directed graphs, the Laplacian $L$ is singular; Krylov subspaces associated with $L^\alpha$ must therefore be constructed using desingularization, either via the rank-one shift

$$\widetilde L = L + \theta\, \mathbf{1}\, z^\top$$

or via projection onto $\mathbf{1}^\perp$. This yields a nonsingular surrogate for which rational Krylov subspace methods converge rapidly even for non-analytic functions $f$ with branch cuts, as in the case of fractional powers of Laplacians. These rational Krylov bases provide a low-dimensional, graph-adaptive feature embedding under the fractional operator (Benzi et al., 2020).
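The effect of the rank-one shift can be checked on a toy example. Below, a directed 3-cycle gives a singular out-degree Laplacian ($L\mathbf{1} = 0$), and the shift restores full rank; the particular choices of $\theta$ and $z$ are illustrative, not prescribed by the source.

```python
import numpy as np

# Toy directed 3-cycle; out-degree Laplacian L = D_out - A has L @ 1 = 0.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
L = np.diag(A.sum(axis=1)) - A
ones = np.ones(3)

# Rank-one desingularizing shift L~ = L + theta * 1 z^T.
# Illustrative parameters: theta = 1, z normalized so z^T 1 = 1,
# which makes 1 an eigenvector of L~ with eigenvalue theta.
theta = 1.0
z = ones / 3.0
L_shift = L + theta * np.outer(ones, z)
```

Rational Krylov iterations can then solve shifted systems with the nonsingular surrogate $\widetilde L$ instead of $L$.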

4. Adaptive and Block Krylov Graph Approximations for Graph Learning

Graph neural networks and kernel methods on graphs exploit Krylov subspaces to bypass eigendecomposition, leveraging block Krylov methods and polynomial filter design (Erb, 2023; Huang et al., 2024). Any degree-$K$ polynomial graph filter can be expressed as

$$h(P)x = \sum_{k=0}^{K} \theta_k P^k x \in \mathcal{K}_{K+1}(P, x)$$

for an appropriately chosen propagation matrix $P$. Thus, Krylov subspaces encode the expressive space of linear spectral filters and unify diverse polynomial bases (Chebyshev, Bernstein, Jacobi, PageRank).
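The filter above never requires forming $P^k$ explicitly: $K$ matrix-vector products walk through the Krylov vectors $x, Px, P^2x, \dots$. A minimal sketch (names are illustrative):

```python
import numpy as np

def poly_filter(P, x, theta):
    """Apply h(P)x = sum_k theta[k] * P^k x using only len(theta)-1
    matvecs, accumulating along the Krylov vectors x, Px, P^2 x, ...
    (no eigendecomposition, no explicit matrix powers)."""
    y = theta[0] * x
    v = x
    for t in theta[1:]:
        v = P @ v          # advance to the next Krylov vector P^k x
        y = y + t * v
    return y
```

Any fixed polynomial basis (Chebyshev, Bernstein, Jacobi, PageRank) corresponds to a particular choice of the monomial coefficients $\theta_k$ after expansion.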

Block Krylov methods further generalize this to multiple starting vectors, effectively constructing a graph of block nodes and block-weighted recurrences. Five principal block Krylov schemes (classical, global, and sequential Lanczos; Chebyshev; squared Chebyshev) are used to balance trade-offs among computational cost, orthogonality, and positive-definiteness preservation in graph-based kernel machines (Erb, 2023). The adaptive Krylov subspace approach, as in AdaptKry, tunes the propagation matrix $P_\tau$ to adjust the spectrum, providing controllable frequency preservation for graphs of varying homophily and heterophily (Huang et al., 2024).

5. Krylov Graphs in Quantum, Statistical, and Complexity Contexts

Krylov graph representations naturally encode operator spreading and Krylov complexity in quantum many-body systems. The single-chain Krylov graph corresponds to the Lanczos chain for an operator evolving under a Hamiltonian or Liouvillian. In composite systems, the product graph forms a higher-dimensional lattice, leading to binomial path multiplicities in diagonal shells, which underpin the superadditivity of Krylov (operator-spread) complexity under tensor products:

$$C_{12}(t) \geq C_1(t) + C_2(t).$$

This geometric broadening is rigorous and is quantified via the excess complexity operator, whose non-negative spectrum encodes the deviation from perfect synchrony in operator growth (Murugan et al., 13 Jan 2026).
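On the single chain, the complexity is the mean position $C(t) = \sum_n n\, |\langle K_n|\psi(t)\rangle|^2$ of the evolving state on the Krylov graph. A small self-contained sketch for a Hermitian generator (the function name is illustrative, and the loop assumes no early Lanczos breakdown for the toy input):

```python
import numpy as np

def krylov_complexity(H, psi0, times):
    """C(t) = sum_n n |<K_n|psi(t)>|^2: build the Lanczos (Krylov) basis
    from (H, psi0), evolve psi0 exactly via the eigendecomposition of the
    Hermitian H, and average the chain position n over the amplitudes."""
    n = len(psi0)
    V = np.zeros((n, n), dtype=complex)
    V[:, 0] = psi0 / np.linalg.norm(psi0)
    a = np.zeros(n)
    b = np.zeros(n)                       # off-diagonal (edge) weights
    for j in range(n):
        w = H @ V[:, j]
        a[j] = np.real(np.vdot(V[:, j], w))
        w = w - a[j] * V[:, j] - (b[j - 1] * V[:, j - 1] if j > 0 else 0)
        if j < n - 1:
            b[j] = np.linalg.norm(w)      # assumes b[j] > 0 (no breakdown)
            V[:, j + 1] = w / b[j]
    evals, U = np.linalg.eigh(H)
    c0 = U.conj().T @ V[:, 0]
    out = []
    for t in times:
        psi_t = U @ (np.exp(-1j * evals * t) * c0)   # exact evolution
        amps = np.abs(V.conj().T @ psi_t) ** 2        # |<K_n|psi(t)>|^2
        out.append(float(np.arange(n) @ amps))
    return out
```

For a single qubit with $H = \sigma_x$ and initial state $|0\rangle$, this reproduces $C(t) = \sin^2 t$, oscillating between the two chain nodes.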

Geometric circuit complexity establishes a unique metric on the Krylov graph, especially when the generator admits a dynamical symmetry (e.g., SU(2) or SU(1,1)), linking the graph structure to the geometry of underlying group manifolds (Lv et al., 2023).

6. Algorithmic Implementation and Computational Properties

Constructing Krylov graph representations requires the following algorithmic steps:

  • A Lanczos (or rational Arnoldi) recurrence generates the basis nodes and edge weights.
  • In polynomial settings, the constructed Krylov graph stores explicit chain or lattice structure; in rational variants, additional shifts or inverses are encoded in edge construction (Benzi et al., 2020).
  • For block Krylov methods, block structures are maintained, and projection matrices are updated to ensure orthogonality and positive definiteness as required by downstream learning or inference tasks (Erb, 2023).
  • For adaptive variants, a tunable matrix parameter (e.g., $\tau$) modulates the spectral properties of the Krylov basis, and multiple subspaces can be merged with no parameter overhead (Huang et al., 2024).

Empirical and theoretical analyses demonstrate that block classical Lanczos delivers the fastest convergence and SPD preservation, while Chebyshev methods offer minimal storage at the cost of larger iteration counts. Rational Krylov methods provide optimal convergence for non-analytic matrix functions, especially in the presence of Laplacian singularities (Benzi et al., 2020).

7. Applications, Limitations, and Extensions

Krylov graph representations underlie algorithms for efficient low-dimensional embedding of graphs under diffusion operators, scalable evaluation of graph kernels for learning, optimal polynomial filter design in GNNs, and operator complexity analysis in quantum systems. Notably, they provide near-optimal approximation to spectral embeddings, precise control of filter frequency properties, and unification of disparate perspectives (spectral, combinatorial, geometric) on graph-based computation.

Typical limitations arise from loss of positive definiteness or symmetry at low Krylov dimensions (addressed by SPD-preserving schemes), slow convergence of polynomial-only subspaces in the presence of singularities, and sensitivity to the spectrum of the underlying propagation matrix, especially for highly heterogeneous graphs. Adaptive methods and rational Krylov variants directly address these pathologies, yielding algorithms with state-of-the-art empirical and theoretical guarantees for complex networks (Erb, 2023; Benzi et al., 2020; Huang et al., 2024).

Overall, the Krylov graph representation provides a principled, algorithmically powerful, and geometrically transparent approach to low-dimensional modeling and complexity analysis across network science, machine learning, and many-body physics.
