
Krylov Dynamic Mode Decomposition (DMD)

Updated 27 January 2026
  • Krylov DMD is an optimized variant of dynamic mode decomposition that integrates time-delay embedding with Krylov subspace projections to capture oscillatory modes in high-dimensional data.
  • It reduces computational complexity and memory usage by projecting large datasets onto a lower-dimensional subspace before applying SVD and spectral decomposition.
  • Empirical benchmarks demonstrate that Krylov DMD achieves nearly identical modal accuracy to classic DMD while significantly lowering FLOPs and runtime for large-scale systems.

Krylov Dynamic Mode Decomposition (DMD) is an optimized variant of the Dynamic Mode Decomposition framework designed for efficient analysis of high-dimensional, highly oscillatory spatiotemporal datasets. By incorporating both time-delay coordinates (TDC) and Krylov-subspace-inspired projections, this methodology achieves significant computational and memory reductions with negligible loss in modal accuracy. Krylov DMD addresses bottlenecks present in standard DMD by projecting high-dimensional data onto a carefully constructed low-dimensional subspace before matrix factorization and spectral decomposition, enabling robust identification of oscillatory modes in large-scale systems (Murshed et al., 2020).

1. Standard DMD and Computational Challenges

Standard DMD operates on snapshot matrices $X \in \mathbb{R}^{M \times N}$, where $M$ is the spatial dimension and $N$ the number of temporal samples. The snapshot pair is split as $X_1 = [x_1, \dots, x_{N-1}]$ and $X_2 = [x_2, \dots, x_N]$, seeking the best-fit linear operator $A$ such that $X_2 \approx A X_1$. The core algorithm involves the following steps:

  1. Economy-size singular value decomposition (SVD): $X_1 = U \Sigma V^*$, truncated to rank $r$.
  2. Low-dimensional operator construction: $\tilde{S} = U^* X_2 V \Sigma^{-1}$.
  3. Spectral decomposition: $\tilde{S} y_k = \mu_k y_k$, DMD modes $\phi_k = U y_k$, eigenvalues $\omega_k = \ln(\mu_k)/\Delta t$.
  4. State reconstruction: $x_{\text{DMD}}(t) = \Phi\,\mathrm{diag}(e^{\omega t})\,b$, with $b = \Phi^\dagger x_1$.
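
The four steps above can be sketched in a few lines of NumPy. This is a minimal illustration following the notation above, not the paper's implementation:

```python
import numpy as np

def dmd(X1, X2, r, dt=1.0):
    """Standard DMD: fit X2 ~ A X1, return modes, frequencies, amplitudes."""
    # 1. Economy-size SVD of X1, truncated to rank r.
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vh[:r].conj().T
    # 2. Low-dimensional operator: S_tilde = U* X2 V Sigma^{-1}
    #    (dividing by S broadcasts as right-multiplication by diag(1/S)).
    S_tilde = U.conj().T @ X2 @ V / S
    # 3. Spectral decomposition; modes phi_k = U y_k, omega_k = ln(mu_k)/dt.
    mu, Y = np.linalg.eig(S_tilde)
    Phi = U @ Y
    omega = np.log(mu) / dt
    # 4. Amplitudes from the first snapshot: b = Phi^+ x_1.
    b = np.linalg.lstsq(Phi, X1[:, 0], rcond=None)[0]
    return Phi, omega, b

def reconstruct(Phi, omega, b, t):
    # x_DMD(t) = Phi diag(e^{omega t}) b; take .real for real-valued data.
    return Phi @ (np.exp(omega * t) * b)
```

For exactly linear data and full rank $r$, `reconstruct` reproduces the snapshots to machine precision; in practice $r$ is chosen well below $\min(M, N)$.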

However, when $M$ and/or $N$ are large, the SVD step induces computational complexity $O(\min(M,N)^2 \max(M,N))$ and memory usage $O(MN)$. Data from many relevant phenomena are “big” ($M \gg 10^3$) and “highly oscillatory,” requiring dense sampling (large $N$), which amplifies these bottlenecks. Standard DMD may also fail to resolve oscillatory modes in non-Markovian or coarsely sampled systems (Murshed et al., 2020).

2. Time-Delay Coordinates and the Need for Projection

Time-delay coordinates augment each snapshot $x_k$ by stacking $q$ consecutive states, forming highly informative, tall Hankel-like matrices:

$$X_{1,\text{aug}} = \begin{bmatrix} x_1 & \cdots & x_{N-q} \\ x_2 & \cdots & x_{N-q+1} \\ \vdots & & \vdots \\ x_q & \cdots & x_{N-1} \end{bmatrix} \in \mathbb{R}^{qM \times (N-q)}$$

This procedure effectively embeds hidden oscillatory dynamics into a higher-dimensional, approximately linear manifold, enhancing the extractability of underlying coherent structures. However, it increases the row dimension to $qM$, drastically elevating the cost of the subsequent SVD. Mitigation involves the application of projection operators $R \in \mathbb{R}^{a \times qM}$, with $a \ll qM$, resulting in compressed matrices $Z_{1,\text{aug}} = R X_{1,\text{aug}}$, $Z_{2,\text{aug}} = R X_{2,\text{aug}}$. The projection must preserve dominant DMD eigenvalues and modes while reducing both floating-point operations and memory requirements (Murshed et al., 2020).
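
The embedding above can be formed by stacking shifted copies of the data. This is an illustrative sketch of the construction, with the shifted snapshot pair obtained by dropping one column at each end:

```python
import numpy as np

def delay_embed(X, q):
    """Stack q time-shifted copies of the M x N snapshot matrix X,
    producing the qM x (N - q + 1) Hankel-like TDC matrix."""
    M, N = X.shape
    return np.vstack([X[:, i:N - q + 1 + i] for i in range(q)])

# The shifted pair for DMD drops the last / first column:
#   X1_aug = Xa[:, :-1],  X2_aug = Xa[:, 1:],  each qM x (N - q).
```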

3. Krylov-Subspace Projections

The Krylov subspace $\mathcal{K}_m(A, b) = \mathrm{span}\{b, Ab, A^2 b, \dots, A^{m-1} b\}$, for $A \in \mathbb{R}^{n \times n}$ and seed vector $b \in \mathbb{R}^n$, is a classical construct from numerical linear algebra exploited here to define low-dimensional projectors. Its construction uses the Arnoldi process (modified Gram–Schmidt):

  • Inputs: a random matrix $A$ and a seed vector $b$.
  • Iteratively generate an orthonormal basis $V_m = [v_1, \dots, v_m]$ for the subspace.
  • The projection operator is then $R_K = V^* \in \mathbb{R}^{(a+1) \times M}$.

For data $X \in \mathbb{R}^{M \times N}$, $Z = R_K X = V^* X \in \mathbb{R}^{(a+1) \times N}$ yields a compressed representation. The orthogonal projector is $P = VV^*$, but applying $R_K$ on the left suffices for projection. This basis, once constructed, is data-agnostic and can be reused as long as $A$ and $b$ are fixed.
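
A minimal Arnoldi iteration with modified Gram–Schmidt is sketched below; the choice of $A$ and $b$ is left to the user, as in the text:

```python
import numpy as np

def arnoldi_basis(A, b, m):
    """Orthonormal basis V (n x m) of the Krylov subspace K_m(A, b)
    via the Arnoldi process with modified Gram-Schmidt."""
    n = b.size
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalize against v_1 .. v_{j+1}
            w -= (V[:, i] @ w) * V[:, i]
        h = np.linalg.norm(w)
        if h < 1e-12:                   # breakdown: subspace is A-invariant
            return V[:, :j + 1]
        V[:, j + 1] = w / h
    return V

# Projection of data X onto the subspace: Z = V.T @ X  (i.e., R_K = V*).
```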

4. Krylov-Projected TDC-DMD: Algorithmic Workflow

The Krylov-projected time-delay DMD framework proceeds as follows:

  1. TDC Embedding: Form augmented snapshot matrices $X_{1,\text{aug}}$, $X_{2,\text{aug}}$ of size $qM \times (N-q)$.
  2. Projector Construction: Use Arnoldi on $A$, $b$ to obtain $V \in \mathbb{R}^{qM \times (a+1)}$; set $R_K = V^*$.
  3. Data Projection: $Z_1 = R_K X_{1,\text{aug}}$, $Z_2 = R_K X_{2,\text{aug}}$.
  4. SVD: $Z_1 = U_Z \Sigma_Z V_Z^*$, truncated to rank $r \ll a+1$.
  5. Koopman Matrix: $\tilde{S} = U_Z^* Z_2 V_Z \Sigma_Z^{-1}$.
  6. Spectral Decomposition: $\tilde{S} y_k = \mu_k y_k$, $\phi_k = U_Z y_k$.
  7. Amplitudes and Frequencies: $b = \Phi^\dagger x_1$, $\omega_k = \ln(\mu_k)/\Delta t$.
  8. Reconstruction: $x_{\mathrm{DMD}}(t) = \Phi\,\mathrm{diag}(e^{\omega t})\,b$.
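
Steps 1–7 can be combined into a self-contained NumPy sketch. Two simplifying assumptions are made here: $A$ and $b$ for the Arnoldi step are drawn at random (the choice is problem-dependent), and modes are reported in the projected coordinates rather than lifted back to the full state space:

```python
import numpy as np

def krylov_tdc_dmd(X, q, a, r, dt=1.0, seed=0):
    """Krylov-projected time-delay DMD sketch: returns projected modes,
    continuous-time eigenvalues omega_k, and amplitudes."""
    M, N = X.shape
    # 1. TDC embedding: qM x (N-q) shifted snapshot pair.
    Xa = np.vstack([X[:, i:N - q + 1 + i] for i in range(q)])
    X1, X2 = Xa[:, :-1], Xa[:, 1:]
    # 2. Arnoldi on random (A, b): V in R^{qM x (a+1)}, R_K = V*.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((q * M, q * M))
    b = rng.standard_normal(q * M)
    V = np.zeros((q * M, a + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(a):
        w = A @ V[:, j]
        for i in range(j + 1):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    # 3. Project the augmented data.
    Z1, Z2 = V.T @ X1, V.T @ X2
    # 4-5. Truncated SVD and low-dimensional Koopman matrix.
    Uz, Sz, Vzh = np.linalg.svd(Z1, full_matrices=False)
    Uz, Sz, Vz = Uz[:, :r], Sz[:r], Vzh[:r].conj().T
    S_tilde = Uz.conj().T @ Z2 @ Vz / Sz
    # 6-7. Spectral decomposition, modes, frequencies, amplitudes.
    mu, Y = np.linalg.eig(S_tilde)
    Phi = Uz @ Y
    omega = np.log(mu) / dt
    amps = np.linalg.lstsq(Phi, Z1[:, 0], rcond=None)[0]
    return Phi, omega, amps
```

On synthetic two-frequency data, the eigenvalues $\omega_k$ emerge as conjugate pairs matching the driving frequencies even though each raw snapshot is rank-deficient without the delay embedding; the eigenvalues and frequencies are the quantities directly comparable to full DMD.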

This workflow compresses the effective row dimension from $qM$ to $a+1$, where $a+1$ is the Krylov subspace dimension, delivering a dramatic reduction in resource consumption (Murshed et al., 2020).

5. Computational Complexity and Memory Analysis

A direct comparison of various DMD strategies for data of dimension $M$ and projection size $a$ is as follows:

| Method | SVD Cost | Projection/Arnoldi Cost | Memory Usage |
|---|---|---|---|
| Standard TDC-DMD | $O(\min(qM, N)^2 \max(qM, N))$ | (none) | $O(qMN)$ |
| Sampling/Gaussian Projection | $O(a^2 N)$ (after projection) | $O(aqMN)$ | $O(aN)$ |
| Krylov DMD | $O(a^2 N)$ (after projection) | $O(aM^2)$ (Arnoldi, once) + $O(aqMN)$ (projection) | $O(Ma)$ (basis) + $O(aN)$ (projected data) |

For $M = 10^4$ to $10^6$ and $a = 10^2$ to $10^3$, Krylov DMD achieves a reduction of one to two orders of magnitude in both FLOPs and memory. If the Krylov basis $V$ is constructed once and reused, the amortized cost is much lower than standard DMD. This enables practical application to very large datasets that would otherwise be infeasible with direct SVD (Murshed et al., 2020).

6. Experimental Benchmarks and Modal Accuracy

Empirical evaluation is performed on two canonical data sets:

  • Double Gyre vorticity ($[0,2] \times [0,1]$, $M = 10^4$, $N = 200$).
  • 2D compressible two-frequency signal ($M = 10^4$, $N = 200$).

Time-delay embedding uses $q = 2$. Projection dimensions for Krylov DMD and comparators are selected as $a = 100$ (Double Gyre) and $a = 50$ (signal). The singular value truncation rank is $r = 20$.
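
For reference, double-gyre snapshot data of this size can be generated from the standard time-dependent double-gyre stream function. The parameters $A = 0.1$, $\varepsilon = 0.25$, $\omega = 2\pi/10$ are the conventional choices, not values confirmed by the source, and vorticity is approximated here by finite differences:

```python
import numpy as np

def double_gyre_vorticity(nx=100, ny=100, nt=200, A=0.1, eps=0.25,
                          om=2 * np.pi / 10, T=20.0):
    """Vorticity snapshots of the time-dependent double gyre on [0,2] x [0,1].
    Returns an (nx*ny) x nt snapshot matrix (M = 1e4, N = 200 by default)."""
    x = np.linspace(0, 2, nx)
    y = np.linspace(0, 1, ny)
    Xg, Yg = np.meshgrid(x, y, indexing="ij")
    snaps = np.empty((nx * ny, nt))
    for k, t in enumerate(np.linspace(0, T, nt)):
        a = eps * np.sin(om * t)
        f = a * Xg**2 + (1 - 2 * a) * Xg        # stream-function argument
        dfdx = 2 * a * Xg + (1 - 2 * a)
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * Yg)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * Yg) * dfdx
        # Vorticity w = dv/dx - du/dy via second-order central differences.
        vort = np.gradient(v, x, axis=0) - np.gradient(u, y, axis=1)
        snaps[:, k] = vort.ravel()
    return snaps
```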

Results summary:

  • Eigenvalue spectra ($\mathrm{Im}(\lambda)$ vs $\mathrm{Re}(\lambda)$) are nearly identical across projection variants, forming symmetric clusters near the unit circle, indicative of stable dynamics.
  • Long-run reconstruction errors $\|x_{\text{true}}(t) - x_{\text{DMD}}(t)\|_2$ for Krylov DMD and sparse-projection DMD are nearly indistinguishable from classic DMD when $a \cdot q \geq r$ (e.g., $100 \cdot 2 \geq 20$).
  • Sampling-based methods can omit weak modes, resulting in occasional long-term drift.
  • Runtime benchmarks (per trial): classic TDC-DMD $\sim 3.2$ s, sampling DMD $\sim 0.12$ s, Gaussian projection $\sim 0.18$ s, sparse projection $\sim 0.15$ s, Krylov DMD (amortized) $\sim 0.20$ s (Arnoldi step $\sim 0.05$ s).

7. Theoretical Considerations, Limitations, and Extensions

Krylov DMD offers substantive advantages:

  • High data reduction: compresses $Mq$ rows to $a$, with $a \ll Mq$, without significant loss in modal accuracy.
  • Robust mode identification in highly oscillatory regimes through joint TDC and Krylov projection.
  • Compatibility with sparse storage where appropriate.

Potential limitations include:

  • Initial Arnoldi cost $O(aM^2)$, which can be burdensome for very large $M$ unless $V$ is reused.
  • Accuracy and stability depend on the choice of $a$, $A$, and $b$ (projection dimension and seeding), necessitating problem-dependent cross-validation.
  • Error bounds for combined TDC/Krylov projection are not fully characterized.

Possible directions for further development include streaming/online Arnoldi for time-varying data, adaptive selection of $a$ to capture a target energy level, application to $\ell_1$-based compressive DMD, and extension to PDE-constrained inverse problems or control-oriented settings (Murshed et al., 2020).

References

  1. Murshed et al., 2020.
