Continuous-Time Koopman Operators
- Continuous-Time Koopman Operators are infinite-dimensional linear operators that evolve observable functions, providing a rigorous framework to analyze nonlinear dynamics.
- They leverage semigroup theory and spectral decomposition to connect generator properties with system stability and mode identification.
- Finite-dimensional approximations via EDMD, kernel methods, and neural embeddings facilitate practical computation and control of complex dynamical systems.
A continuous-time Koopman operator is a linear, potentially infinite-dimensional operator that encodes the temporal evolution of nonlinear dynamical systems by advancing observable functions of the state, rather than the state itself. This linearization in the space of observables provides a mathematically rigorous framework for analyzing, identifying, and controlling nonlinear, possibly high-dimensional systems via spectral and semigroup theory. The continuous-time variant is fundamentally structured through semigroups of composition operators and their infinitesimal generators, with applications spanning both classical and modern data-driven approaches to dynamical systems, including recent advances in physics-informed machine learning and interpretable generative modeling.
1. Semigroup Foundations, Generator, and Function Spaces
Let the state $x \in \mathcal{X}$ evolve under an autonomous or non-autonomous (possibly stochastic) flow $\Phi^t : \mathcal{X} \to \mathcal{X}$. The continuous-time Koopman semigroup $(\mathcal{K}^t)_{t \ge 0}$ is defined by
$$(\mathcal{K}^t g)(x) = g(\Phi^t(x))$$
for suitable observables $g$ in a Banach space (often $L^2(\mu)$, $C(\mathcal{X})$, or a reproducing kernel Hilbert/Banach space) or a rigged Hilbert space. For ergodic, measure-preserving systems, $\mathcal{K}^t$ is unitary on $L^2(\mu)$.
The infinitesimal generator $\mathcal{L}$ of $(\mathcal{K}^t)_{t \ge 0}$ is defined on those observables $g$ for which the strong limit
$$\mathcal{L} g = \lim_{t \to 0^+} \frac{\mathcal{K}^t g - g}{t}$$
exists, yielding the continuous-time Koopman generator. For dynamics $\dot{x} = F(x)$, if $F$ and $g$ are smooth, $\mathcal{L} g = F \cdot \nabla g$—i.e., the Lie derivative of $g$ along the vector field $F$ (Mauroy, 2021, Johnson et al., 2017, Bevanda et al., 2021).
Koopman operator properties depend on the host space: boundedness, strong continuity (the $C_0$-semigroup property), and spectral theory (as inherited from Hilbert, Banach, or RKHS contexts) determine both theoretical and computational tractability (Ikeda et al., 2022).
2. Spectral Theory and Decomposition
The spectral theory of continuous-time Koopman operators underlies their applicability. In $L^2(\mu)$ for measure-preserving flows, Stone's theorem provides a spectral representation with projection-valued spectral measure $E$:
$$\mathcal{K}^t = \int_{\mathbb{R}} e^{i\omega t}\, dE(\omega).$$
The spectrum of $\mathcal{L}$ generally has both point (eigenvalue/eigenfunction) and continuous components (Das et al., 2017, Meng et al., 2024, Valva et al., 2023, Colbrook et al., 2024, Črnjarić-Žic et al., 2017). For deterministic linear or finite-dimensional cases, eigenvalues/eigenfunctions can be computed directly; for nonlinear and infinite-dimensional systems, spectral decomposition may require finite-dimensional projections or approximations (EDMD, kernel methods, etc.) and often leverages delay-embedding or occupation measures (Mauroy, 2021, Das et al., 2017, Rosenfeld et al., 2019, Colbrook et al., 2024).
Spectral mapping theorems link the spectrum of the generator $\mathcal{L}$ to that of $\mathcal{K}^t$: $e^{t\,\sigma(\mathcal{L})} \subseteq \sigma(\mathcal{K}^t)$, with equality for the point spectrum, whenever $\mathcal{L}$ generates a $C_0$-semigroup. Spectral features, including generalized eigenfunctions (in rigged Hilbert spaces), can encode both regular (e.g., periodic, quasiperiodic) and chaotic (continuous spectrum) behavior, and underlie mode decomposition, stability analysis, and forecasting.
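For a linear system $\dot{x} = Ax$, the generator restricted to linear observables is $A$ itself, so the spectral mapping can be verified directly (a toy sketch; the $2 \times 2$ matrix is arbitrary):

```python
# Toy verification of the spectral mapping between a generator A and its
# semigroup expm(A*t): the eigenvalues of e^{At} equal e^{t * eig(A)}.
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5, 2.0], [-2.0, -0.5]])   # generator of a linear flow
t = 0.7
lam = np.linalg.eigvals(A)                   # spectrum of the generator
mu_ = np.linalg.eigvals(expm(A * t))         # spectrum of the time-t operator
print(np.sort_complex(mu_))
print(np.sort_complex(np.exp(t * lam)))      # identical up to ordering
```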
3. Finite-Dimensional Approximations and Data-Driven Methods
Practical computation relies on finite-dimensional approximations of the infinite-dimensional operator algebra. This is commonly achieved by:
- Selecting a finite dictionary/basis of observables and projecting the Koopman semigroup and generator to obtain matrices $K \approx \mathcal{K}^{\Delta t}$ and $L \approx \mathcal{L}$ via Galerkin or least-squares methods (Mauroy, 2021).
- Using data-driven methods such as Extended Dynamic Mode Decomposition (EDMD) and its variants, which construct matrices from time-series data and compute generator estimates via the matrix logarithm or resolvent (Mauroy, 2021, Johnson et al., 2017, Rosenfeld et al., 2019, Meng et al., 2024); a minimal sketch follows this list.
- Applying kernel integral operators for nonlinear, high-dimensional settings, where the kernel is chosen to reflect the geometry and smoothness of the underlying dynamics, and may be combined with delay-coordinate embedding to isolate pure-point spectral subspaces (Das et al., 2017, Valva et al., 2023, Valva et al., 2024).
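The EDMD-with-matrix-logarithm route sketched below uses an assumed toy system $\dot{x}_1 = \mu x_1$, $\dot{x}_2 = \lambda(x_2 - x_1^2)$, whose generator closes exactly on the dictionary $\{x_1, x_2, x_1^2\}$; the regression and logarithm steps are generic, only the system is illustrative:

```python
# Minimal EDMD sketch: estimate K from snapshot pairs lifted by a dictionary,
# then recover the continuous-time generator L = logm(K) / dt.
import numpy as np
from scipy.linalg import logm
from scipy.integrate import solve_ivp

mu, lam, dt = -0.3, -1.0, 0.05
def F(t, x):                              # toy system with exact finite closure
    return [mu * x[0], lam * (x[1] - x[0]**2)]

def psi(x):                               # dictionary Psi(x) = [x1, x2, x1^2]
    return np.array([x[0], x[1], x[0]**2])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))     # random initial conditions
Y = np.array([solve_ivp(F, (0, dt), x, rtol=1e-9).y[:, -1] for x in X])

PsiX = np.array([psi(x) for x in X])      # lifted snapshot matrices
PsiY = np.array([psi(y) for y in Y])
K = np.linalg.lstsq(PsiX, PsiY, rcond=None)[0]   # least squares: Psi_Y ≈ Psi_X @ K
L = logm(K) / dt                          # continuous-time generator estimate
print(np.sort(np.linalg.eigvals(L).real))        # ≈ {lam, 2*mu, mu} = {-1.0, -0.6, -0.3}
```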
For learning tasks, contemporary approaches employ neural network parameterizations of the lifting (encoding observables), as in neural Koopman frameworks, which learn encoders/decoders and propagate in latent space using the continuous-time matrix exponential $z(t) = e^{Lt} z(0)$, where $L$ is the learned generator (Frion et al., 2023, Turan et al., 27 Jun 2025, Bevanda et al., 2021). Orthogonality or regularity penalties improve stability and interpretability.
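A schematic of such a neural Koopman predictor follows; this is a minimal sketch in PyTorch, and the module sizes and unconstrained parameterization of $L$ are illustrative assumptions, not the architecture of any cited work:

```python
# Schematic neural Koopman model: encoder lifts the state, a learned generator
# L advances the latent via the matrix exponential, a decoder maps back.
import torch
import torch.nn as nn

class NeuralKoopman(nn.Module):
    def __init__(self, state_dim=2, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                     nn.Linear(64, state_dim))
        # Learned generator; stability/orthogonality penalties would act here.
        self.L = nn.Parameter(0.01 * torch.randn(latent_dim, latent_dim))

    def forward(self, x0, t):
        z0 = self.encoder(x0)                     # lift state to latent observables
        Kt = torch.linalg.matrix_exp(self.L * t)  # continuous-time propagator e^{Lt}
        zt = z0 @ Kt.T                            # z(t) = e^{Lt} z(0), batched rows
        return self.decoder(zt)                   # map back to state space

model = NeuralKoopman()
x0 = torch.randn(16, 2)           # batch of initial states
x_pred = model(x0, t=0.5)         # prediction at an arbitrary continuous time t
# Training would regress x_pred onto observed snapshots x(t), e.g. with MSE.
```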
An algorithmic workflow often comprises:
- Dictionary or encoder selection (neural or analytic)
- Construction of data matrices from snapshot pairs or trajectory segments
- Approximation of the Koopman evolution operator or generator (logarithm, Yosida-resolvent, kernel smoothing/compactification)
- Eigenvalue/eigenfunction computation for the approximate operator
- Prediction or control via spectral decomposition (a worked continuation of the EDMD sketch follows this list).
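Continuing the EDMD sketch above and reusing the names `L`, `psi`, and `F` defined there, the final two steps (eigendecomposition and spectral prediction) might look as follows:

```python
# Forecast observables via the spectral decomposition L = V diag(lambda) V^{-1},
# so that psi(x(t)) ≈ psi(x0) @ V @ diag(exp(lambda * t)) @ V^{-1}.
evals, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)

def predict(x0, t):
    modes = psi(x0) @ V                         # coefficients in the eigenbasis
    return (modes * np.exp(evals * t)) @ Vinv   # each mode evolves as e^{lambda t}

x0 = np.array([0.5, 0.8])
true = solve_ivp(F, (0, 2.0), x0, rtol=1e-9).y[:, -1]
print(predict(x0, 2.0).real[:2])                # ≈ [x1(2), x2(2)]
print(true)
```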
4. Physics-Informed and Interpretable Embeddings
Continuous-time Koopman techniques are increasingly used for interpretable, efficient representation of complex nonlinear dynamics, including closure problems. Notable strategies encompass:
- State-inclusive or physics-informed observable lifts (e.g., logistic SILL dictionaries or kernel methods with derivatives computed via automatic differentiation) to ensure the original state is represented and closure is approximately achieved, with explicit error bounds (Johnson et al., 2017, Valva et al., 2024).
- Decoder-free, "lift-and-linearize" neural architectures used in generative modeling, e.g., turning nonlinear conditional flow matching into a linear latent ODE with analytic sampling, spectral analysis, and temporal decomposition via eigenvalues/eigenfunctions of the generator (Turan et al., 27 Jun 2025).
- Bounded transforms (regularized resolvents) and kernel smoothing (Markov integral operators), leading to compact, skew-adjoint operators whose eigenpairs can be solved by variational generalized eigenproblems for provable spectral convergence (Valva et al., 2024).
- Linear predictors via diffeomorphically constructed coordinates and supervised learning frameworks for guaranteed stable system identification, exploiting Hurwitz constraints and monomial lifts for accurate reconstruction (Bevanda et al., 2021).
Such approaches provide analytic access to mode stability ($\mathrm{Re}\,\lambda < 0$), temporal scaling, and factorized latent representations in complex systems. Eigenvalues and eigenvectors of the learned/approximated generators dictate the emergent time scales and enable a decomposition into fast/slow modes or coherent patterns, with direct interpretability in terms of original or latent variables.
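A small illustration of this spectral reading (the generator matrix below is made up for demonstration and stands in for any learned or approximated generator): eigenvalues with $\mathrm{Re}\,\lambda < 0$ identify decaying modes, $1/|\mathrm{Re}\,\lambda|$ their timescales, and $|\mathrm{Im}\,\lambda|/2\pi$ their oscillation frequencies:

```python
# Reading stability, timescales, and frequencies off a generator's spectrum.
import numpy as np

L = np.array([[-0.1,  1.0,  0.0],
              [-1.0, -0.1,  0.0],
              [ 0.0,  0.0, -5.0]])   # slow oscillatory pair + fast decaying mode
for lk in np.linalg.eigvals(L):
    tau = np.inf if lk.real == 0 else 1 / abs(lk.real)
    kind = "decaying" if lk.real < 0 else "growing/neutral"
    print(f"lambda = {lk:.2f}: {kind}, timescale ~ {tau:.1f}, "
          f"frequency = {abs(lk.imag) / (2 * np.pi):.3f}")
```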
5. Spectral Convergence and Consistency Results
Rigorous results guarantee that, under increasing basis size or kernel localization, data-driven and kernel-based compactification methods converge (in strong resolvent or operator-norm topology) to the true continuous-time Koopman generator and its spectral projections (Valva et al., 2023, Colbrook et al., 2024, Valva et al., 2024).
- The limiting spectrum of finite-dimensional approximations approaches that of the infinite-dimensional operator, capturing both point and continuous spectral features.
- Variational methods, rigged Hilbert space constructions (using delay-embedding or weighted spaces), and resolvent compactification all provide frameworks for consistent recovery, with convergence rates controlled by dictionary richness, kernel parameters, and sample complexity (Das et al., 2017, Valva et al., 2023, Colbrook et al., 2024, Valva et al., 2024).
- For stochastic and random dynamical systems, extensions to stochastic Koopman operators and their generator (the Kolmogorov backward operator for SDEs) admit analogous data-driven spectral approximations via expectation-based dynamic mode decomposition (sHankel-DMD) (Črnjarić-Žic et al., 2017); a Monte Carlo illustration of the backward generator follows.
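For intuition, the action of the backward generator can be checked by Monte Carlo on an assumed Ornstein-Uhlenbeck SDE $dX = -\theta X\,dt + \sigma\,dW$, for which $\mathcal{L} g = -\theta x\, g'(x) + \tfrac{\sigma^2}{2} g''(x)$ (a sketch of the generator identity, not the sHankel-DMD procedure of the cited work):

```python
# Monte Carlo check of the Kolmogorov backward (stochastic Koopman) generator:
# (E[g(X_dt) | X_0 = x0] - g(x0)) / dt ≈ L g(x0) for small dt.
import numpy as np

theta, sigma, x0, dt, n = 1.0, 0.5, 0.8, 1e-3, 2_000_000
rng = np.random.default_rng(1)
# One Euler-Maruyama step from x0 over n independent sample paths
X_dt = x0 - theta * x0 * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

g = lambda x: x**2
mc_estimate = (g(X_dt).mean() - g(x0)) / dt
analytic = -2 * theta * x0**2 + sigma**2     # L g at x0 for g(x) = x^2
print(mc_estimate, analytic)                 # both ≈ -1.03
```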
6. Applications and Illustrative Examples
Continuous-time Koopman operators are widely applied in:
- System identification and spectral analysis of nonlinear ODEs/PDEs, e.g., Van der Pol oscillators, fluid attractors, Burgers’ equation, with analytic and data-driven reconstructions (Mauroy, 2021, Johnson et al., 2017, Frion et al., 2023).
- Model reduction and latent-dynamics learning, both via analytic dictionary methods/EDMD and modern neural embedding architectures, for tasks including chaotic attractor reconstruction, low-frequency prediction from scarce data, or interpretable flow-based generative modeling (Frion et al., 2023, Bevanda et al., 2021, Turan et al., 27 Jun 2025).
- Spectral approximation of invariant sets and coherent structures in high-dimensional or chaotic systems (Lorenz system, lid-driven cavity flows) via kernel methods, delay-embedding, and rigged DMD, capturing both discrete and continuous spectra with convergence guarantees (Colbrook et al., 2024, Valva et al., 2023, Das et al., 2017).
A concise summary table of core methodological categories and their spectral guarantees:
| Approach Type | Generator Approximation | Convergence Guarantee |
|---|---|---|
| Finite-basis/data-driven (EDMD, SILL) | $L \approx \log(K)/\Delta t$, regression | Operator-norm/strong convergence with increasing basis (Mauroy, 2021, Johnson et al., 2017) |
| Kernel compactification | Smoothed generator (Markov integral operators), bounded resolvent transforms | Strong resolvent topology (Valva et al., 2023, Valva et al., 2024) |
| Variational RKHS/Galerkin | Variational generalized eigenproblem | Spectral projections in operator norm (Valva et al., 2024) |
| Rigged Hilbert/delay embedding | Resolvent/wave-packet construction | Pointwise/weak, via functional calculus (Colbrook et al., 2024) |
| Neural Koopman (deep learning) | Matrix exponential $e^{Lt}$ of learned generator $L$ | Empirical, assessed on prediction error (Frion et al., 2023, Turan et al., 27 Jun 2025) |
7. Limitations, Open Problems, and Future Directions
Principal limitations include:
- The curse of dimensionality for traditional dictionary-based finite-dimensional lifting, motivating kernel, sparse, or deep-network approaches (Johnson et al., 2017, Frion et al., 2023).
- Only approximate finite closure is attainable except for special systems; error bounds depend on quantities such as grid spacing, kernel parameters, and model smoothness.
- For stochastic, non-autonomous, or chaotic systems, the spectrum may contain significant continuous (non-eigenfunction) parts, challenging conventional mode decomposition and requiring generalized or rigged Hilbert frameworks (Colbrook et al., 2024, Črnjarić-Žic et al., 2017).
- Operator-theoretic interpretations heavily rely on the regularity, invariance, and boundedness properties of observables and kernels, with some strong continuity theorems subject to technical geometric or dissipativity conditions (Ikeda et al., 2022).
Emerging areas include:
- Systematic design of physics-informed embeddings and differential operators, enabling interpretable and robust learning from sparse or noisy data (Valva et al., 2024).
- Scalable, theoretically justified approximations for high-dimensional and rough (non-smooth) systems, leveraging hybrid approaches (combining neural, kernel, and variational methods).
- Data-driven spectral analysis for non-autonomous, control-affine, or random dynamical systems, including extensions to controllers and verified certificates (Lyapunov, barrier functions) recoverable from Koopman generator estimates (Meng et al., 2024, Črnjarić-Žic et al., 2017).
- Modal analysis and coherent structure detection via continuous spectrum decomposition, enabled by recent advances in Rigged DMD and resolvent-based methods (Colbrook et al., 2024, Valva et al., 2023).
- Domain-adaptive, sparsity-promoting, and adaptive dictionary construction strategies to manage complexity and enhance interpretability in practical applications.
In conclusion, continuous-time Koopman operator theory unifies classical and modern perspectives on nonlinear dynamics by providing a rigorous, linear framework for system identification, spectral decomposition, and prediction. Its algorithmic realizations span finite-dimensional projections, kernel methods, neural architectures, and variational eigenproblems, all supported by a robust convergence theory anchored in functional analysis and spectral theory.