- The paper introduces LKIS, a novel method that learns observables spanning Koopman invariant subspaces to enable efficient DMD on nonlinear systems.
- It leverages neural networks, delay embedding, and RSS minimization to optimize linear approximations of complex dynamics.
- LKIS-DMD demonstrates superior performance in predicting chaotic behaviors and detecting instability even in noisy environments.
Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition
Introduction
The study of nonlinear dynamical systems has become increasingly important due to its applications in various fields such as fluid dynamics, neuroscience, and engineering. Traditional analysis of such systems often relies on state-space models, but these become cumbersome with nonlinear dynamics. Recently, operator-theoretic approaches, specifically those involving the Koopman operator, have gained traction. The Koopman operator, being linear in an infinite-dimensional function space, allows for modal decomposition even in nonlinear systems. Dynamic Mode Decomposition (DMD) is a commonly employed numerical method for this purpose, yet it traditionally relies on manually defined observables—a limitation this study seeks to overcome.
Koopman Operator and Dynamic Mode Decomposition
The Koopman operator K shifts the analysis from a state-space view to an operator-based perspective, in which the dynamics are captured through the evolution of observables (functions of the state). Because K is linear, albeit infinite-dimensional, it permits linear analysis of the underlying nonlinear system. DMD works by approximating the action of the Koopman operator with a matrix computed from observed data. A critical requirement for successful DMD is that the data come from observables spanning a Koopman invariant subspace, a condition not easily met without prior knowledge of the system.
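As background, standard (exact) DMD estimates this matrix from snapshot pairs via a truncated SVD and a small eigendecomposition. A minimal sketch, where the function name, variable names, and rank parameter `r` are illustrative rather than taken from the paper:

```python
import numpy as np

def dmd(Y0, Y1, r):
    """Exact DMD: fit a linear map with Y1 ~ A @ Y0 via a rank-r truncated SVD.

    Y0, Y1 : snapshot matrices whose columns are successive observations.
    Returns the DMD eigenvalues and modes.
    """
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Project the linear map onto the leading r POD modes.
    A_tilde = U.conj().T @ Y1 @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes lifted back to the original space.
    modes = Y1 @ Vh.conj().T / s @ W
    return eigvals, modes
```

On data generated by a linear system, the recovered DMD eigenvalues coincide with the eigenvalues of the true transition matrix; for nonlinear systems this only holds when the observables behave linearly, which is exactly the gap LKIS addresses.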
Proposed Method: Learning Koopman Invariant Subspaces
The paper introduces a fully data-driven method, the Learning Koopman Invariant Subspaces (LKIS), to determine observables that span a Koopman invariant subspace. This process involves:
- Residual Sum of Squares (RSS) Minimization: The method minimizes the RSS of a linear least-squares regression between the observables at successive time steps. Driving this residual toward zero means the learned observables evolve approximately linearly, i.e., they approximately span a Koopman invariant subspace.
- Linear Delay Embedder: To cope with cases where only partial state information is observed, the method learns a linear delay embedder that builds an effective state from lagged observations. Requiring that this embedded state be reconstructable from the learned observables keeps the RSS objective well posed and rules out trivial solutions such as constant observables.
- Neural Network Implementation: The practical implementation of LKIS employs neural networks to learn the transformation and reconstruction functions. Multi-layer perceptrons (MLPs) are used for flexibility and power in approximating complex functions.
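The objective sketched above can be written down compactly. In the following illustration, `g` and `h` stand in for the learned observable and reconstruction maps and `alpha` weights the reconstruction term; all three are placeholders for the MLPs and hyperparameter the paper would train, and the training loop itself is omitted:

```python
import numpy as np

def lkis_loss(g, h, Y0, Y1, alpha=1.0):
    """Sketch of an LKIS-style objective (illustrative, not the paper's exact code).

    g  : observable map applied column-wise to snapshots.
    h  : reconstruction map from observables back to states.
    Y0, Y1 : paired snapshot matrices (columns at times t and t+1).
    """
    G0, G1 = g(Y0), g(Y1)
    # Best linear fit G1 ~ A.T @ G0 by least squares.
    A, *_ = np.linalg.lstsq(G0.T, G1.T, rcond=None)
    rss = np.linalg.norm(G1 - A.T @ G0) ** 2      # linear-evolution residual
    recon = np.linalg.norm(Y0 - h(G0)) ** 2       # state-reconstruction penalty
    return rss + alpha * recon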
Numerical Examples and Applications
The paper demonstrates the effectiveness of LKIS-DMD across various benchmark nonlinear systems such as fixed-point attractors and limit cycles. Significantly, LKIS-DMD shows superior performance, particularly in noisy environments where traditional methods falter.
Moreover, two applications are highlighted:
- Chaotic Time-Series Prediction: LKIS-DMD outperforms both LSTM networks and traditional Hankel DMD in predicting chaotic dynamics from systems such as the Lorenz and Rössler attractors.
- Unstable Phenomena Detection: By examining eigenfunctions corresponding to rapid decay, LKIS-DMD effectively identifies brief unstable conditions in a laser dataset, outperforming standard novelty detection algorithms.
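The Hankel DMD baseline mentioned above applies standard DMD to a delay-embedded version of the series, stacking lagged copies of the observations into a Hankel matrix. A minimal sketch of that embedding step (function name and interface are illustrative):

```python
import numpy as np

def hankel_embed(y, k):
    """Stack k delayed copies of a scalar series y into a Hankel matrix.

    Row i holds y shifted by i steps, so each column is a length-k window;
    DMD can then be run on these delay-embedded states.
    """
    n = len(y) - k + 1
    return np.column_stack([y[i:i + n] for i in range(k)]).T  # shape (k, n)
```

LKIS replaces this fixed embedding with a learned one, which is part of why it can outperform Hankel DMD when a simple delay map does not linearize the dynamics.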
Conclusion
The paper argues that LKIS resolves DMD's reliance on a priori knowledge of suitable observables by learning them directly from data. The method not only matches but often exceeds existing approaches in handling nonlinear dynamics in both synthetic and real-world data. Future work could explore Bayesian formulations to account for uncertainty and refinements of the neural-network-based implementation to mitigate the risk of poor local optima.
The use of neural networks in LKIS-DMD reflects the growing interdisciplinary approach in dynamical systems analysis, bridging the gap between machine learning and traditional operator-theoretic methods. This advancement paves the way for more robust and generalized applications of Koopman spectral analysis in complex system dynamics.