Reduced-Order LTI Models
- Reduced-order LTI models are surrogate dynamic systems that approximate the input-output behavior of high-dimensional linear systems using techniques like moment-matching and Krylov subspace projections.
- They reduce computational complexity while preserving essential system properties such as stability and performance over specified time or frequency ranges.
- These models facilitate efficient simulation and real-time control in applications like discretized PDEs and networked systems, providing practical benefits for large-scale problems.
A reduced-order linear time-invariant (LTI) model is a surrogate dynamic system, typically of order $r \ll n$, constructed to approximate the input-output response of a higher-dimensional LTI system over a specified frequency range, time interval, or parameter domain. Such models enable efficient simulation, control, and optimization in high-dimensional applications by capturing the dominant dynamic features of the original system while drastically reducing computational complexity. The reduced-order LTI approximation forms a foundational paradigm in model reduction, with techniques based on moment-matching, Gramian-based projections, optimality with respect to induced norms, and data-driven methods.
1. Mathematical Formulation and Objectives
Let the full-order continuous-time LTI system be given by
$$\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t),$$
with $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, $y(t) \in \mathbb{R}^p$, and state-space matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, where $A$ is Hurwitz. The associated transfer function is $G(s) = C(sI - A)^{-1}B$.
A reduced-order LTI system has the form
$$\dot{\hat{x}}(t) = \hat{A} \hat{x}(t) + \hat{B} u(t), \qquad \hat{y}(t) = \hat{C} \hat{x}(t),$$
with $\hat{A} \in \mathbb{R}^{r \times r}$, $\hat{B} \in \mathbb{R}^{r \times m}$, $\hat{C} \in \mathbb{R}^{p \times r}$, $r \ll n$, and transfer function $\hat{G}(s) = \hat{C}(sI_r - \hat{A})^{-1}\hat{B}$.
Model reduction seeks to minimize a normed error $\|G - \hat{G}\|$ according to system-theoretic metrics such as the $\mathcal{H}_2$ norm or $\mathcal{H}_\infty$ norm, or to preserve salient structural properties (e.g., energy, passivity, stability) over prescribed time/frequency/parameter ranges. In many applications, the matching is enforced only over $[0, T]$ for some $T < \infty$ (the time-limited scenario) or within specific parameter regimes.
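As a concrete illustration of the $\mathcal{H}_2$ error metric, the sketch below evaluates $\|G - \hat{G}\|_{\mathcal{H}_2}$ through the controllability Gramian of the error system. The matrices here are small random stand-ins (not taken from any cited work); the reduced model is an arbitrary hypothetical surrogate, chosen only to make the computation concrete.

```python
# Sketch: evaluating ||G - Ghat||_{H2} via the controllability Gramian of the
# error system; all matrices are small random stand-ins.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, r, m, p = 8, 2, 1, 1
A = -np.diag(rng.uniform(1.0, 10.0, n))           # Hurwitz full-order A
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
Ah = -np.diag([1.5, 4.0])                         # a hypothetical reduced model
Bh = rng.standard_normal((r, m))
Ch = rng.standard_normal((p, r))

# Error system: state [x; xhat], output y - yhat
Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ah]])
Be = np.vstack([B, Bh])
Ce = np.hstack([C, -Ch])

# Controllability Gramian P of the error system: Ae P + P Ae^T + Be Be^T = 0
P = solve_continuous_lyapunov(Ae, -Be @ Be.T)
h2_error = float(np.sqrt(np.trace(Ce @ P @ Ce.T)))
```

The squared $\mathcal{H}_2$ norm is $\operatorname{tr}(C_e P C_e^{T})$, so one Lyapunov solve on the augmented $(n+r)$-dimensional error system suffices; for truly large $n$, specialized low-rank Lyapunov solvers replace the dense solve used here.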
2. Classical Projection and Krylov Subspace Algorithms
Projection-based methods dominate reduced-order LTI model construction for high-dimensional systems. These approaches use Petrov–Galerkin projections to construct low-dimensional subspaces that preserve the controllable and observable directions most relevant to external input-output behavior.
For a set of interpolation points $\{\sigma_i\}_{i=1}^r \subset \mathbb{C}$ and tangential directions $\{b_i\}_{i=1}^r \subset \mathbb{C}^m$, $\{c_i\}_{i=1}^r \subset \mathbb{C}^p$, the rational Krylov right and left subspaces are defined as
$$\mathcal{V} = \operatorname{span}\{(\sigma_1 I - A)^{-1} B b_1, \ldots, (\sigma_r I - A)^{-1} B b_r\}, \qquad \mathcal{W} = \operatorname{span}\{(\sigma_1 I - A)^{-T} C^{T} c_1, \ldots, (\sigma_r I - A)^{-T} C^{T} c_r\}.$$
With basis matrices $V, W \in \mathbb{C}^{n \times r}$ and $W^{T}V$ invertible, the reduced matrices are given via Petrov–Galerkin projection:
$$\hat{A} = (W^{T}V)^{-1} W^{T} A V, \qquad \hat{B} = (W^{T}V)^{-1} W^{T} B, \qquad \hat{C} = C V.$$
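In the SISO case (all tangential directions equal to 1), the projection can be sketched in a few lines and its defining interpolation property checked numerically. The matrices and interpolation points below are arbitrary stand-ins for illustration.

```python
# Sketch: rational Krylov Petrov-Galerkin projection (SISO, tangential
# directions = 1); A, B, C are random stand-ins, sigma chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(1)
n = 10
A = -np.diag(rng.uniform(0.5, 8.0, n))            # stable full-order system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
sigma = np.array([1.0, 3.0, 7.0])                 # interpolation points
I = np.eye(n)

# One linear solve per interpolation point builds each basis
V = np.column_stack([np.linalg.solve(s * I - A, B).ravel() for s in sigma])
W = np.column_stack([np.linalg.solve((s * I - A).T, C.T).ravel() for s in sigma])

M = np.linalg.inv(W.T @ V)                        # Petrov-Galerkin projection
Ah, Bh, Ch = M @ W.T @ A @ V, M @ W.T @ B, C @ V

G = lambda s, A_, B_, C_: (C_ @ np.linalg.solve(s * np.eye(len(A_)) - A_, B_))[0, 0]
full_vals = [G(s, A, B, C) for s in sigma]
red_vals = [G(s, Ah, Bh, Ch) for s in sigma]      # matches full_vals at each sigma
```

By construction, the reduced transfer function Hermite-interpolates the full one at each $\sigma_i$, which the last two lines verify.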
The Iterative Rational Krylov Algorithm (IRKA) and its variants iteratively update the interpolation data $\{\sigma_i, b_i, c_i\}$ to achieve first-order $\mathcal{H}_2$-optimality conditions (bitangential Hermite interpolation at the mirror images of the reduced poles) (Necoara et al., 2018, Mlinarić et al., 2023). The resulting reduced-order systems often inherit stability, and the dominant input-output behavior of the original model is retained.
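The fixed-point structure of IRKA can be sketched as follows. To keep the toy example well-behaved, this sketch uses a state-space-symmetric system ($A = A^{T}$, $C = B^{T}$), for which the reduced poles stay real and the iteration typically converges quickly; the data is random and illustrative only, with no safeguards a production implementation would need.

```python
# Minimal SISO IRKA sketch on a state-space-symmetric toy system (A = A^T,
# C = B^T): shifts are updated to the mirror images of the reduced poles
# until a fixed point is reached. Illustrative only, no safeguards.
import numpy as np

rng = np.random.default_rng(2)
n, r = 12, 4
A = -np.diag(rng.uniform(0.5, 10.0, n))
B = rng.standard_normal((n, 1))
C = B.T                                           # state-space symmetric
I = np.eye(n)

sigma = np.sort(rng.uniform(0.5, 10.0, r))        # initial shifts
for _ in range(200):
    V = np.column_stack([np.linalg.solve(s * I - A, B).ravel() for s in sigma])
    W = np.column_stack([np.linalg.solve((s * I - A).T, C.T).ravel() for s in sigma])
    M = np.linalg.inv(W.T @ V)
    Ah = M @ W.T @ A @ V
    new_sigma = np.sort(-np.linalg.eigvals(Ah).real)  # mirror images of reduced poles
    if np.allclose(new_sigma, sigma, rtol=1e-12):
        break                                     # bitangential fixed point reached
    sigma = new_sigma
Bh, Ch = M @ W.T @ B, C @ V
```

At convergence the shifts coincide with $-\hat{\lambda}_i$, i.e., the interpolation points sit at the mirror images of the reduced poles, which is exactly the first-order optimality condition the text describes.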
For time-limited reduction, the subspaces are modified to embed the finite-horizon constraint (Das et al., 2021), with right basis vectors of the form
$$v_i = \big(I - e^{(A - \sigma_i I)T}\big)(\sigma_i I - A)^{-1} B b_i,$$
and analogously for the left subspace, yielding surrogate models optimized over $[0, T]$ with respect to the time-limited $\mathcal{H}_{2,T}$ norm.
3. Time-Limited Model Reduction and Optimality Conditions
Finite-horizon applications necessitate time-limited error metrics and interpolation frameworks. For $T < \infty$, the impulse response is truncated to $[0, T]$ and the time-limited $\mathcal{H}_{2,T}$ norm is
$$\|G\|_{\mathcal{H}_{2,T}}^2 = \int_0^T \operatorname{tr}\!\big(C e^{At} B B^{T} e^{A^{T} t} C^{T}\big)\, dt,$$
or, equivalently, via the time-limited controllability Gramian $P_T = \int_0^T e^{At} B B^{T} e^{A^{T} t}\, dt$, which satisfies the modified Lyapunov equation
$$A P_T + P_T A^{T} + B B^{T} - e^{AT} B B^{T} e^{A^{T} T} = 0,$$
so that $\|G\|_{\mathcal{H}_{2,T}}^2 = \operatorname{tr}(C P_T C^{T})$.
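The modified Lyapunov equation makes the time-limited Gramian cheap to obtain once $e^{AT}$ is available. The sketch below (random stand-in matrices) solves it and cross-checks the result against direct trapezoidal quadrature of the defining integral.

```python
# Sketch: time-limited controllability Gramian P_T from the modified Lyapunov
# equation, cross-checked by quadrature; A, B, C are random stand-ins.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(3)
n, T = 6, 2.0
A = -np.diag(rng.uniform(0.5, 5.0, n))
B = rng.standard_normal((n, 2))
C = rng.standard_normal((1, n))

# Solve A P_T + P_T A^T + B B^T - e^{AT} B B^T e^{A^T T} = 0
rhs = B @ B.T - expm(A * T) @ B @ B.T @ expm(A * T).T
P_T = solve_continuous_lyapunov(A, -rhs)

# Quadrature check of P_T = int_0^T e^{At} B B^T e^{A^T t} dt (trapezoid rule)
ts = np.linspace(0.0, T, 4001)
stack = np.stack([(lambda E: E @ B @ B.T @ E.T)(expm(A * t)) for t in ts])
dt = ts[1] - ts[0]
P_quad = dt * (stack.sum(axis=0) - 0.5 * (stack[0] + stack[-1]))

h2T = float(np.sqrt(np.trace(C @ P_T @ C.T)))     # time-limited H2 norm of (A, B, C)
```

For large-scale systems the dense Lyapunov solve and matrix exponential would be replaced by low-rank and Krylov approximations, but the algebraic identity is the same.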
The first-order necessary conditions for (local) $\mathcal{H}_{2,T}$-optimality are bi-tangential interpolation constraints of the form
$$G_T(-\hat{\lambda}_i)\,\hat{b}_i = \hat{G}_T(-\hat{\lambda}_i)\,\hat{b}_i, \qquad \hat{c}_i^{T} G_T(-\hat{\lambda}_i) = \hat{c}_i^{T} \hat{G}_T(-\hat{\lambda}_i), \qquad \hat{c}_i^{T} G_T'(-\hat{\lambda}_i)\,\hat{b}_i = \hat{c}_i^{T} \hat{G}_T'(-\hat{\lambda}_i)\,\hat{b}_i,$$
for all reduced-order poles $\hat{\lambda}_i$ with residue directions $\hat{b}_i, \hat{c}_i$, where $G_T, \hat{G}_T$ denote the time-limited counterparts of the transfer functions (Das et al., 2021).
Limited Time IRKA (LT-IRKA) enforces these conditions via rational Krylov projection on the modified subspaces and iteratively updates interpolation data; the “nearness” to optimality is quantified in terms of explicit interpolation residuals.
For systems with quadratic outputs, similar time-limited norms and necessary (but not fully achievable) optimality conditions are derived. Iterative projection-based algorithms converge to surrogates that satisfy all algebraic conditions except for one residual, which is small in practice (Zulfiqar et al., 2024).
4. Objective-Driven Optimization and Associated Algorithms
Beyond projection, direct optimization techniques pose reduced-order LTI model construction as (typically nonconvex) minimization problems over families of interpolants, seeking minimal $\mathcal{H}_2$ (or related) error:
- Full parametrization and KKT-based optimization: Moment-matching ansätze with tunable free parameters or interpolation points yield nonconvex semidefinite programs, with optimality characterized by Karush–Kuhn–Tucker conditions (Necoara et al., 2018). Gradient-type and partial-minimization algorithms are employed, and convex (semidefinite) relaxations exist that are exact under certain system structure.
- Semi-definite relaxation for SISO cases: For first- and second-order SISO models, the $\mathcal{H}_2$ cost can be minimized globally by formulating a convex SDP over interpolation point parameters, recovering the optimal shifts and enabling rational Krylov realization (Zhu et al., 24 Aug 2025).
- Data-driven and sample-optimal reductions: Gradient-based optimization on parameter-separable representations using only frequency samples allows nonintrusive construction of $\mathcal{H}_2$-optimal reduced models, and under regularity assumptions coincides with classical projection (Mlinarić et al., 2022).
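The optimization viewpoint above can be sketched directly: parametrize the reduced model by its interpolation points and minimize the $\mathcal{H}_2$ error as a black-box cost. Here Nelder-Mead stands in for the gradient-type and KKT-based schemes cited; the system is a random SISO stand-in and the instability penalty is an ad-hoc safeguard, not part of any cited method.

```python
# Sketch: direct (nonconvex) optimization of interpolation points for a SISO
# reduced model, minimizing the H2 error computed via a Lyapunov equation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, r = 8, 2
A = -np.diag(rng.uniform(0.5, 9.0, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
I = np.eye(n)

def reduce_at(sigma):
    """Petrov-Galerkin reduction at the given interpolation points."""
    V = np.column_stack([np.linalg.solve(s * I - A, B).ravel() for s in sigma])
    W = np.column_stack([np.linalg.solve((s * I - A).T, C.T).ravel() for s in sigma])
    M = np.linalg.inv(W.T @ V)
    return M @ W.T @ A @ V, M @ W.T @ B, C @ V

def h2_error(sigma):
    """H2 norm of the error system for the model reduced at sigma."""
    Ah, Bh, Ch = reduce_at(sigma)
    if np.linalg.eigvals(Ah).real.max() >= 0:     # penalize unstable iterates
        return 1e6
    Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ah]])
    Be, Ce = np.vstack([B, Bh]), np.hstack([C, -Ch])
    P = solve_continuous_lyapunov(Ae, -Be @ Be.T)
    return float(np.sqrt(np.trace(Ce @ P @ Ce.T)))

sigma0 = np.array([1.0, 5.0])                     # arbitrary initial shifts
res = minimize(h2_error, sigma0, method="Nelder-Mead")
```

The landscape is nonconvex, so the optimizer only certifies a local minimum; this is precisely the gap the convex SDP relaxations in the SISO case are designed to close.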
5. Extensions: Structured, Parametric, and Time-Limited Systems
The optimality landscape of reduced-order LTI modeling encompasses broad system classes:
- Structured LTI systems: Bitangential Hermite interpolation conditions generalize to second-order, port-Hamiltonian, and time-delay systems under simultaneous diagonalizability assumptions. The reduced transfer function is expressed as a sum over residues and structured scalar denominator terms, with optimality enforcing interpolation at all mirror-image poles (Mlinarić et al., 2023).
- Parametric systems: For LTI systems $G(s, p)$ with parametric dependence on $p$, the error is measured with respect to a mixed $\mathcal{H}_2 \otimes \mathcal{L}_2$ norm, and interpolatory optimality requires parameter-averaged tangential Hermite matching, or even parameter-differentiated interpolation when poles are parameter-dependent (Mlinarić et al., 2024, Hund et al., 2021).
- Balancing and structure preservation: For second-order models with inhomogeneous initial conditions, distinct projection strategies and tailored Gramians preserve second-order structure and deliver tight error bounds (Przybilla et al., 2022).
6. Practical Impact and Computational Considerations
Reduced-order LTI models are central in applications where simulation, control, and optimization must be executed on systems with very large state dimension, such as discretized PDEs or networked systems.
- Computational scalability: Modern Krylov-based and rational interpolatory methods require only linear solves and matrix-vector operations, and admit rigorous error quantification and robust performance for large-scale problems as long as matrix exponentials and Lyapunov solutions are tractable (Das et al., 2021). Adaptive randomized SVD and block-AAA techniques further enhance scalability when only transfer function samples or impulse responses are available (Yu et al., 2023, Pelling et al., 10 Jun 2025).
- Model-based control (e.g., ROMPC): Reduced-order LTI surrogates underpin real-time model predictive control for high-dimensional plants, with explicit error bounds on output and input constraints, robust setpoint tracking, and provable stability guarantees (Lorenzetti et al., 2020, Lorenzetti et al., 2018).
- Data-driven surrogates: Non-intrusive (operator inference, Loewner) frameworks build efficient LTI (or bilinear, quadratic) surrogates from input-output data alone, bypassing the need for full-access state-space realizations (Poussot-Vassal et al., 2020).
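The data-driven route can be illustrated with the SISO Loewner framework: a descriptor realization assembled purely from transfer-function samples, interpolating the data by construction. The underlying system below is a random stand-in used only to generate samples; in practice the samples would come from measurements or a black-box solver.

```python
# Sketch: SISO Loewner framework - a descriptor model built purely from
# transfer-function samples; the "true" system is a random stand-in.
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = -np.diag(rng.uniform(0.5, 6.0, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
H = lambda s: (C @ np.linalg.solve(s * np.eye(n) - A, B))[0, 0]

mu = np.array([0.3, 1.1, 2.7, 4.9, 8.0])          # left sample points
lam = np.array([0.7, 1.9, 3.5, 6.2, 9.4])         # right sample points
v = np.array([H(s) for s in mu])                  # left samples
w = np.array([H(s) for s in lam])                 # right samples

# Loewner and shifted Loewner matrices
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# Descriptor realization (E, A, B, C) = (-L, -Ls, v, w): Hhat(s) = w (Ls - s L)^{-1} v
Hhat = lambda s: w @ np.linalg.solve(Ls - s * L, v)
```

With as many left and right samples as the underlying McMillan degree, the Loewner model reproduces the sampled transfer function; with more data, an SVD of the Loewner pencil yields a reduced-order surrogate.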
7. Limitations, Unification, and Outlook
No globally convergent algorithm exists for general high-order multi-input or highly parameterized LTI systems; local optimality is standard, and initialization is critical in iterative methods (Das et al., 2021, Necoara et al., 2018). The accuracy of time-limited reduction, especially outside the prescribed interval, depends critically on the choice of the horizon $T$ and the decay of system modes. For quadratic, nonlinear, or parametric scenarios, achieving all first-order conditions via projection is typically impossible, and alternative formulations provide only near-optimality (Zulfiqar et al., 2024, Hund et al., 2021).
A unifying view emerges: bitangential Hermite interpolation, enforced at the mirror images of the reduced poles, is the fundamental mechanism underlying $\mathcal{H}_2$-optimal reduction across unstructured, structured, time-limited, and parametric LTI models (Mlinarić et al., 2023, Mlinarić et al., 2024). This insight drives the design of efficient, structure-preserving, and application-aware reduced-order modeling algorithms, with ongoing research targeting global optimality, nonlinearity, and efficient handling of parameter and uncertainty.