Control Lyapunov Functions (CLFs)
- Control Lyapunov Functions (CLFs) are scalar functions that certify a system can be stabilized by feedback: at every state some admissible input makes the function decrease, often at a prescribed exponential rate.
- They enable constructive controller synthesis using methods such as Sontag’s formula and convex quadratic programs (CLF-QPs), ensuring safety and optimality in diverse control settings.
- CLFs have been extended through Hamilton–Jacobi theory, sum-of-squares programming, and learning-based techniques, facilitating scalable and robust design for high-dimensional systems.
A Control Lyapunov Function (CLF) is a scalar function that certifies the stabilizability of a dynamical system to a target set or point via feedback, generalizing the classical Lyapunov function by allowing selection of a stabilizing control at each state. CLFs are central in nonlinear control theory for characterizing regions of attraction, synthesizing stabilizing feedback laws, and integrating optimality, robustness, or constraints into closed-loop design. They serve as both a certificate of stabilizability and as a structural tool for constructive controller synthesis, unifying geometric and optimization-based approaches. The concept has been extended via viscosity solutions, Hamilton–Jacobi theory, sum-of-squares relaxations, learning-based synthesis, and compositional analysis for large-scale and uncertain systems.
1. Mathematical Foundations
A CLF is defined for a control-affine system

$$\dot{x} = f(x) + g(x)\,u, \qquad x \in \mathbb{R}^n, \; u \in U \subseteq \mathbb{R}^m,$$

as a smooth or continuous function $V$ on a domain containing the target (usually the origin), satisfying:
- Positive definiteness: $V(0) = 0$ and $V(x) > 0$ for $x \neq 0$
- Dissipation condition: For all $x \neq 0$, there exists $u \in U$ such that
$$\nabla V(x)^\top \big(f(x) + g(x)\,u\big) \le -\gamma V(x)$$
for some $\gamma > 0$ (exponential decay), or more generally $\le -W(x)$ for some positive definite function $W$.
In minimum form, this is
$$\inf_{u \in U} \nabla V(x)^\top \big(f(x) + g(x)\,u\big) \le -\gamma V(x)$$
(Gong et al., 2024). For discrete-time or switched systems, analogous one-step or mode-dependent decrease conditions are required (Noroozi et al., 2019, Ravanbakhsh et al., 2015).
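The minimum-form condition can be checked numerically on sampled states. Below is a minimal sketch for a pendulum-like system with a bounded input set and a hand-picked quadratic candidate $V$; the dynamics, matrix $P$, rate $\gamma$, input bound, and sampling grid are all illustrative assumptions, not taken from the cited works:

```python
import numpy as np

# Illustrative control-affine system: pendulum  xdot = f(x) + g(x) u
#   x = (theta, omega),  f = (omega, sin(theta)),  g = (0, 1)
def f(x):
    return np.array([x[1], np.sin(x[0])])

def g(x):
    return np.array([0.0, 1.0])

# Candidate quadratic CLF V(x) = x^T P x with a hand-picked P (assumption)
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def V(x):
    return x @ P @ x

def gradV(x):
    return 2.0 * P @ x

def clf_condition_holds(x, gamma=0.1, u_max=5.0, n_u=201):
    """Check inf over |u| <= u_max of gradV.(f + g u) <= -gamma V at state x."""
    us = np.linspace(-u_max, u_max, n_u)
    vdots = [gradV(x) @ (f(x) + g(x) * u) for u in us]
    return min(vdots) <= -gamma * V(x)

# Verify the decrease condition on a grid of nonzero states
grid = [np.array([th, om])
        for th in np.linspace(-1.0, 1.0, 11)
        for om in np.linspace(-1.0, 1.0, 11)
        if abs(th) + abs(om) > 1e-9]
print(all(clf_condition_holds(x) for x in grid))
```

Because $\dot V$ is affine in $u$, a coarse sweep over the input interval suffices here; a verifier-grade check would bound the condition over state regions rather than sample points.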
The classical construction is extended to value functions of optimal control or exit-time problems, giving rise to control Lyapunov–value functions (CLVFs) (Gong et al., 2024, Gong et al., 2024, Yegorov et al., 2019), which are viscosity solutions to Hamilton–Jacobi–Bellman or Isaacs variational inequalities (Gong et al., 2024).
2. Feedback Synthesis and Sontag’s Formula
CLFs provide a constructive path to feedback synthesis. For single- and multi-input control-affine systems, Sontag's universal formula yields a continuous feedback law directly from the CLF and its derivatives such that the closed-loop system is stabilizing wherever the CLF condition holds (Bongard et al., 4 Feb 2026, Gemert et al., 2024). Writing $a(x) = L_f V(x)$ and $b(x) = L_g V(x)$, the classical formula is

$$u(x) = \begin{cases} -\dfrac{a(x) + \sqrt{a(x)^2 + \|b(x)\|^4}}{\|b(x)\|^2}\, b(x)^\top, & b(x) \neq 0,\\[4pt] 0, & b(x) = 0, \end{cases}$$

guaranteeing $\dot V(x) < 0$ for all $x \neq 0$ where $V$ is a CLF. Weighted variants with quadratic cost matrices $Q$ and $R$ continuously blend Lyapunov decrease with optimality: near the equilibrium they recover LQR feedback exactly, and more generally they minimize an implicit CLF-dependent cost (Bongard et al., 4 Feb 2026).
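Sontag's classical (unweighted) formula is short enough to implement directly; the sketch below applies it to a scalar linear system with $V(x) = x^2/2$, both chosen purely for illustration:

```python
import numpy as np

def sontag(a, b):
    """Sontag's universal formula. a = LfV(x) (scalar), b = LgV(x) (row vector)."""
    b = np.atleast_1d(np.asarray(b, dtype=float))
    bb = b @ b
    if bb == 0.0:
        return np.zeros_like(b)          # CLF condition forces a < 0 here
    return -((a + np.sqrt(a**2 + bb**2)) / bb) * b

# Scalar example: xdot = x + u, V(x) = x^2 / 2  =>  LfV = x^2, LgV = x
x = 1.0
u = sontag(a=x**2, b=x)[0]               # u = -(1 + sqrt(2)) x
vdot = x**2 + x * u                      # closed-loop Vdot = -sqrt(2) x^2
print(round(u, 4), vdot < 0)
```

For this example the closed loop is $\dot x = -\sqrt{2}\,x$, so the feedback is globally exponentially stabilizing, consistent with the decrease guarantee above.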
For systems with constraints (on state, input, or safety), the stabilizing controller can be computed pointwise as the solution to a convex quadratic program (CLF-QP), ensuring that the CLF decrease condition is respected (Gong et al., 2024, Dai et al., 2022, Taylor et al., 2019):

$$u^*(x) = \arg\min_{u \in U} \; \|u - u_{\mathrm{ref}}(x)\|^2 \quad \text{s.t.} \quad L_f V(x) + L_g V(x)\, u \le -\gamma V(x).$$
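With a single CLF constraint and unbounded inputs, the CLF-QP reduces to an analytic projection of the reference input onto the decrease half-space. A minimal sketch follows, where the system, reference controller, and rate $\gamma$ are chosen only for illustration; constrained input sets require a genuine QP solver:

```python
import numpy as np

def clf_qp(u_ref, LfV, LgV, V, gamma):
    """min_u ||u - u_ref||^2  s.t.  LfV + LgV @ u <= -gamma * V.
    Single affine constraint, unbounded u: analytic half-space projection."""
    u_ref = np.asarray(u_ref, dtype=float)
    LgV = np.atleast_1d(np.asarray(LgV, dtype=float))
    slack = LfV + LgV @ u_ref + gamma * V
    if slack <= 0.0:                 # reference already satisfies the decrease
        return u_ref
    if LgV @ LgV == 0.0:
        raise ValueError("infeasible: LgV = 0 but decrease condition violated")
    return u_ref - (slack / (LgV @ LgV)) * LgV

# Scalar example: xdot = x + u, V = x^2/2, u_ref = 0 (act only when necessary)
x, gamma = 2.0, 1.0
u = clf_qp(u_ref=np.zeros(1), LfV=x**2, LgV=np.array([x]), V=0.5 * x**2,
           gamma=gamma)
vdot = x**2 + x * u[0]
print(u, vdot <= -gamma * 0.5 * x**2)
```

The min-norm structure is what makes CLF-QP controllers "minimally invasive": the reference input is modified only when the decrease constraint would otherwise be violated.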
3. Computational Methods: Value Function, Sum-of-Squares, and Learning
Value function methods. Constructing CLFs by solving Hamilton–Jacobi partial differential equations yields non-smooth CLVFs that are valid for general nonlinear and disturbed systems, encoding both target invariance and explicit exponential convergence (Gong et al., 2024, Gong et al., 2024, Yegorov et al., 2019). The solution can be computed by grid-based time-marching or by curse-of-dimensionality-free local programming (Yegorov et al., 2019).
Sum-of-squares (SOS) and polynomial techniques. For polynomial systems, convex or bilinear SOS programming provides certificates of the CLF condition (and, when paired with CBFs, joint feasibility) (Schneeberger et al., 2023, Dai et al., 2022, Dai et al., 2024). For fixed polynomial degree, necessary and sufficient conditions become SOS feasibility conditions via the Positivstellensatz. Compositional, bilinear-alternation or specialized merging techniques (e.g., control-sharing) facilitate the construction of CLFs for constrained settings (Blanchini et al., 2018, Dai et al., 2022).
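At the heart of an SOS certificate is a Gram-matrix factorization $p(x) = z(x)^\top Q\, z(x)$ with $Q \succeq 0$ over a monomial basis $z$. The toy check below verifies such a certificate for an illustrative closed loop; the system, controller, and $V$ are assumptions, and in practice $Q$ would come from an SDP solver (e.g., Mosek via SOSTOOLS) rather than by hand:

```python
import numpy as np

# Illustrative closed loop: xdot = -x + u with u = -x^3, candidate V = x^2.
# Then -Vdot(x) = 2x^2 + 2x^4, which should be SOS in the monomials z = (x, x^2).
Q = np.array([[2.0, 0.0],      # Gram matrix (from an SDP solver in practice)
              [0.0, 2.0]])

def neg_vdot(x):
    return 2.0 * x**2 + 2.0 * x**4

def gram_form(x, Q):
    z = np.array([x, x**2])
    return z @ Q @ z

# Check 1: Q is positive semidefinite
psd = np.all(np.linalg.eigvalsh(Q) >= -1e-12)
# Check 2: the Gram form reproduces -Vdot; equality of two degree-4
# polynomials at 5 distinct points implies polynomial identity
samples = np.linspace(-2.0, 2.0, 5)
match = np.allclose([gram_form(x, Q) for x in samples],
                    [neg_vdot(x) for x in samples])
print(psd and match)
```

The SOS programs cited above automate exactly this search: the Gram matrix entries become SDP variables, and the Positivstellensatz multipliers handle constrained regions.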
Data-driven and neural methods. Learning-based approaches parameterize the CLF (and sometimes the controller) with neural networks, using architectural inductive biases to enforce CLF positivity, vanishing at the equilibrium, and monotonic decrease outside actuator-saturation regions (Lu et al., 3 Nov 2025, Wei et al., 2023). End-to-end minimization of a Lyapunov-risk loss over sampled states, optionally augmented by geometric shaping or controller regularization, improves both the convergence rate and the size of the certified region of attraction relative to prior learner-verifier frameworks.
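The Lyapunov-risk objective penalizes violations of positivity and decrease at sampled states. Below is a minimal numpy sketch of the loss for a structured candidate $V_\theta(x) = \|\phi(x)-\phi(0)\|^2 + \epsilon\|x\|^2$ (which is positive definite by construction); the feature map, dynamics, and linear controller are illustrative assumptions, and no training loop is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random feature map phi; the structure of V enforces V(0)=0, V(x)>0 otherwise
W1, b1 = rng.normal(size=(2, 16)), rng.normal(size=16)
W2 = rng.normal(size=(16, 4))
EPS = 0.1

def phi(x):
    return np.tanh(x @ W1 + b1) @ W2

def V(x):
    d = phi(x) - phi(np.zeros_like(x))
    return np.sum(d**2, axis=-1) + EPS * np.sum(x**2, axis=-1)

# Illustrative dynamics with a fixed linear controller u = -Kx (assumption)
def xdot(x):
    u = -(2.0 * x[..., 0] + 3.0 * x[..., 1])
    return np.stack([x[..., 1], np.sin(x[..., 0]) + u], axis=-1)

def lyapunov_risk(xs, gamma=0.1, dt=1e-4):
    """Mean hinge penalty on the decrease condition Vdot + gamma*V <= 0,
    with Vdot approximated by a forward difference along the flow."""
    vdot = (V(xs + dt * xdot(xs)) - V(xs)) / dt
    return np.mean(np.maximum(0.0, vdot + gamma * V(xs)))

xs = rng.uniform(-1.0, 1.0, size=(256, 2))
loss = lyapunov_risk(xs)
print(loss >= 0.0)
```

In an actual learner, this loss would be differentiated with respect to the network weights (and possibly the controller) and driven toward zero, after which a verifier certifies the condition over the whole region rather than at samples.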
Compositional and decomposition-based synthesis. For high-dimensional or large-scale systems, system decomposition and compositional CLF construction enable tractable synthesis: breaking a system into self-contained subsystems, computing low-dimensional CLVFs, and reconstructing a global CLF via max, sum, or admissible control set projection (Gong et al., 2024, Gong et al., 2024).
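The reconstruction step itself is simple to sketch: given subsystem functions on disjoint state blocks, a global candidate is formed by max- or sum-composition. The subsystem quadratics below are illustrative, and the admissibility conditions under which the composition is exact (per the cited works) are not modeled:

```python
import numpy as np

# Illustrative subsystem CLFs on x1 = (x[0],) and x2 = (x[1], x[2])
def V1(x1):
    return 2.0 * x1[0]**2

def V2(x2):
    return x2[0]**2 + 0.5 * x2[1]**2

def composite_max(x):
    """Global candidate via max-composition of subsystem values."""
    return max(V1(x[:1]), V2(x[1:]))

def composite_sum(x):
    """Alternative sum-composition."""
    return V1(x[:1]) + V2(x[1:])

x = np.array([1.0, 2.0, -1.0])
print(composite_max(x), composite_sum(x))
```

Each subsystem value is computed on a low-dimensional grid, so the only high-dimensional operation is this cheap pointwise composition.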
4. CLFs in Control System Design: Strictness, Barriers, and Constraints
Strict CLFs quantify robustness and rate of convergence by enforcing uniform exponential decay, yielding explicit class-$\mathcal{KL}$ estimates and infinite gain margins (Todorovski et al., 29 Sep 2025). For nonholonomic and underactuated systems, special coordinate choices and modular decomposition have produced globally strict CLFs paired with inverse-optimal controller redesigns.
Barrier CLFs encode invariance principles for safety-critical systems: "barrier variants" diverge or penalize trajectories near constraint boundaries, ensuring almost-global stabilization away from a zero-measure exclusion set (Todorovski et al., 29 Sep 2025). Merging CLFs with barrier functions is addressed via joint convex or SOS formulations, with compatibility certificates (e.g., via Farkas’ Lemma and Positivstellensatz) guaranteeing simultaneous feasibility for the closed-loop controller (Gemert et al., 2024, Dai et al., 2024, Schneeberger et al., 2023).
Input and state constraints are incorporated into CLF synthesis through explicit feasibility conditions at control set vertices, via polyhedral, piecewise-affine, or robust convex programming (Houska et al., 20 Mar 2025, Dai et al., 2022). For discrete-time and hybrid systems, finite-step, flexible, or non-zeno CLFs permit contraction (or regulated non-monotonicity) over blocks or dwell intervals (Noroozi et al., 2019, Lazar, 2010, Ravanbakhsh et al., 2015).
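Because $\dot V$ is affine in $u$ for control-affine systems, feasibility of the decrease condition over a polytopic input set can be checked at the vertices alone. The sketch below does this for a box input set; the numbers are illustrative assumptions:

```python
from itertools import product

import numpy as np

def feasible_at_vertices(LfV, LgV, V, gamma, u_lo, u_hi):
    """Check  min over box vertices of  LfV + LgV @ u  <= -gamma * V.
    Since Vdot is affine in u, its minimum over a box is attained at a vertex."""
    LgV = np.asarray(LgV, dtype=float)
    vertices = product(*zip(u_lo, u_hi))
    best = min(LfV + LgV @ np.array(v) for v in vertices)
    return best <= -gamma * V

# Illustrative state with two inputs: LfV = 1.5, LgV = (1, -2), V = 1
print(feasible_at_vertices(LfV=1.5, LgV=[1.0, -2.0], V=1.0, gamma=0.5,
                           u_lo=[-1.0, -1.0], u_hi=[1.0, 1.0]))
```

Vertex enumeration is exponential in the input dimension; the polyhedral and robust convex programs cited above avoid this by encoding the vertex conditions directly as constraints.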
5. Extensions: Robust, Learning-based, and High-Dimensional CLFs
Robust CLFs (R-CLVFs) extend the CLF concept to systems under bounded disturbances, via min–max Hamilton–Jacobi–Isaacs value functions with explicit robust control-invariant sets and regions of exponential stabilizability (Gong et al., 2024, Gong et al., 2024). The region of exponential stabilizability shrinks as the prescribed decay rate increases, but convergence is guaranteed within the constructed domain.
Learning-based frameworks optimize CLFs and controllers directly from data, leveraging online or episodic updates. Neural ISS-CLFs, established for systems with unstructured uncertainties, provide forward invariance and input-to-state stability under learned controllers when the CLF gradient is bounded (Wei et al., 2023, Lu et al., 3 Nov 2025).
For high-dimensional systems, decomposition-based CLF/CLVF synthesis (e.g., for quadrotors or coupled ODE-PDE systems) overcomes the curse of dimensionality by reconstructing the global value function from tractable subsystem computations, with exactness or Lipschitz continuity proven under system-specific admissibility conditions (Gong et al., 2024).
6. Applications and Implementation
CLFs underpin a wide spectrum of applications, including:
- Nonlinear tracking and stabilization of robotic manipulators, vehicles, or generic nonlinear plants via CLF-QP, NMPC-CLF, or data-driven feedback (Grandia et al., 2020, Taylor et al., 2019, Ravanbakhsh et al., 2018).
- Global stabilization of nonholonomic systems (e.g., unicycle, ballbot) via modular strict CLFs and robust feedback with explicit convergence guarantees (Todorovski et al., 29 Sep 2025).
- Model predictive control for discrete-time nonlinear systems employing fs-CLFs for contractive or robust stability under interleaved optimization (Noroozi et al., 2019).
- Constrained LQ stabilization via CLF merging, maximizing safety and local optimality under hard state constraints (Blanchini et al., 2018).
- Boundary stabilization of distributed parameter systems (e.g., 1D parabolic PDEs) using structured CLFs for exponential decay (Karafyllis, 2019).
- Certification and synthesis of safe-stabilizable regions for nonlinear plants with polynomial barriers and constraints, utilizing SOS convex programs (Dai et al., 2022, Schneeberger et al., 2023, Dai et al., 2024).
Prototype implementations utilize direct transcription, QP solvers, SOS toolboxes (Mosek, SOSTOOLS), or deep learning frameworks, exploiting parallelization and compositionality to address high dimension or real-time constraints.
7. Theoretical and Practical Impact
The control Lyapunov function framework represents a unifying language for stability analysis, controller synthesis, and performance robustification in nonlinear, hybrid, and high-dimensional systems. Contemporary advances have closed key theoretical gaps in compatibility (e.g., CLF–CBF), computational tractability (e.g., system decomposition, sum-of-squares), and data-driven synthesis (e.g., end-to-end neural CLF optimization with inductive bias). Numerically robust and scalable algorithms now exist for a wide range of control tasks, including global/inverse-optimal stabilization, constraint and barrier handling, and robust safe learning.
Current research trends address compositional and distributed synthesis for networked systems, extensions to stochastic or risk-sensitive settings, online and adaptive CLF learning, and further integration of formal methods for certification and verification in increasingly uncertain and high-dimensional environments (Gong et al., 2024, Dai et al., 2024, Houska et al., 20 Mar 2025, Wei et al., 2023).