Learning Certified Neural Network Controllers Using Contraction and Interval Analysis

Published 30 Mar 2026 in eess.SY | (2603.28011v1)

Abstract: We present a novel framework that jointly trains a neural network controller and a neural Riemannian metric with rigorous closed-loop contraction guarantees using formal bound propagation. Directly bounding the symmetric Riemannian contraction linear matrix inequality causes unnecessary overconservativeness due to poor dependency management. Instead, we analyze an asymmetric matrix function $G$, where $2^n$ GPU-parallelized corner checks of its interval hull verify that an entire interval subset $X$ is a contraction region in a single shot. This eliminates the sample complexity problems encountered with previous Lipschitz-based guarantees. Additionally, for control-affine systems under a Killing field assumption, our method produces an explicit tracking controller capable of exponentially stabilizing any dynamically feasible trajectory using just two forward inferences of the learned policy. Using JAX and $\texttt{immrax}$ for linear bound propagation, we apply this approach to a full 10-state quadrotor model. In under 10 minutes of post-JIT training, we simultaneously learn a control policy $π$, a neural contraction metric $Θ$, and a verified 10-dimensional contraction region $X$.

Summary

  • The paper introduces a novel framework integrating contraction theory and interval analysis to certify neural network controllers with provable stability guarantees.
  • It leverages an asymmetric matrix formulation and GPU-parallelized interval checks to reduce conservatism and scale verification to high-dimensional systems.
  • Empirical evaluations on a 10-dimensional quadrotor demonstrate enhanced tracking performance and efficient certification across complex state regions.

Certified Learning of Neural Network Controllers via Contraction and Interval Analysis

Introduction

The paper "Learning Certified Neural Network Controllers Using Contraction and Interval Analysis" (2603.28011) introduces a formal framework for synthesizing neural feedback controllers with provable closed-loop contraction guarantees. The approach addresses a persistent challenge in nonlinear control: leveraging the expressivity of neural networks for highly nonlinear plants while simultaneously providing mathematically rigorous certification of system behavior. Prior verification methods for neural control, especially those based on Lyapunov and barrier functions, are hindered by sample complexity, over-conservatism from interval over-approximations, and poor scalability to higher-dimensional systems. In contrast, this work leverages an asymmetric matrix formulation of contraction theory and novel interval analysis techniques to resolve critical dependency-management challenges, enabling tractable, parallelizable verification over large contraction regions.

Theoretical Foundations

Riemannian Contraction and Certification

The central theoretical contribution is a reformulation of the Riemannian contraction condition to exploit an asymmetric contraction matrix $G$, in contrast to the standard symmetric LMI condition $S$:

$$S(t,x) := M(x)\,\frac{\partial f}{\partial x}(t,x) + \left( \frac{\partial f}{\partial x}(t,x) \right)^T M(x) + \partial_f M(x) + 2c\,M(x)$$

Here, $M(x)$ is a smooth, positive-definite Riemannian metric. Instead of directly certifying negative definiteness via over-approximations—which incurs significant dependency-related overestimation—the method introduces $G$:

$$G(t,x) = \Theta(x)^T \left[ \partial_f \Theta(x) + \Theta(x) \left( \frac{\partial f}{\partial x}(t,x) + cI \right) \right]$$

with $M(x) = \Theta(x)^T \Theta(x)$. Since $S = G + G^T$, the contraction LMI $S \preceq 0$ holds exactly when the logarithmic norm $\mu_2(G) = \lambda_{\max}\big((G + G^T)/2\big)$ is nonpositive. Bounding $\mu_2(G)$ rather than the eigenvalues of $S$ avoids duplicating interval dependencies, dramatically reducing conservatism in the verification pipeline and leading to significantly larger certifiable contraction regions.
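The relationship between the log norm of $G$ and the symmetric condition $S$ can be checked numerically. The following is a minimal illustration (not the paper's code) for the special case of a linear system with $\Theta = I$, where $G$ reduces to $\partial f/\partial x + cI$:

```python
import jax.numpy as jnp

def mu2(G):
    """Logarithmic norm (matrix measure) induced by the 2-norm:
    mu2(G) = lambda_max((G + G^T) / 2)."""
    sym = 0.5 * (G + G.T)
    return jnp.max(jnp.linalg.eigvalsh(sym))

# Toy check on a contracting linear system x' = A x with M = I (Theta = I):
# here G reduces to A + c*I, and mu2(G) <= 0 certifies S = G + G^T <= 0.
A = jnp.array([[-2.0, 1.0],
               [0.0, -3.0]])
c = 0.5
G = A + c * jnp.eye(2)
print(float(mu2(G)))  # negative => contraction at rate c is certified
```

Because $\mu_2(G) = \lambda_{\max}(S)/2$, a nonpositive log norm of the asymmetric matrix is equivalent to the symmetric LMI, without ever forming $S$ inside the interval arithmetic.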

Interval Analysis and GPU-Parallelized Certification

A core technical advancement is an interval analysis scheme that employs $2^n$ GPU-parallelized corner checks (where $n$ is the state dimension) to verify the contraction condition over an entire interval subset $X$ of the state space. By exploiting the structure of the logarithmic norm and leveraging a key result from interval matrix theory [Rohn 1994], the method computes the maximum of $\mu_2$ over all sign patterns on the diagonal of the interval hull, which provably covers the worst-case scenario for contraction violations over all of $X$. Although the number of corner checks grows as $2^n$, they are independent and dispatched in parallel on the GPU, sidestepping the far worse sample complexity of naive sampling-based certification and making the method applicable in dimensions previously considered intractable.
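The vertex-enumeration idea can be sketched as follows. This is an illustration of Rohn's theorem for the maximum eigenvalue of a symmetric interval matrix, not the paper's implementation (which applies the analogous result to the log norm of $G$ and uses immrax for the bound propagation itself):

```python
import itertools
import jax
import jax.numpy as jnp

def rohn_max_eig(center, radius):
    """Upper bound on lambda_max over a symmetric interval matrix [C-R, C+R].
    By Rohn's theorem the maximum is attained at a vertex matrix
    C + D_z R D_z with D_z = diag(z), z in {-1, +1}^n."""
    n = center.shape[0]
    signs = jnp.array(list(itertools.product((-1.0, 1.0), repeat=n)))  # (2^n, n)

    def vertex_eig(z):
        Dz = jnp.diag(z)
        return jnp.max(jnp.linalg.eigvalsh(center + Dz @ radius @ Dz))

    # All 2^n corner checks evaluated in parallel via vmap.
    return jnp.max(jax.vmap(vertex_eig)(signs))

# Example: interval of width 0.2 around a stable symmetric matrix.
C = jnp.array([[-3.0, 0.5],
               [0.5, -4.0]])
R = 0.1 * jnp.ones((2, 2))
bound = rohn_max_eig(C, R)
# If bound <= 0, every matrix in the interval is negative semidefinite,
# certifying the contraction condition over the whole region in one shot.
```

The key point is that a single batched evaluation of the $2^n$ vertex matrices certifies the entire continuum of matrices in the interval hull.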

Algorithmic Framework

The certification-training framework jointly optimizes two neural networks: the control policy $\pi$ and the upper-triangular metric factor $\Theta$ defining $M(x) = \Theta(x)^T \Theta(x)$. The algorithm (see Algorithm 1 in the paper) utilizes JAX-based automatic differentiation, interval bound propagation (via immrax), and CROWN-style linear bound propagation, with a composite loss that ensures:

  • The interval hull of $G$ over $X$ satisfies the logarithmic-norm contraction condition uniformly.
  • $M(x) = \Theta(x)^T \Theta(x)$ remains strictly positive-definite with prescribed bounds.
  • The logarithmic-norm maximizations are efficiently parallelized on the GPU.

This integration enables end-to-end differentiable training with certified contraction throughout the set $X$.
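The structure of such a composite loss can be sketched as below. This is a hypothetical illustration of the two certified terms described above; the paper's exact loss terms, weights, and bound-propagation machinery (immrax, CROWN-style bounds) are not reproduced here:

```python
import jax.numpy as jnp

def composite_loss(mu2_upper_bound, theta_diag, eps=1e-2, w_pd=1.0):
    """Hypothetical composite training loss (illustrative only).

    mu2_upper_bound: an interval/CROWN upper bound on mu2(G) over X.
    theta_diag: diagonal entries of the upper-triangular factor Theta.
    """
    # (1) Contraction term: hinge on the certified upper bound of mu2(G),
    #     pushing the worst-case log norm over X below zero.
    contraction = jnp.maximum(mu2_upper_bound, 0.0)
    # (2) Positive-definiteness term: keeping Theta's diagonal above eps
    #     keeps M = Theta^T Theta strictly positive-definite.
    pd_penalty = jnp.sum(jnp.maximum(eps - theta_diag, 0.0) ** 2)
    return contraction + w_pd * pd_penalty

# A certified configuration (negative mu2 bound, healthy diagonal)
# incurs zero loss; violations are penalized smoothly.
loss = composite_loss(jnp.array(-0.3), jnp.array([1.0, 0.8, 1.2]))
```

Because both terms are differentiable almost everywhere, gradients flow through the certification bounds back into the weights of $\pi$ and $\Theta$.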

Explicit Tracking Controller Synthesis

For general affine-in-control systems, the framework extends to the design of explicit tracking controllers under a strong constant-curvature (Killing field) condition. If the control vector field $g$ and the metric $M$ satisfy the Killing equation

$$\partial_g M(x) + M(x)\,\frac{\partial g}{\partial x}(x) + \left( \frac{\partial g}{\partial x}(x) \right)^T M(x) = 0,$$

then a simple feedback form, combining the reference input with the difference of two evaluations of the learned policy (schematically, $u(t,x) = u^*(t) + \pi(x) - \pi(x^*(t))$, up to the paper's exact parameterization),

exponentially stabilizes the system along an arbitrary dynamically feasible reference trajectory $x^*(\cdot)$ in the contraction region $X$. Notably, this controller is explicit, requiring only two policy evaluations per timestep, and obviates the need for online geodesic computation or state-space augmentation prevalent in previous contraction-based tracking algorithms.
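The "two inference" structure can be sketched as follows. The policy `pi` and the combination rule here are hypothetical stand-ins illustrating the idea of an explicit controller assembled from the reference input and two policy evaluations; the paper's actual parameterization may differ:

```python
import jax.numpy as jnp

def pi(x):
    # Stand-in "learned policy": a fixed linear map, for illustration only.
    K = jnp.array([[1.0, 0.5]])
    return -K @ x

def tracking_control(x, x_ref, u_ref):
    # Two forward inferences of the policy per timestep: pi(x) and pi(x_ref).
    # No online geodesic computation or optimization is required.
    return u_ref + pi(x) - pi(x_ref)

x = jnp.array([0.2, -0.1])
x_ref = jnp.array([0.0, 0.0])
u_ref = jnp.array([0.3])
u = tracking_control(x, x_ref, u_ref)
# When x == x_ref, the controller reproduces the reference input exactly.
```

The appeal of this form is computational: each control step costs two policy forward passes, so it deploys at the same order of cost as the policy itself.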

Empirical Evaluation: 10-Dimensional Quadrotor

The effectiveness and scalability of the approach are demonstrated on a nonlinear 10-state quadrotor model, a canonical benchmark in nonlinear control. The joint neural policy and metric are successfully trained and certified (in under 10 minutes of training) over a 10-dimensional interval region $X$ encompassing realistic bounds on translation, attitude, and thrust.

Key empirical findings include:

  • Sample Complexity Reduction: The certifiable contraction region is substantially larger compared to Lipschitz or sample-based certification approaches, which become infeasible in such high dimensions.
  • Strong Empirical Tracking: The neural controller, coupled with the certified metric, tracks complex trajectories (e.g., figure-eight, helix, trefoil), consistently regulating state tracking errors and returning to the contraction region even if briefly left due to system nonlinearities or initial condition perturbations.
  • Efficient Verification: All $2^{10} = 1024$ corner cases for contraction verification are dispatched in parallel, confirming the practical tractability of the analysis.


Figure 1: Verified contraction and stable trajectory tracking in a 10-state quadrotor; each subplot demonstrates contraction behavior from random initial conditions under the certified policy for distinct reference trajectories.

Practical and Theoretical Implications

This framework offers a tractable certification strategy for learning-based controllers in safety-critical high-dimensional systems. By rigorously verifying contraction over nontrivial subsets of the state space and by reducing conservatism from dependency management, the method closes a critical gap in uniting expressive policy learning with formal guarantees. The explicit tracking form (when Killing fields are satisfied) makes the approach particularly practical, mitigating online computational burdens.

Theoretically, the asymmetric contraction condition and interval analysis technique point toward more general applications in nonlinear stability verification, particularly the scalable synthesis of neural Lyapunov and barrier certificates. The explicit feedback controllers bridge the gap between neural network approximations and classical geometric control synthesis grounded in contraction theory.

Future Directions

Potential extensions of this work include:

  • Forward Invariance: Developing methods for ensuring forward invariance of the certified region to guarantee robust safety under bounded uncertainties.
  • Saturation and Realizability: Integrating bounded control input constraints for deployment on real platforms.
  • Generalized Metrics: Extending certification and synthesis to Finsler-Lyapunov functions and non-Euclidean (e.g., polyhedral) norms for systems where Riemannian structures are restrictive.
  • Modular/Hierarchical Control: Exploiting compositional properties of contraction for modular-certified control design in large-scale systems.

Conclusion

The paper offers a significant advance in certified control synthesis for nonlinear systems with neural network policies. By introducing an efficiently verifiable, tractable, and provably less conservative criterion for contraction certification, the methodology enables the practical deployment of deep learning controllers in environments demanding strong theoretical safety and stability guarantees. The integration of GPU-parallelizable interval analysis and explicit controller synthesis frameworks is likely to catalyze further research in scalable, certified learning for complex dynamical systems.
