Maximum Hands-Off Sparse Control Framework
- The topic is a control framework that minimizes active control time by optimizing the support of control signals under system constraints.
- It employs an $L^0$ optimality criterion, leading to a bang–off–bang control profile that is equivalent to $L^1$-optimal control under normality conditions.
- The framework features a continuous, strictly convex value function that ensures energy savings and robust model predictive control implementations.
Maximum hands-off sparse control, also known as $L^0$-optimal control, is a paradigm in control theory where the objective is to maximize intervals of actuator inactivity—that is, to synthesize control laws that are exactly zero for as much of the time horizon as possible, subject to system constraints and prescribed boundary conditions. The central problem is to find, among all admissible controls, one whose support (the set of times when the control is nonzero) has minimal Lebesgue measure. This minimizes actuator usage and facilitates significant savings in energy, hardware wear, or communication resources, with applications across green control, networked control systems, and embedded platforms.
1. Formal Problem Statement and Plant Class
The canonical setting is a linear time-invariant (LTI) system
$$\dot{x}(t) = A x(t) + B u(t), \qquad x(0) = \xi,$$
with $A \in \mathbb{R}^{n \times n}$ nonsingular and $B \in \mathbb{R}^{n \times m}$, under a pointwise amplitude constraint $\|u(t)\|_\infty \le 1$ for almost all $t \in [0, T]$. The admissible control set for an initial state $\xi$ is
$$\mathcal{U}(\xi) = \big\{\, u \in L^\infty[0, T] : \|u\|_\infty \le 1,\ x(0) = \xi,\ x(T) = 0 \,\big\}.$$
The maximum hands-off ($L^0$-optimal) control problem seeks
$$V(\xi) = \min_{u \in \mathcal{U}(\xi)} \|u\|_0, \qquad \|u\|_0 \triangleq \mu\big(\{\, t \in [0, T] : u(t) \neq 0 \,\}\big),$$
where $\mu$ is the Lebesgue measure (“total time-on”) of the control’s support.
The domain of $V$ is the reachable set
$$\mathcal{R}(T) = \{\, \xi \in \mathbb{R}^n : \mathcal{U}(\xi) \neq \emptyset \,\},$$
and $V(\xi) = +\infty$ when $\xi \notin \mathcal{R}(T)$. The cost can also be written as $\|u\|_0 = \int_0^T \phi_0(u(t))\,dt$ with $\phi_0(v) = 1$ if $v \neq 0$, $0$ else.
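As a concrete reading of this cost functional, the discretized “total time-on” of a sampled control signal can be computed directly. The signal below is a hypothetical bang–off profile used only for illustration:

```python
import numpy as np

# Discretized L0 cost: mu({t : u(t) != 0}) ~ dt * #{k : |u_k| > tol}.
# The sampled control below is hypothetical: off, one bang arc, off again.
dt = 0.01                                        # 10 s horizon on a 1000-point grid
u = np.concatenate([np.zeros(300), np.ones(150), np.zeros(550)])
tol = 1e-9                                       # numerical stand-in for "nonzero"
l0_cost = dt * np.count_nonzero(np.abs(u) > tol)
print(l0_cost)                                   # 1.5 s of actuator activity
```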
2. Existence, Equivalence to $L^1$-Optimal Control, and Bang–Off–Bang Structure
Under the foundational normality condition—specifically, $(A, B)$ controllable and $A$ nonsingular—the $L^0$-optimal control problem is equivalent to $L^1$-optimal control:
$$\min_{u \in \mathcal{U}(\xi)} \|u\|_1, \qquad \|u\|_1 = \int_0^T |u(t)|\,dt.$$
The optimal law takes a bang–off–bang profile, $u^*(t) \in \{-1, 0, +1\}$ almost everywhere—i.e., the solution alternates between maximal input, zero, and possibly the opposite extreme, with long segments of exactly zero control. The proof uses the Pontryagin Maximum Principle, showing that minimization of the Hamiltonian over $|u| \le 1$ yields extremal controls which are either at $\pm 1$ or $0$; normality rules out singular intervals on which the switching function could linger at the threshold.
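The equivalence can be observed numerically. The sketch below makes several illustrative assumptions not drawn from the source: a harmonic-oscillator plant (so $A$ is nonsingular and $(A, B)$ controllable), a forward-Euler discretization rather than exact zero-order hold, and SciPy's `linprog` as the LP solver for the discretized $L^1$ problem in epigraph form. It then checks that the computed control is essentially bang–off–bang:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative plant: harmonic oscillator (A nonsingular, (A, B) controllable).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
T, N = 10.0, 200
dt = T / N
Ad = np.eye(2) + A * dt        # forward-Euler model (a sketch, not exact ZOH)
Bd = B * dt
xi = np.array([1.0, 0.0])      # initial state to steer to the origin at time T

# Terminal constraint x_N = 0:  Ad^N xi + sum_k Ad^(N-1-k) Bd u_k = 0.
G = np.zeros((2, N))
M = np.eye(2)
for k in range(N - 1, -1, -1):
    G[:, k] = (M @ Bd).ravel()
    M = M @ Ad                 # M ends up equal to Ad^N

# Epigraph LP: variables z = [u, t], minimize sum(t) s.t. t >= |u|, |u| <= 1.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)],
                 [-np.eye(N), -np.eye(N)]])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
              A_eq=np.hstack([G, np.zeros((2, N))]), b_eq=-(M @ xi),
              bounds=[(-1.0, 1.0)] * N + [(0.0, None)] * N)
u = res.x[:N]

# Bang-off-bang check: each sample should sit (numerically) at -1, 0, or +1.
dist = np.minimum.reduce([np.abs(u + 1), np.abs(u), np.abs(u - 1)])
print(res.status, dist.max(), np.mean(np.abs(u) < 1e-3))
```

Most samples come out exactly zero, with the remainder saturated at $\pm 1$ up to a few fractional entries at the discrete switching instants, an artifact of the time grid rather than of the continuous-time theory.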
This equivalence does not necessarily hold if normality fails; in such cases, $L^1$-optimal controls may fill in zero intervals with non-sparse continuous arcs, destroying hands-off properties (Ikeda et al., 2015, Chatterjee et al., 2016).
3. Analytical Properties of the Value Function
The value function $V$ possesses critical regularity and convexity properties:
- Domain: $V$ is defined and finite on the reachable set $\mathcal{R}(T)$.
- Continuity: Under $(A, B)$ controllable and $A$ nonsingular, $V$ is continuous on $\mathcal{R}(T)$ [Theorem 3.3, (Ikeda et al., 2014)]. Open sub-level sets correspond exactly to interiors of truncated reachable sets parameterized by cost thresholds.
- Strict Convexity: $V$ is strictly convex on $\mathcal{R}(T)$ [Theorem 4.1, (Ikeda et al., 2014)]. For any distinct $\xi_1, \xi_2 \in \mathcal{R}(T)$ and $\lambda \in (0, 1)$,
$$V\big(\lambda \xi_1 + (1 - \lambda)\xi_2\big) < \lambda V(\xi_1) + (1 - \lambda) V(\xi_2).$$
The strictness exploits the uniqueness of bang–off–bang profiles; equality cannot be achieved unless the controls coincide almost everywhere.
- Level Set Structure: For $c \in [0, T]$, the sublevel sets $\{\xi : V(\xi) \le c\}$, and in particular $\mathcal{R}(T) = \{\xi : V(\xi) \le T\}$ itself, define closed, convex subsets of $\mathbb{R}^n$.
- Sensitivity and Robustness: The continuity of $V$ implies that small perturbations in the initial state produce only small changes in the value function, supporting robust bounds on the increase in sparsity cost under model and state uncertainty.
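These properties can be probed numerically. The sketch below assumes an illustrative discretized harmonic-oscillator plant (forward-Euler, not from the source) and approximates $V$ by the support measure of the discretized $L^1$-optimal control, relying on the normality equivalence; it then checks the midpoint convexity inequality up to grid error:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative plant and grid: V(xi) approximated as the support measure of the
# L1-optimal discretized control (valid under the normality equivalence).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
dt, N = 0.05, 200
Ad, Bd = np.eye(2) + A * dt, B * dt          # forward-Euler sketch
G = np.zeros((2, N)); M = np.eye(2)
for k in range(N - 1, -1, -1):
    G[:, k] = (M @ Bd).ravel(); M = M @ Ad   # M ends up equal to Ad^N

A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
A_eq = np.hstack([G, np.zeros((2, N))])
bounds = [(-1.0, 1.0)] * N + [(0.0, None)] * N
c = np.concatenate([np.zeros(N), np.ones(N)])

def V_hat(xi):
    """Approximate V(xi): support measure of the discretized L1-optimal control."""
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq,
                  b_eq=-(M @ np.asarray(xi)), bounds=bounds)
    assert res.status == 0, "xi outside the (discretized) reachable set"
    return dt * np.count_nonzero(np.abs(res.x[:N]) > 1e-3)

x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v1, v2, vmid = V_hat(x1), V_hat(x2), V_hat(0.5 * (x1 + x2))
print(v1, v2, vmid)
```

Up to discretization error, the midpoint value should not exceed the average of the endpoint values, consistent with convexity; the strict gap is only resolved as the grid is refined.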
4. Maximum Hands-Off Control in Predictive and Feedback Schemes
The regularity and strict convexity of $V$ facilitate its use as a terminal (value) function in model predictive control (MPC) settings:
- Terminal Cost: Choosing the terminal cost $V_f = V$, with a terminal constraint confining the predicted terminal state to $\mathcal{R}(T)$, satisfies the standard Lyapunov decrease conditions, ensuring asymptotic stability of the origin in closed-loop MPC.
- Sublevel Sets for Robustness: The closed, convex nature of sublevel sets provides robustness margins for handling initial-state uncertainties and defining invariant sets for terminal constraints.
- Value Function as Lyapunov Function: Under the stated assumptions, $V$ is positive definite on $\mathcal{R}(T)$: $V(0) = 0$, and $V(\xi) > 0$ otherwise, qualifying it as a control Lyapunov function.
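A minimal closed-loop sketch of the receding-horizon idea, assuming a shrinking-horizon scheme in which a sparse steering LP is re-solved at every step and only the first input is applied; the plant, horizon, Euler discretization, and horizon floor are illustrative choices, not the terminal-cost construction from the source:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative shrinking-horizon loop on a discretized harmonic oscillator.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
dt, H = 0.1, 80
Ad, Bd = np.eye(2) + A * dt, B * dt    # Euler model, reused as the simulated plant

def plan(xi, H_k):
    """Min-L1 (sparse, under normality) plan steering xi to 0 in H_k steps."""
    G = np.zeros((2, H_k)); M = np.eye(2)
    for k in range(H_k - 1, -1, -1):
        G[:, k] = (M @ Bd).ravel(); M = M @ Ad
    c = np.concatenate([np.zeros(H_k), np.ones(H_k)])
    A_ub = np.block([[np.eye(H_k), -np.eye(H_k)],
                     [-np.eye(H_k), -np.eye(H_k)]])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * H_k),
                  A_eq=np.hstack([G, np.zeros((2, H_k))]), b_eq=-(M @ xi),
                  bounds=[(-1.0, 1.0)] * H_k + [(0.0, None)] * H_k)
    assert res.status == 0
    return res.x[:H_k]

x = np.array([1.0, 0.0])
applied = []
for k in range(H):
    u0 = plan(x, max(H - k, 2))[0]     # floor of 2 keeps the final LPs well-posed
    applied.append(u0)
    x = Ad @ x + Bd.ravel() * u0       # apply only the first input, then re-solve
print(np.linalg.norm(x), np.mean(np.abs(np.array(applied)) < 1e-3))
```

Because the tail of each plan remains feasible for the next, shorter problem (the origin is an equilibrium), the loop stays recursively feasible and the state is driven numerically to zero, with the actuator idle for most of the run.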
5. Practical Computation, Approximations, and Algorithmic Approaches
- Convex Formulation ($L^1$-relaxation): When normality holds, the $L^0$ problem is solved by standard $L^1$-optimal control algorithms (linear programming, indirect shooting, direct collocation).
- Iterative Reweighting: For systems where direct equivalence may not hold or for increased sparsity, iterative reweighted $L^1$ (IRL1) algorithms are recommended:
- Solve the weighted problem $\min_u \int_0^T w_k(t)\,|u(t)|\,dt$ using the current weights, then update $w_{k+1}(t) = 1/\big(|u_k(t)| + \epsilon\big)$ to concentrate support on smaller intervals.
- Mixed-Integer Programming: Discretize time and introduce binary variables $\delta_k \in \{0, 1\}$ for on/off control activity as $|u_k| \le \delta_k$, then minimize $\sum_k \delta_k$ using MILP solvers.
- Bang–Off–Bang Enforceability: Numerical schemes must ensure that the controls remain in $\{-1, 0, +1\}$ almost everywhere to uphold the hands-off structure; discretization artifacts can lead to suboptimal non-sparse solutions.
- Algorithmic Challenges:
- Accurate enforcement of switching conditions.
- Managing time-discretization resolution versus computational complexity.
- Warm-starting iterative algorithms to ensure fast convergence.
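The iterative-reweighting recipe above can be sketched as follows; the Candès–Wakin–Boyd-style weight update $w_{k+1} = 1/(|u_k| + \epsilon)$, the discretized harmonic-oscillator plant, and the use of SciPy's `linprog` are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# IRL1 sketch on a discretized steering problem (illustrative plant and grid).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
dt, N = 0.05, 200
Ad, Bd = np.eye(2) + A * dt, B * dt
xi = np.array([1.0, 0.0])

G = np.zeros((2, N)); M = np.eye(2)
for k in range(N - 1, -1, -1):
    G[:, k] = (M @ Bd).ravel(); M = M @ Ad   # M ends up equal to Ad^N
A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
A_eq = np.hstack([G, np.zeros((2, N))])
bounds = [(-1.0, 1.0)] * N + [(0.0, None)] * N

w, eps = np.ones(N), 1e-3
for it in range(5):
    c = np.concatenate([np.zeros(N), w])     # minimize sum_k w_k |u_k|
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq,
                  b_eq=-(M @ xi), bounds=bounds)
    u = res.x[:N]
    w = 1.0 / (np.abs(u) + eps)              # small |u_k| -> large next weight
support = dt * np.count_nonzero(np.abs(u) > 1e-6)
print(support)
```

The reweighting step penalizes samples that were already near zero, pushing the support onto fewer, shorter intervals across iterations; warm-starting each LP from the previous solution (not shown) speeds convergence in practice.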
6. Illustrative and Analytical Example: Scalar Case
For the scalar plant $\dot{x}(t) = a x(t) + u(t)$ with $a > 0$ and $|u(t)| \le 1$, the exact value function and optimal support can be computed analytically. For $\xi \in \mathcal{R}(T)$, i.e. $|\xi| \le (1 - e^{-aT})/a$, the optimal hands-off control has a single switch time $\tau = V(\xi)$, with
$$u^*(t) = \begin{cases} -\operatorname{sgn}(\xi), & 0 \le t < \tau, \\ 0, & \tau \le t \le T, \end{cases} \qquad V(\xi) = -\frac{1}{a}\ln\big(1 - a|\xi|\big).$$
This function is continuous and strictly convex in $\xi$, confirming the general theory. The control is exactly zero outside a single minimal interval, applying maximal input only when strictly necessary.
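A quick numerical check of the single-switch construction for the unstable scalar case (the choice $a = 1$, $\xi = 0.5$ is illustrative): a bang arc $u = -\operatorname{sgn}(\xi)$ of length $\tau = -\frac{1}{a}\ln(1 - a|\xi|)$ lands exactly on the origin, which zero input then holds, and the resulting value function is strictly convex:

```python
import numpy as np

# Scalar plant xdot = a*x + u, |u| <= 1; illustrative choice a = 1, xi = 0.5.
# Single-switch candidate: u = -sgn(xi) on [0, tau], u = 0 afterwards.
a, xi = 1.0, 0.5
tau = -np.log(1 - a * abs(xi)) / a            # candidate V(xi)

# Closed form along the bang arc: x(tau) = e^{a tau} xi - (e^{a tau} - 1)/a.
x_tau = np.exp(a * tau) * xi - (np.exp(a * tau) - 1) / a
print(tau, x_tau)                             # the arc lands on the origin

# Midpoint strict-convexity check of V(xi) = -(1/a) log(1 - a|xi|) on a grid:
xs = np.linspace(0.05, 0.9, 50)
V = -np.log(1 - a * xs) / a
V_mid = -np.log(1 - a * (xs[:-1] + xs[1:]) / 2) / a
assert np.all(V_mid < (V[:-1] + V[1:]) / 2)   # strictly below the chord
```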
The maximum hands-off sparse control framework is thus formally defined through the minimization of the control support measure; its value function possesses key analytical properties of continuity and strict convexity under controllability and nonsingularity; its connection to $L^1$-optimal control enables convex computation under normality; its value function enables robust and stabilizing MPC implementations; and a range of computational strategies adapt the framework to broader classes of plants and practical implementation scenarios (Ikeda et al., 2014).