Event-Driven Receding Horizon Control
- Event-Driven Receding Horizon Control is an optimal control paradigm that triggers updates based on state events instead of fixed time intervals, reducing computational and communication burdens.
- It involves solving finite-horizon optimization problems at each event using methods like exact enumeration, heuristic approximations, ADMM formulations, and rollout strategies to manage nonconvex constraints.
- ED-RHC delivers provable stability and near-optimal performance in diverse applications such as multi-agent coordination, networked estimation, ride-sharing, and energy-aware robotic systems.
Event-driven receding horizon control (ED-RHC) is a hybrid optimal control paradigm characterized by intermittent, state-driven control updates rather than periodic time-based scheduling. Updates occur at discrete events defined by the system's evolution or by exogenous and controllable triggers, with a finite-horizon optimization solved at each event to determine near-future control actions. The methodology has been rigorously developed for linear-quadratic systems with limited actuation, multi-agent coordination, networked estimation, persistent monitoring, ride-sharing, and energy-aware robotic settings. Its distinctive features are non-uniform control execution, combinatorial nonconvex optimization, and substantial computational and communication savings, while maintaining provable stability and performance bounds.
1. Mathematical Formulation and Problem Classes
ED-RHC applies to a variety of system models, including discrete-time linear systems, multi-agent networked systems, and hybrid automata. In the canonical linear-quadratic setting, the system evolves as

$$x_{k+1} = A x_k + B u_k,$$

with state $x_k \in \mathbb{R}^n$ and input $u_k \in \mathbb{R}^m$. The cost over a finite horizon $N$ is

$$J = x_N^\top Q_f x_N + \sum_{k=0}^{N-1} \left( x_k^\top Q x_k + u_k^\top R u_k \right).$$

Event-driven control restricts the input $u_k$ to zero at time instants where the state lies inside a dead-zone (e.g., $\|x_k\| \le \varepsilon$). The optimization is

$$\min_{u_0,\dots,u_{N-1}} \; J \quad \text{s.t.} \quad x_{k+1} = A x_k + B u_k, \qquad u_k = 0 \ \text{whenever } \|x_k\| \le \varepsilon,$$

where the feasible set is state-dependent and nonconvex (Demirel et al., 2017).
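The dead-zone constraint can be made concrete with a minimal sketch for a hypothetical scalar instance of the formulation above; the values of `A`, `B`, `Q`, `R`, and `EPS` are illustrative, not taken from the cited papers.

```python
# Minimal sketch of the event-constrained LQ setup (hypothetical scalar system).
A, B = 1.2, 1.0            # scalar dynamics x_{k+1} = A x_k + B u_k
Q, R, EPS = 1.0, 0.1, 0.5  # stage weights and dead-zone radius

def step(x, u):
    """One step of the linear dynamics."""
    return A * x + B * u

def stage_cost(x, u):
    """Quadratic stage cost Q*x^2 + R*u^2."""
    return Q * x * x + R * u * u

def admissible(x, u):
    """Event constraint: inside the dead-zone |x| <= EPS the input must be 0."""
    return abs(x) > EPS or u == 0.0

def trajectory_cost(x0, inputs):
    """Finite-horizon cost of an input sequence, enforcing the dead-zone."""
    x, J = x0, 0.0
    for u in inputs:
        assert admissible(x, u), "input must be zero inside the dead-zone"
        J += stage_cost(x, u)
        x = step(x, u)
    return J + Q * x * x  # quadratic terminal cost

# Example: from x0 = 2.0 (outside the dead-zone) one deadbeat move u0 = -A*x0/B
# drives the state to 0, after which the constraint forces the input to zero.
print(trajectory_cost(2.0, [-A * 2.0 / B, 0.0, 0.0]))
```

Note how the feasible input set depends on the state trajectory itself, which is the source of the nonconvexity mentioned above.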
In networked and multi-agent contexts (e.g., persistent monitoring, distributed estimation), each agent or controller solves distributed subproblems determining dwell times, target assignments, or action horizons based on local events such as arrivals, departures, uncertainty crossings, or neighbor coverage changes (Welikala et al., 2020, Welikala et al., 2021).
2. Event Definitions and Triggering Rules
Events are precisely defined state transitions or system changes that necessitate re-optimization. These include:
- State threshold crossings: When the state $x_k$ exits or enters the dead-zone $\|x_k\| \le \varepsilon$.
- Agent arrivals/departures: In network routing or multi-agent monitoring, events include agents reaching nodes or departing.
- External requests or appearances: New tasks, targets, or disturbances (e.g., passenger requests in ride-sharing (Chen et al., 2019), detection of new targets (Khazaeni et al., 2014)).
- Uncertainty or state milestones: Target uncertainty hitting zero, or estimation error thresholds in distributed estimation (Welikala et al., 2020).
Control is executed open-loop between events; re-optimization is only performed at the occurrence of events, substantially reducing computation relative to periodic MPC.
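The triggering rules above are, in essence, cheap boolean predicates checked at every step; only when one fires is the expensive re-optimization run. A sketch, with placeholder names and thresholds (none taken from the cited papers):

```python
# Illustrative event-trigger predicates for the rule classes listed above.

def threshold_event(x, eps):
    """State threshold crossing: fire when the state leaves the dead-zone."""
    return abs(x) > eps

def arrival_event(agent_pos, node_pos, tol=1e-6):
    """Agent arrival: fire when an agent reaches a node (1-D positions here)."""
    return abs(agent_pos - node_pos) < tol

def uncertainty_event(uncertainty):
    """Uncertainty milestone: fire when a target's uncertainty hits zero."""
    return uncertainty <= 0.0

# Between events the last open-loop plan keeps being applied; re-optimization
# runs only when some predicate fires.
fired = [threshold_event(0.7, 0.5), arrival_event(3.0, 3.0), uncertainty_event(0.2)]
print(fired)  # [True, True, False]
```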
3. Finite-Horizon Optimal Control under Event Constraints
At each event, a finite-horizon open-loop optimal control problem is solved, subject to combinatorial event-driven constraints (e.g., state-dependent settings, feasible agent-task assignments). The optimization domains are typically nonconvex, due to implicit dependencies of the feasible set on future states and inputs.
Solution strategies include:
- Exact enumeration: Disjunctive quadratic programming over all possible event sequences, yielding the global optimum at exponential complexity (Demirel et al., 2017).
- Greedy and heuristic methods: Greedy region assignment, active-target reduction, or finite heading sets in multi-agent and cooperative missions, enabling polynomial-time computation and near-optimality (Khazaeni et al., 2014).
- ADMM-based heuristics: Consensus ADMM formulations, projecting iterates onto the nonconvex event-trigger constraints, with a final QP polish step (Demirel et al., 2017).
- Rollout and combinatorial search: Rollout optimization over binary actuation sequences to balance sparsity of actuation against control performance, with performance guaranteed relative to periodic baselines (Nishida et al., 29 Sep 2025).
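Exact enumeration is easiest to see in a scalar toy problem: for each binary on/off actuation pattern, the optimal cost follows from a modified Riccati recursion, and the global optimum is the best pattern. The system and weights below are illustrative stand-ins, not from the cited work.

```python
from itertools import product

# Exact enumeration over binary actuation patterns for a scalar LQ system
# (disjunctive programming in miniature; all numbers are illustrative).
A, B, Q, R, QF = 1.1, 1.0, 1.0, 0.5, 1.0
SPARSITY_WEIGHT = 0.2  # price per actuated step, promoting sparse actuation

def pattern_cost(x0, pattern):
    """Optimal cost for a fixed on/off pattern via a modified Riccati recursion:
    p_k = Q + A^2 p_{k+1}                       when u_k is forced to 0,
    p_k = Q + A^2 p_{k+1} R / (R + B^2 p_{k+1}) when u_k is free."""
    p = QF
    for on in reversed(pattern):
        if on:
            p = Q + A * A * p * R / (R + B * B * p)
        else:
            p = Q + A * A * p
    return p * x0 * x0 + SPARSITY_WEIGHT * sum(pattern)

def enumerate_patterns(x0, horizon):
    """Global optimum by brute force over all 2^horizon patterns."""
    return min(product((0, 1), repeat=horizon), key=lambda s: pattern_cost(x0, s))

best = enumerate_patterns(1.0, 4)
print(best, pattern_cost(1.0, best))
```

The exponential blow-up is explicit here: doubling the horizon squares the number of patterns, which is why the greedy, ADMM, and rollout alternatives above matter at scale.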
4. Receding Horizon Implementation and Distributed Schemes
ED-RHC operates in a receding horizon loop, triggered only at events:
- Measurement: At each event, measure current state or agent locations.
- Finite-horizon solve: Solve the event-constrained OCP for the horizon or a locally optimal planning window.
- Apply control: Execute only the first control action (or dwell-time/heading), then await next event.
- Repeat: Advance to next event and re-solve.
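The four-step loop above can be sketched for a scalar plant with a threshold trigger; the dynamics, drift term, and the deadbeat "solver" standing in for the finite-horizon OCP are all hypothetical simplifications.

```python
# Skeleton of the event-driven receding horizon loop (illustrative numbers).
A, B, D, EPS = 1.05, 1.0, 0.05, 0.2  # dynamics, constant drift, event threshold

def solve_ocp(x):
    """Placeholder for the finite-horizon solve: return a deadbeat first action."""
    return -A * x / B

def ed_rhc(x0, steps):
    x, solves = x0, 0
    for _ in range(steps):
        if abs(x) > EPS:       # 1) measurement detects a threshold event
            u = solve_ocp(x)   # 2) finite-horizon solve at the event
            solves += 1
        else:
            u = 0.0            # between events: no re-optimization
        x = A * x + B * u + D  # 3) apply first action, 4) advance and repeat
    return x, solves

x_final, n_solves = ed_rhc(1.0, 50)
print(f"{n_solves} solves in 50 steps")
```

The solve counter makes the computational saving visible: the drift repeatedly pushes the state out of the dead-zone, yet the OCP runs only a fraction of the time a periodic scheme would.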
Distributed implementations are prevalent in networked systems, where each agent requires only local data (states of neighboring nodes), admits explicit closed-form or enumerative solutions, and exploits automatic (variable) horizon selection for parameter-free computation (Welikala et al., 2020, Welikala et al., 2021, Welikala et al., 2020).
5. Theoretical Properties: Stability, Optimality, Performance Guarantees
ED-RHC schemes provide rigorous guarantees:
- Practical stability is established under stabilizability and suitable terminal cost/control-law conditions. For discrete-time LQ systems, closed-loop trajectories converge to a norm ball whose radius depends on the event-threshold parameters and the system matrices (Demirel et al., 2017).
- Optimality per event: The event-driven structure allows global optimality of the local subproblems at each event, and automatic planning-horizon optimization yields monotonic improvement of the overall process (Welikala et al., 2020).
- Unimodality: The local RHCPs possess unimodal cost profiles under mild assumptions, enabling efficient scalar or bivariate minimization per agent (Welikala et al., 2020).
- Performance bounds: Rollout-based event-driven controllers have a provably bounded per-step cost gap relative to the optimal periodic controller, while Lyapunov drift conditions ensure mean-square stability (Nishida et al., 29 Sep 2025).
- Non-Zeno behavior: Event-driven updates are guaranteed to occur at nonzero intervals, preventing pathological re-optimization (Welikala et al., 2020).
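The ball-convergence statement can be checked empirically on a scalar toy system: with the control switched off inside the dead-zone and an imperfect (non-deadbeat) gain outside it, the trajectory settles into a ball whose radius is set by the threshold and the dynamics. All numbers below are illustrative.

```python
# Empirical check of practical stability for a dead-zone scheme (scalar sketch).
A, B, EPS = 1.3, 1.0, 0.4

def closed_loop(x0, steps):
    """Dead-zone scheme with an imperfect gain (90% of deadbeat)."""
    xs, x = [x0], x0
    for _ in range(steps):
        u = -0.9 * A * x / B if abs(x) > EPS else 0.0  # off inside dead-zone
        x = A * x + B * u
        xs.append(x)
    return xs

traj = closed_loop(5.0, 100)
# After a short transient the state never leaves the ball of radius |A|*EPS:
# inside the dead-zone it grows by at most a factor |A| from below EPS,
# and any excursion beyond EPS triggers the control on the next step.
print(max(abs(x) for x in traj[2:]) <= abs(A) * EPS)
```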
6. Application Areas and Practical Effectiveness
ED-RHC has been successfully applied across diverse domains:
- Networked linear systems: Threshold-based event-triggered MPC reduces control transmissions and computation, yielding substantial communication savings with limited performance degradation (Demirel et al., 2017).
- Ride-sharing and transport systems: Discrete event-driven RHC drastically reduces optimization frequency and search size compared to time-driven MPC, achieving substantial improvement in weighted-sum metrics over greedy assignment heuristics and enabling real-time implementation on city-scale networks (Chen et al., 2019).
- Multi-agent cooperative reward collection: Event-driven CRH controllers outperform the original infinite-dimensional MPC and greedy cycle planners, with $20$– reward increases in uncertain environments, stabilized trajectories, and reduced computation via finite heading sets (Khazaeni et al., 2014).
- Persistent monitoring and estimation: Event-driven RHC excels in network surveillance tasks with energy-aware or first-order agent dynamics, parametrically trading off uncertainty reduction and energy usage; closed-form optimization, distributed computation, and robustness to disturbances and partial information are demonstrated (Welikala et al., 2021, Welikala et al., 2020).
- Disturbance-aware predictive control: Event-driven model predictive controllers for hybrid automaton-modeled power inverters exploit disturbance estimation via recursive least squares and outperform traditional PWM controllers in tracking error, settling time, and robustness to load shifts (Chen et al., 2020).
- Sparsity-promoting control: Rollout-based event-driven schemes balance actuation frequency and control performance, with explicit stability and near-optimal cost guarantees (Nishida et al., 29 Sep 2025).
7. Design Considerations, Parameter Selection, and Extensions
Key guidelines for ED-RHC design:
- Threshold selection: The event-trigger threshold trades off communication savings against performance; moderate threshold values typically provide substantial savings with modest performance impact (Demirel et al., 2017).
- Horizon choice: Larger finite horizons improve performance and stability but increase computational burden; event-driven approaches with automatic horizon optimization avoid manual tuning (Welikala et al., 2020).
- Solution strategy: Exact enumeration is viable for low-dimensional state spaces; otherwise, greedy, ADMM, rollout, or combinatorial heuristics provide scalable approximations with explicit performance bounds.
- Robustness and feasibility: Distributed event-driven controllers inherently adapt to disturbances by triggering earlier/later updates, and always admit a feasible "stay-put" option (Welikala et al., 2021, Welikala et al., 2020).
- Learning-based acceleration: Machine learning classifiers can be integrated to predict optimal actions per event, reducing computational load by bypassing exhaustive search, with negligible loss in average performance (Welikala et al., 2020).
- Potential extensions: Ongoing developments include incorporating online terminal cost estimation, handling nontrivial agent dynamics (acceleration), adapting to time-varying networks, and integrating high-level task allocation (Welikala et al., 2020).
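The threshold trade-off in the first guideline can be illustrated with a hypothetical sweep over dead-zone radii on a scalar drifting plant (all parameters are made up for illustration): larger thresholds cut the number of control updates but let the state drift further, raising accumulated cost.

```python
# Hypothetical threshold sweep showing the savings-vs-performance trade-off.
A, B, Q, R, D = 1.05, 1.0, 1.0, 0.1, 0.05  # dynamics, weights, constant drift

def simulate(eps, steps=200):
    """Run the dead-zone scheme; return (#updates, accumulated LQ cost)."""
    x, updates, cost = 1.0, 0, 0.0
    for _ in range(steps):
        u = -A * x / B if abs(x) > eps else 0.0  # update only outside dead-zone
        if u != 0.0:
            updates += 1
        cost += Q * x * x + R * u * u
        x = A * x + B * u + D  # drift keeps events recurring
    return updates, cost

for eps in (0.0, 0.1, 0.3, 0.6):
    print(eps, simulate(eps))
```

Sweeps like this are one pragmatic way to pick a threshold in practice: plot updates and cost against the threshold and choose the knee of the curve.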
A plausible implication, given the breadth of applications and robust theoretical underpinnings, is that ED-RHC forms a foundational architecture for scalable, efficient, and provably-stable optimal control in cyber-physical, networked, and multi-agent systems subject to resource-constrained or intermittent actuation scenarios.