Scenario Approach and Discarding
- The scenario approach and discarding method converts chance constraints into deterministic ones by evaluating finite random samples.
- Discarding low-impact constraints reduces computational load and enhances solution performance while maintaining bounded violation risk.
- Truncation techniques using convex hull approximations and cascade discarding ensure significant efficiency gains with rigorous feasibility safeguards.
The scenario approach and discarding methodologies address the computational and statistical challenges of handling chance-constrained optimization and control problems with sampled uncertainty. These frameworks systematically convert probabilistic constraints, which are generally infinite-dimensional, into tractable deterministic constraints by considering random samples (“scenarios”) of the uncertainty and develop principled ways to reduce the scenario set via truncation or discarding, while providing formal guarantees on constraint violation and solution feasibility.
1. Classical Scenario Approach for Chance-Constrained Problems
The classical scenario approach is a randomized method for transforming a chance-constrained program into a deterministic program by replacing uncertain constraints with constraints evaluated at finitely many i.i.d. samples. Given a chance-constrained program of the form

$$\min_{x \in \mathcal{X}} \; c^\top x \quad \text{subject to} \quad \mathbb{P}\{\delta : f(x, \delta) \le 0\} \ge 1 - \epsilon,$$

where $f(x, \delta)$ is convex in $x$ for every $\delta$, the scenario program (denoted SP) samples $N$ i.i.d. uncertainty scenarios $\delta^{(1)}, \dots, \delta^{(N)}$ and solves

$$\min_{x \in \mathcal{X}} \; c^\top x \quad \text{subject to} \quad f(x, \delta^{(i)}) \le 0, \quad i = 1, \dots, N.$$

If $N$ is chosen according to appropriate bounds, the minimizer $x^*_N$ satisfies the original probabilistic constraint with high confidence. The canonical result by Campi and Garatti yields

$$\mathbb{P}^N\{V(x^*_N) > \epsilon\} \le \sum_{i=0}^{d-1} \binom{N}{i} \epsilon^i (1-\epsilon)^{N-i},$$

where $V(x) := \mathbb{P}\{\delta : f(x, \delta) > 0\}$ is the violation probability, $d$ is the dimension of the decision space, and $\epsilon \in (0,1)$, provided the scenario program admits a unique minimizer (Romao et al., 2020).
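For concreteness, the Campi–Garatti bound can be evaluated numerically. The sketch below (function names are our own, and `min_samples` uses a plain linear scan rather than any particular closed-form inversion) computes the right-hand side of the bound and the smallest sample size meeting a target confidence:

```python
from math import comb

def cg_confidence(N: int, d: int, eps: float) -> float:
    """Right-hand side of the Campi-Garatti bound:
    P^N{ V(x*_N) > eps } <= sum_{i=0}^{d-1} C(N, i) eps^i (1-eps)^(N-i)."""
    return sum(comb(N, i) * eps**i * (1 - eps) ** (N - i) for i in range(d))

def min_samples(d: int, eps: float, beta: float) -> int:
    """Smallest N whose bound does not exceed the confidence parameter beta.
    Linear scan is fine here: the bound is strictly decreasing in N."""
    N = d  # at least d samples are needed to pin down the d-dimensional x
    while cg_confidence(N, d, eps) > beta:
        N += 1
    return N

# Example: d = 5 decision variables, 5% violation level, 1e-6 confidence.
N_req = min_samples(d=5, eps=0.05, beta=1e-6)
```

Note that `N_req` scales roughly like $(d + \ln(1/\beta))/\epsilon$, which is why small $\epsilon$ quickly drives the required sample count into the hundreds or thousands.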
2. Motivation and Methods for Discarding Constraints
Direct application of the scenario approach can require a prohibitively large sample size $N$ to ensure tight violation probabilities for small $\epsilon$ and high confidence levels, resulting in significant online computational load. Furthermore, enforcing all sampled constraints often yields overly conservative solutions and suboptimal costs. Sampling-and-discarding methods address these limitations by allowing the deliberate removal of a small number of sampled constraints after solving the full scenario program. The resulting program optimizes over the reduced constraint set, potentially improving performance at the expense of a modestly increased violation risk, which is bounded a priori (Romao et al., 2020).
Existing bounds (e.g., Campi–Garatti 2011) for the probability of exceeding the violation threshold after discarding $k$ constraints are of the form

$$\mathbb{P}^N\{V(x^*_{N,k}) > \epsilon\} \le \binom{k+d-1}{k} \sum_{i=0}^{k+d-1} \binom{N}{i} \epsilon^i (1-\epsilon)^{N-i},$$

but this can be highly conservative due to the combinatorial factor $\binom{k+d-1}{k}$.
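The size of that combinatorial factor is easy to underestimate; the short sketch below (helper name is ours) evaluates the 2011 bound and shows how fast the prefactor grows for moderate $d$ and $k$:

```python
from math import comb

def discard_bound_2011(N: int, d: int, k: int, eps: float) -> float:
    """Campi-Garatti (2011) sampling-and-discarding bound on
    P^N{ V(x*_{N,k}) > eps } after removing k of the N sampled constraints."""
    tail = sum(comb(N, i) * eps**i * (1 - eps) ** (N - i) for i in range(k + d))
    return comb(k + d - 1, k) * tail

# The combinatorial prefactor C(k+d-1, k) is the source of conservatism:
factor = comb(100 + 10 - 1, 100)  # d = 10, k = 100 discarded constraints
```

For $k = 0$ the prefactor is $1$ and the bound collapses to the classical scenario bound, while for $d = 10$, $k = 100$ the prefactor alone exceeds $10^{12}$.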
3. Truncation and Approximate Convex Hull Methods
Recent advances formalize sample truncation as a structured means to reduce the number of scenarios enforced online, especially in the context of closed-loop, disturbance-feedback chance-constrained trajectory optimization. Instead of discarding arbitrary samples, truncation identifies the most informative scenarios through an importance mapping that quantifies the impact of each sample on the constraints of interest. The goal is to select a minimal subset whose convex hull approximates the original scenario hull to within a prescribed Hausdorff distance $\epsilon$ (an "$\epsilon$-approximate convex hull," or $\epsilon$-ACH).
This approach proceeds as follows (Sartipizadeh et al., 2018):
- Map each disturbance sample through the importance mapping to a point whose extremality reflects its impact on the relevant constraints.
- Greedily select a minimal subset whose convex hull approximates that of all mapped samples within the allowable error.
- Scenarios not selected are truncated offline.
- The error introduced by truncation is quantified and used to construct buffer “safe sets” that tighten the original constraints, guaranteeing conservative feasibility for all samples.
Buffer sizes are functions of the truncation error and the feedback gain norm, ensuring feasibility for both state and input constraints under the reduced scenario set.
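The greedy selection step can be sketched as follows. This is an illustrative implementation, not the cited construction: the function names, the Frank–Wolfe/Gilbert-style distance estimate, and the 2-D test data are our own choices, made so the example stays self-contained.

```python
import numpy as np

def dist_to_hull(p, S, iters=200):
    """Distance from point p to conv(S), estimated by a Frank-Wolfe /
    Gilbert-style iteration: keep a feasible x in conv(S) and repeatedly
    move it toward p along the best vertex direction with exact line search."""
    x = S[0].astype(float).copy()
    for _ in range(iters):
        g = x - p                   # gradient of 0.5 * ||x - p||^2
        s = S[np.argmin(S @ g)]     # vertex of S minimizing the linearization
        d = s - x
        dd = float(d @ d)
        if dd < 1e-16:
            break
        gamma = min(1.0, max(0.0, float(-(g @ d)) / dd))
        if gamma == 0.0:            # no descent direction left: x is optimal
            break
        x = x + gamma * d
    return float(np.linalg.norm(x - p))

def greedy_ach(points, eps):
    """Greedy eps-approximate convex hull: starting from one sample, keep
    adding the sample farthest from the hull of the selected subset until
    every sample lies within Hausdorff distance eps of that hull."""
    selected = [0]                  # seed with an arbitrary sample
    while True:
        dists = np.array([dist_to_hull(p, points[selected]) for p in points])
        dists[selected] = 0.0       # selected samples are in the hull trivially
        worst = int(np.argmax(dists))
        if dists[worst] <= eps:
            return selected
        selected.append(worst)

# Toy data: the four corners of the unit square plus random interior samples.
rng = np.random.default_rng(0)
pts = np.vstack([np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float),
                 rng.random((40, 2))])
sel = greedy_ach(pts, eps=0.1)
```

On this data the selected subset is a small fraction of the samples (roughly the extreme points), and every truncated sample is provably within `eps` of the retained hull, which is exactly the quantity the buffer sets are sized against.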
4. Cascade Discarding and Tight Bounds via Compression Learning
A precise discarding methodology can be constructed by removing constraint batches that coincide with the support sets of consecutive optimization problems (the "cascade" approach). For convex scenario programs with $N$ samples, removing a total of $k$ constraints over successive rounds (each batch of size at most $d$, the dimension of $x$), and defining the final solution as that of the reduced program, the probability-of-violation bound admits significant improvement over existing results.
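The cascade can be illustrated on the simplest possible scenario program, where the mechanics are transparent (a toy instance of our own construction, not the resource-sharing example from the paper):

```python
import numpy as np

def cascade_discard(samples, rounds):
    """Cascade discarding for the toy scenario program
        min x   s.t.   x >= s_i  for every retained sample s_i.
    The decision variable is scalar (d = 1), so each support set is the
    single largest retained sample; every round removes that support set
    (a batch of size d = 1) and re-solves the reduced program."""
    retained = sorted(samples)
    for _ in range(rounds):
        retained.pop()           # the optimizer is max(retained): remove it
    return max(retained)         # solution of the final reduced program

rng = np.random.default_rng(1)
s = rng.standard_normal(1000)
x_50 = cascade_discard(s, rounds=50)   # discard k = 50 support constraints
```

After $k$ rounds the solution is the $(k+1)$-th largest sample, so exactly $k$ of the $N$ samples violate it; the violation probability of the returned solution grows in a controlled, predictable way with the number of removed batches.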
By appealing to sample compression learning, it is shown that the final solution is generated by a compression set of size at most $k + d$. The tight violation bound is then (Romao et al., 2020):

$$\mathbb{P}^N\{V(x^*_{N,k}) > \epsilon\} \le \sum_{i=0}^{k+d-1} \binom{N}{i} \epsilon^i (1-\epsilon)^{N-i}.$$

This removes the combinatorial factor $\binom{k+d-1}{k}$ of previous results and is tight under the "support always violated" condition.
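The improvement is easy to quantify numerically; the sketch below (helper names are ours) evaluates both bounds side by side for parameters in the range discussed later:

```python
from math import comb

def binom_tail(N, m, eps):
    """sum_{i=0}^{m-1} C(N, i) eps^i (1 - eps)^(N - i)."""
    return sum(comb(N, i) * eps**i * (1 - eps) ** (N - i) for i in range(m))

def bound_2011(N, d, k, eps):
    """Campi-Garatti (2011) bound, with the combinatorial prefactor."""
    return comb(k + d - 1, k) * binom_tail(N, k + d, eps)

def bound_tight(N, d, k, eps):
    """Compression-based bound: compression set of size k + d, no prefactor."""
    return binom_tail(N, k + d, eps)

N, d, k, eps = 2000, 2, 100, 0.1
ratio = bound_2011(N, d, k, eps) / bound_tight(N, d, k, eps)
# ratio equals the removed prefactor C(k+d-1, k) = C(101, 100) = 101
```

For $d = 2$ the prefactor is modest, but it grows combinatorially with $d$, which is why the tight bound permits far more discarding in higher-dimensional problems at the same confidence level.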
5. Theoretical Guarantees and Problem Reformulation
The integrity of chance-constraint satisfaction is preserved in both truncation and systematic discarding approaches. For truncated scenario sets with analytic buffer sets constructed as described, the probabilistic constraint satisfaction guarantees are maintained: with confidence at least $1-\beta$, the probability that the optimized solution violates the original constraint is at most $\epsilon$, provided the underlying sample size satisfies the classical scenario bound (Sartipizadeh et al., 2018).
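Independently of the truncation machinery, the underlying classical guarantee can be sanity-checked by Monte Carlo in the simplest $d = 1$ setting (a toy check of our own, where the violation probability is available in closed form):

```python
import random

def freq_violation_exceeds(N, eps, trials, seed=0):
    """Monte Carlo frequency of the event { V(x*_N) > eps } for the toy
    scenario program  min x  s.t.  x >= s_i,  s_i ~ Uniform[0, 1].
    The optimizer is x*_N = max_i s_i with V(x*_N) = 1 - x*_N, so the
    event { V > eps } is simply { max_i s_i < 1 - eps }."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x_star = max(rng.random() for _ in range(N))
        hits += (1.0 - x_star) > eps
    return hits / trials

freq = freq_violation_exceeds(N=50, eps=0.1, trials=20000)
# the exact probability is (1 - eps)^N = 0.9**50, which is also the d = 1
# scenario bound (the sum reduces to its single i = 0 term); the problem is
# fully supported, so the bound holds with equality here
```

The empirical frequency matches $(1-\epsilon)^N$ closely, illustrating that the scenario bound is not merely an upper estimate but tight for fully-supported problems.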
The final optimization problem remains convex (a QP or SOCP, depending on the cost): the original objective is minimized subject to the buffer-tightened constraints, enforced only at the truncated scenario set.
Similar tightness guarantees hold for the cascade discarding method whenever the active constraints are removed in batches corresponding to support sets.
6. Computational Implications and Illustrative Examples
Both truncation and structured discarding approaches deliver dramatic computational savings. After offline truncation or discarding, the online constraint count is reduced from the full sample size $N$ to the much smaller number of retained scenarios, and buffer calculations are handled efficiently using basic vector operations and, in the truncation case, a greedy $\epsilon$-ACH construction (Sartipizadeh et al., 2018). In a typical double-integrator example with Gaussian disturbances, the enforced scenario count can be reduced to roughly $20$ with negligible loss of statistical guarantee and a significant reduction in solution times.
For the cascade discarding method, a resource-sharing linear program (with $d$ variables and $N$ samples) under the new tight bound permits discarding substantially more constraints while maintaining breach probabilities (and thus confidence levels) that the much more conservative older bounds could certify only for far fewer removals (Romao et al., 2020).
| Method | Maximum Discarded Constraints | Tightness of Bound | Example ($d=2$, $N=2000$) Cost Increase |
|---|---|---|---|
| Classical (no discard) | $0$ | Tight | – |
| Campi–Garatti (2011) | – | Conservative | – |
| Tight bound (cascade) | – | Tight for support cases | – |
7. Extensions, Applicability, and Open Problems
The scenario approach with truncation and structured discarding is broadly applicable in convex problems where the number and structure of active constraints (support sets) can be characterized, and sample importances can be mapped in a manner that admits convex hull and buffer analysis.
Open questions include the design and analysis of removal schemes that discard constraints one-by-one (rather than in batches) while retaining tightness; development of efficient algorithms for support identification in cases with multiple candidate support sets; and extension to nonconvex chance constraints through hierarchical or mixed-integer scenario programs (Romao et al., 2020).
A plausible implication is that advances in sample compression theory and efficient buffer construction may further reduce the conservativeness and computational burden of scenario-based chance-constrained optimization in high dimensions and closed-loop settings.