Stable Sparse RRT (SST)
- SST is a sampling-based motion planning algorithm that uses a fixed witness set to sparsely represent the state space and prune high-cost trajectories.
- It employs a best-near selection and extension procedure to efficiently grow the search tree while ensuring asymptotic near-optimality under mild conditions.
- HySST extends the approach to hybrid systems by integrating continuous flows and discrete jumps, optimizing motion planning in complex dynamic environments.
Stable Sparse Rapidly-exploring Random Trees (SST) is an advanced sampling-based motion planning algorithm designed for optimal and efficient path finding in high-dimensional state spaces, including those governed by hybrid dynamical systems. Unlike classical RRT and RRT* approaches, SST introduces a sparsification mechanism via a static set of witness points to prune high-cost trajectories, achieving provable asymptotic near-optimality under mild conditions and avoiding excessive tree growth as solutions are refined. Its extension to hybrid dynamics, known as HySST, integrates both continuous (flow) and discrete (jump) state transitions, accommodating systems with mode switches and complex behaviors (Wang et al., 2023).
1. Data Structures and Key Notation
SST operates on a search tree T = (V, E), where:
- V is the finite set of tree vertices, each vertex v representing a system state together with a cost-to-come cost(v) for the trajectory concatenated from the root to v.
- E ⊆ V × V are directed edges denoting feasible state transitions determined by the dynamic constraints.
A central feature is the fixed witness set S, with each witness s ∈ S associated to a single current representative vertex rep(s). The witness set enforces sparsity: every vertex lies within radius δ_s of some witness, and only the lowest-cost vertex in each witness neighborhood ball is retained as active. Vertices are partitioned into V_active (eligible for extension) and V_inactive (preserving tree connectivity but not eligible for further growth).
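As a minimal sketch of this bookkeeping (the class names, fields, and helper below are illustrative, not taken from the cited papers):

```python
from __future__ import annotations

import math
from dataclasses import dataclass


@dataclass
class Vertex:
    state: tuple            # system state
    cost: float             # cost-to-come from the root
    parent: Vertex | None = None
    active: bool = True     # eligible for best-near selection and extension


@dataclass
class Witness:
    state: tuple
    rep: Vertex | None = None  # lowest-cost representative in its delta_s ball


def nearest_witness(witnesses: list[Witness], state: tuple) -> Witness:
    """Return the witness closest to `state` (naive linear scan)."""
    return min(witnesses, key=lambda w: math.dist(w.state, state))
```

Deactivating a vertex (rather than deleting it) is what preserves tree connectivity while keeping it out of future selections.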
2. Algorithm Workflow and Pseudocode
The SST (and HySST) algorithm proceeds as follows:
- Initialization
- Sample a start state x_0; initialize the tree with a root vertex for x_0, set cost(x_0) = 0, and add a witness at x_0 accordingly.
- Main Loop (Iteration for )
- Sampling: Draw a random sample x_rand from the free state space (for HySST, the jump regime may be sampled with some fixed probability).
- Best-Near Selection: Identify candidate active vertices within radius δ_BN of x_rand and pick the one with minimal cost-to-come as x_sel. If none exists, select the nearest active vertex by Euclidean distance.
- Extend: Apply a random control input to x_sel for a random duration, producing a new trajectory segment ending at x_new. Update cost: cost(x_new) = cost(x_sel) + cost of the new segment.
- Prune-and-Add:
- Find the nearest witness s to x_new.
- If dist(x_new, s) > δ_s, create a new witness at x_new and make x_new its active representative.
- Else if cost(x_new) < cost(rep(s)), make x_new the representative, move the previous representative to V_inactive, and recursively prune inactive leaves.
- Otherwise, discard x_new.
A candidate solution is extracted by searching for a goal-reaching vertex of minimal cost and reconstructing its trajectory back to the root.
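The loop above can be sketched end to end on a toy single-integrator system. Everything here is an illustrative assumption: the radii, the dynamics in `extend`, and the use of trajectory duration as the segment cost; recursive leaf pruning is omitted for brevity.

```python
import math
import random

DELTA_S, DELTA_BN = 0.15, 0.3    # assumed witness / best-near radii


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


class Vertex:
    def __init__(self, state, cost, parent=None):
        self.state, self.cost, self.parent = state, cost, parent
        self.active = True


def extend(v, rng):
    # Toy single-integrator dynamics: random heading for a random duration;
    # the duration doubles as the segment's (time) cost.
    theta, dt = rng.uniform(0.0, 2.0 * math.pi), rng.uniform(0.05, 0.3)
    state = (v.state[0] + dt * math.cos(theta),
             v.state[1] + dt * math.sin(theta))
    return Vertex(state, v.cost + dt, parent=v)


def sst_iteration(vertices, witnesses, rng):
    x_rand = (rng.random(), rng.random())            # sample the unit square
    active = [v for v in vertices if v.active]
    near = [v for v in active if dist(v.state, x_rand) <= DELTA_BN]
    # Best-near selection: cheapest vertex near the sample, else the nearest.
    x_sel = (min(near, key=lambda v: v.cost) if near
             else min(active, key=lambda v: dist(v.state, x_rand)))
    x_new = extend(x_sel, rng)
    s = min(witnesses, key=lambda w: dist(w["state"], x_new.state))
    if dist(s["state"], x_new.state) > DELTA_S:
        witnesses.append({"state": x_new.state, "rep": x_new})  # new witness
        vertices.append(x_new)
    elif x_new.cost < s["rep"].cost:
        s["rep"].active = False                      # demote the old rep
        s["rep"] = x_new
        vertices.append(x_new)
    # else: discard x_new (recursive pruning of inactive leaves omitted)
```

Note that the witness set only ever grows when a vertex lands farther than δ_s from every existing witness, which is exactly what keeps it sparse.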
3. Formal Properties and Theoretical Guarantees
- Cost-to-Come: For a vertex v with parent u and connecting trajectory segment σ, cost(v) = cost(u) + c(σ); by induction, this accumulates the total cost of the trajectory leading from the root to v.
- Witness Sparsity: For each witness s ∈ S, the ball of radius δ_s around s covers its associated vertices, and distinct witnesses maintain mutual separation of at least δ_s.
- Asymptotic Near-Optimality (Main Theorem): Under assumptions of Lipschitz dynamics, additive cost, positive clearance δ along the optimal plan, and the parameter constraint δ_BN + δ_s < δ, for any ε > 0,
  lim_{k→∞} Pr[ c_k ≤ (1 + ε) · c* ] = 1,
  where c_k is the best solution cost in the tree after k iterations and c* is the infimal cost among all feasible plans (Wang et al., 2023).
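The cost-to-come recursion can be spelled out in a few lines; the segment costs below are made-up numbers purely for illustration:

```python
class Node:
    """Tree vertex carrying cost-to-come via its parent pointer."""

    def __init__(self, parent=None, edge_cost=0.0):
        self.parent = parent
        # cost-to-come: parent's accumulated cost plus the new segment's cost
        self.cost = (parent.cost if parent else 0.0) + edge_cost


root = Node()                 # cost 0 at the root
a = Node(root, edge_cost=1.5)
b = Node(a, edge_cost=2.0)    # total cost 1.5 + 2.0 = 3.5
```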
4. Proof Techniques and Underlying Principles
Established proof strategies utilize:
- Construction of a chain of overlapping balls along an optimal plan, exploiting its positive clearance δ.
- Inductive arguments show that, given an active vertex within the i-th ball, there is a fixed positive probability of extension into the (i+1)-th ball, leveraging uniform sampling, Lipschitz continuity, and the Extend procedure.
- Stability is maintained since pruning only discards inferior-cost vertices; best-cost representatives persist throughout search.
- Markov chain and Borel-Cantelli lemma arguments guarantee that, over infinite iterations, near-optimal solutions are discovered.
A plausible implication is that witness-based pruning reduces unnecessary branches, focusing computational effort on promising trajectories.
5. Computational Complexity and Parameter Selection
Per iteration, SST requires:
- Nearest neighbor search over V_active within radius δ_BN (naively O(|V|), improved via spatial indexing such as KD-trees).
- Nearest witness search over S (naively O(|S|)), generally tractable since |S| grows far more slowly than the total number of samples.
- Simulation cost for the Extend procedure.
Memory complexity scales as O(|V| + |S|), with |S| typically bounded by the packing number of the free state space at resolution δ_s.
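The packing-number bound on |S| can be checked empirically. The sketch below greedily inserts witnesses (mirroring SST's rule: a state becomes a witness only if it is farther than δ_s from every existing one) in an assumed unit square with an assumed δ_s = 0.1; disjoint balls of radius δ_s/2 around the witnesses must fit in a slightly inflated square, which gives an area-based cap:

```python
import math
import random


def build_witnesses(points, delta_s):
    """Greedy witness insertion: a point becomes a witness only if it is
    farther than delta_s from every existing witness."""
    witnesses = []
    for p in points:
        if all(math.hypot(p[0] - w[0], p[1] - w[1]) > delta_s
               for w in witnesses):
            witnesses.append(p)
    return witnesses


DELTA_S = 0.1
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(5000)]
ws = build_witnesses(pts, DELTA_S)

# Disjoint balls of radius delta_s/2 centered on witnesses lie inside the
# unit square inflated by delta_s/2 on each side, so:
packing_bound = (1 + DELTA_S) ** 2 / (math.pi * (DELTA_S / 2) ** 2)
```

However many points are streamed in, the witness count stays below `packing_bound` (about 154 here), which is the sparsity guarantee in miniature.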
Parameter tuning:
- δ_s (witness radius): Controls sparsity; smaller values yield a finer approximation but an increased vertex count.
- δ_BN (best-near radius): Regulates exploration versus exploitation; too small impedes growth, too large risks committing to suboptimal branches.
- The condition δ_BN + δ_s < δ (where δ is the optimal-plan clearance) is required for the convergence guarantees. Practically, δ_s is chosen as a small fraction of the state-space diameter, with δ_BN set to a small multiple of δ_s.
6. Applications to Hybrid Dynamical Systems
HySST generalizes SST to hybrid systems with both flow (continuous evolution) and jump (discrete transitions) regimes. At each extension, the algorithm may randomly select flow or jump, dynamically growing the tree across hybrid state spaces.
- Actuated Bouncing Ball: The state comprises the ball's height and velocity, with jumps modeling impacts. HySST identifies single-bounce optimal plans with O(200) vertices, outperforming unpruned HyRRT, which requires substantially more (Wang et al., 2023).
- Collision-Resilient Tensegrity Multicopter: The state includes position, velocity, and acceleration, with jumps modeling wall collisions. HySST discovers wall-assisted shortening of flight time while controlling tree growth in a high-dimensional (6D) state space.
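A hybrid extension step of the bouncing-ball flavor can be sketched as follows. The physical parameters, the jump condition, and the actuation impulse range are all assumptions for illustration, not the model from Wang et al. (2023):

```python
import random

G, RESTITUTION = 9.81, 0.8   # assumed gravity and restitution coefficient


def flow(state, dt):
    """Continuous regime: ballistic flight of (height h, velocity v)."""
    h, v = state
    return (h + v * dt - 0.5 * G * dt * dt, v - G * dt)


def jump(state, u=0.0):
    """Discrete regime: ground impact; u is an actuation impulse."""
    h, v = state
    return (0.0, -RESTITUTION * v + u)


def hybrid_extend(state, rng):
    """HySST-style extension: apply the jump map when the state is in the
    jump set (at or below the ground, moving downward), otherwise flow for
    a random duration."""
    h, v = state
    if h <= 0.0 and v <= 0.0:
        return jump(state, u=rng.uniform(0.0, 2.0)), "jump"
    return flow(state, rng.uniform(0.01, 0.1)), "flow"
```

Running `hybrid_extend` repeatedly traces a hybrid trajectory that alternates flight segments with impacts, which is exactly the kind of branch HySST grows and prunes.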
Summary Table: Example Domains and HySST Features
| Application Domain | Hybrid Elements | HySST Performance |
|---|---|---|
| Actuated Bouncing Ball | Flow & Jump | Single-bounce plan, O(200) vertices |
| Tensegrity Multicopter | Flow & Collision | Efficient bounce exploitation in 6D |
7. Context and Related Research
SST and HySST are grounded in the theoretical foundation of sampling-based kinodynamic planning as discussed by Li, Littlefield, and Bekris (IJRR 2016). The sparsification and pruning principles adapted to hybrid systems by Wang and Sanfelice (UCSC TR-HSL-02-2023) broaden their applicability to real-world robotic domains where discrete transitions and nontrivial cost landscapes are present.
This suggests future work may further generalize SST to richer hybrid systems or integrate adaptive sparsification, but all concrete claims strictly trace to the above sources (Wang et al., 2023).