
Stable Sparse RRT (SST)

Updated 20 January 2026
  • SST is a sampling-based motion planning algorithm that uses a sparse witness set to represent the explored state space and prune high-cost trajectories.
  • It employs a best-near selection and extension procedure to efficiently grow the search tree while ensuring asymptotic near-optimality under mild conditions.
  • HySST extends the approach to hybrid systems by integrating continuous flows and discrete jumps, optimizing motion planning in complex dynamic environments.

Stable Sparse Rapidly-Exploring Random Trees (SST) is an advanced sampling-based motion planning algorithm designed for efficient, near-optimal path finding in high-dimensional state spaces, including those governed by hybrid dynamical systems. Unlike classical RRT and RRT* approaches, SST introduces a sparsification mechanism via a set of witness points to prune high-cost trajectories, achieving provable asymptotic near-optimality under mild conditions while avoiding excessive tree growth as solutions are refined. Its extension to hybrid dynamics, known as HySST, randomly interleaves continuous (flow) and discrete (jump) state transitions, accommodating systems with mode switches and complex behaviors (Wang et al., 2023).

1. Data Structures and Key Notation

SST operates on a search tree $\mathcal{T} = (V, E)$, where:

  • $V \subset \mathcal{X} \subseteq \mathbb{R}^n$ is the finite set of tree vertices, each representing a system state $x_v$ and a cost-to-come $\hat{c}_v = c(\sigma_v)$ for the trajectory segment $\sigma_v$ concatenated from root to $v$.
  • $E \subset V \times V$ is the set of directed edges denoting feasible state transitions determined by dynamic constraints.

A central feature is the witness set $W \subset \mathbb{R}^n$: witnesses are added as the tree grows but are never moved or deleted, and each witness $w$ is associated with a single current representative $\mathrm{rep}(w) \in V$. The witness set enforces sparsity: every vertex lies within radius $\delta_s$ of some witness, and only the lowest-cost vertex in each witness neighborhood ball $B(w, \delta_s)$ is retained as active. Vertices are partitioned into $V_\text{active}$ (eligible for extension) and $V_\text{inactive}$ (preserved for tree connectivity but not eligible for further growth).
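A minimal sketch of these data structures in Python; the class names `Vertex` and `Witness` and the field layout are our own illustration, not the source's implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vertex:
    """A tree vertex: a state x_v, its cost-to-come, and its tree links."""
    state: tuple                        # x_v in X ⊆ R^n
    cost: float                         # cost-to-come along the root-to-v trajectory
    parent: Optional["Vertex"] = None   # defines the edge set E implicitly
    active: bool = True                 # V_active (extendable) vs V_inactive

@dataclass
class Witness:
    """A witness point w with its single current representative rep(w)."""
    point: tuple                        # w in R^n
    rep: Optional[Vertex] = None        # lowest-cost vertex found in B(w, delta_s)
```

Here the active/inactive partition is stored as a flag on each vertex, and the witness-to-representative map is one field per witness; a real implementation would typically also index vertices spatially for nearest-neighbor queries.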

2. Algorithm Workflow and Pseudocode

The SST (and HySST) algorithm proceeds as follows:

  1. Initialization
    • Sample start states $X_0$; initialize the tree with a root vertex $v_0$ for each $x_0 \in X_0$, set $\hat{c}_{v_0} = 0$, and add a witness for each root.
  2. Main Loop (iterate for $k = 1, \ldots, K$)
    • Sampling: Draw a random sample $x_\text{rand}$ from the free state space (for HySST, discrete jump states may be sampled with probability $p$).
    • Best-Near Selection: Among active vertices within radius $\delta_{BN}$ of $x_\text{rand}$, pick the one with minimal cost-to-come $\hat{c}_v$ as $v_\text{near}$. If none exists, select the nearest active vertex $v_\text{near}$ by Euclidean distance.
    • Extend: Apply a random control input from $x_{v_\text{near}}$, producing a new trajectory segment $(\sigma, x_\text{new}, \Delta c)$. Update the cost: $\text{cost}_\text{new} = \hat{c}_{v_\text{near}} + \Delta c$.
    • Prune-and-Add:
      • Find the nearest witness $w^*$ to $x_\text{new}$.
      • If $\|x_\text{new} - w^*\| > \delta_s$, create a new witness at $x_\text{new}$ and make the corresponding new vertex its active representative.
      • Else if $\text{cost}_\text{new} < \hat{c}_{\mathrm{rep}(w^*)}$, make the new vertex the representative, move the previous one to $V_\text{inactive}$, and recursively prune childless inactive leaves.
      • Otherwise, discard $x_\text{new}$.

A candidate solution is extracted by searching for a goal-reaching vertex $v_\text{goal}$ and reconstructing the trajectory from root to it.
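The main loop can be sketched in plain Python for a 2D Euclidean toy problem. The single-integrator extend step, the dictionary-based bookkeeping, and all helper names are illustrative assumptions, and the recursive leaf pruning is omitted for brevity:

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sst_iteration(active, witnesses, delta_bn, delta_s, rng):
    """One SST iteration: sample, best-near select, extend, prune-and-add.
    `active` maps state -> cost-to-come; `witnesses` maps each witness point
    to (best cost, representative state) within its delta_s-ball."""
    x_rand = tuple(rng.uniform(0.0, 1.0) for _ in range(2))

    # Best-near selection: cheapest active vertex within delta_bn, else nearest.
    near = [x for x in active if dist(x, x_rand) <= delta_bn]
    x_near = (min(near, key=active.get) if near
              else min(active, key=lambda x: dist(x, x_rand)))

    # Extend: toy single-integrator step in a random direction; the cost
    # increment is the distance travelled (an assumption for this sketch).
    step = 0.05
    theta = rng.uniform(0.0, 2.0 * math.pi)
    x_new = (x_near[0] + step * math.cos(theta),
             x_near[1] + step * math.sin(theta))
    cost_new = active[x_near] + step

    # Prune-and-add against the nearest witness.
    w_star = min(witnesses, key=lambda w: dist(w, x_new))
    if dist(x_new, w_star) > delta_s:
        witnesses[x_new] = (cost_new, x_new)   # new witness; x_new is its rep
        active[x_new] = cost_new
    elif cost_new < witnesses[w_star][0]:
        _, old_rep = witnesses[w_star]
        active.pop(old_rep, None)              # old representative goes inactive
        witnesses[w_star] = (cost_new, x_new)
        active[x_new] = cost_new
    # else: discard x_new
```

Running a few hundred iterations from a single root at the origin grows a tree whose active set keeps exactly one representative per witness, which is the sparsity invariant the pruning step maintains.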

3. Formal Properties and Theoretical Guarantees

  • Cost-to-Come: For $v$ with parent $u$ and edge $e = (u, v)$, $\hat{c}_v = \hat{c}_u + c(\sigma_e)$; by induction, this accumulates the total cost of the trajectory from the root to $v$.
  • Witness Sparsity: the balls $B(w, \delta_s)$, $w \in W$, jointly cover all vertices, and distinct witnesses maintain mutual separation of at least $\delta_s$.
  • Asymptotic Near-Optimality (Main Theorem): Under assumptions of Lipschitz dynamics, additive cost, clearance $\delta > 0$ along the optimal plan, and the parameter constraint $\delta_{BN} + 2\delta_s < \delta$, for any $\epsilon > 0$,

$$\lim_{K \to \infty} P\left(\min\{\hat{c}_v : x_v \in X_f\} \leq c^* + \epsilon\right) = 1,$$

where $c^*$ is the infimal cost over all feasible plans (Wang et al., 2023).

4. Proof Techniques and Underlying Principles

Established proof strategies utilize:

  • A chain of overlapping balls of radius $\delta$ along an optimal plan $\varphi^*$, exploiting its positive clearance.
  • Inductive arguments showing that, given an active vertex within the $i$th ball, there is a fixed positive probability of extending into the $(i+1)$th ball, leveraging uniform sampling, Lipschitz continuity, and the Extend procedure.
  • Stability is maintained since pruning only discards inferior-cost vertices; best-cost representatives persist throughout search.
  • Markov chain and Borel-Cantelli lemma arguments guarantee—over infinite iterations—the discovery of near-optimal solutions.
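The covering argument can be illustrated numerically: if each iteration independently advances the frontier of reached balls with some fixed probability $p$, the probability of traversing all $m$ balls within $K$ iterations tends to 1 as $K \to \infty$. The following toy Monte Carlo is our illustration of that intuition, not the source's proof:

```python
import random

def coverage_prob(m, p, K, trials, rng):
    """Estimate P(all m balls are reached within K iterations) when each
    iteration advances the frontier with independent probability p."""
    hits = 0
    for _ in range(trials):
        advanced = 0
        for _ in range(K):
            if rng.random() < p:
                advanced += 1
                if advanced == m:      # reached the final ball
                    hits += 1
                    break
    return hits / trials

rng = random.Random(0)
# Success probability grows toward 1 as the iteration budget K increases.
probs = [coverage_prob(m=10, p=0.05, K=K, trials=1000, rng=rng)
         for K in (100, 400, 2000)]
```

With a mean of $pK$ advances, the estimate is small for $K = 100$ (mean 5 advances, 10 needed) and approaches 1 for $K = 2000$, mirroring the Borel-Cantelli-style limit in the theorem.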

A plausible implication is that witness-based pruning removes redundant branches, focusing computational effort on promising trajectories.

5. Computational Complexity and Parameter Selection

Per iteration, SST requires:

  • Nearest-neighbor search in $V_\text{active}$ within radius $\delta_{BN}$ (naively $O(|V_\text{active}|)$, improved via spatial indexing such as KD-trees).
  • Nearest-witness search ($O(|W|)$ naively), generally tractable since $|W|$ is far smaller than the total number of samples.
  • Simulation cost for the Extend procedure.

Memory complexity scales as $O(|W| + |V_\text{active}| + |V_\text{inactive}|)$, with $|V_\text{active}|$ typically bounded by the packing number of $\mathcal{X}$ at resolution $\delta_s$.

Parameter tuning:

  • $\delta_s$ (witness radius): controls sparsity; smaller values yield a finer approximation at the cost of a larger vertex count.
  • $\delta_{BN}$ (best-near radius): regulates exploration versus exploitation; too small impedes growth, too large risks committing to suboptimal branches.
  • The condition $\delta_{BN} + 2\delta_s < \delta_\text{opt}$ (the clearance of the optimal plan) is required for the convergence guarantees. Practically, $\delta_s \sim 1\text{--}5\%$ of the state-space diameter and $\delta_{BN} \lesssim 2\delta_s$ are recommended.
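These rules of thumb can be captured in a small configuration check; the function name and the exact heuristic bounds encoded here are illustrative assumptions:

```python
def check_sst_params(delta_s, delta_bn, clearance, diameter):
    """Validate SST parameters: the convergence condition
    delta_bn + 2*delta_s < clearance, plus the rule-of-thumb ranges."""
    ok_convergence = delta_bn + 2.0 * delta_s < clearance
    ok_witness = 0.01 * diameter <= delta_s <= 0.05 * diameter  # ~1-5% heuristic
    ok_best_near = delta_bn <= 2.0 * delta_s
    return ok_convergence, ok_witness, ok_best_near

# Example: unit-diameter state space, optimal-plan clearance 0.2.
result = check_sst_params(delta_s=0.03, delta_bn=0.05, clearance=0.2, diameter=1.0)
# -> (True, True, True): 0.05 + 0.06 < 0.2, 0.01 <= 0.03 <= 0.05, 0.05 <= 0.06
```

Such a check is cheap insurance: a parameter pair violating $\delta_{BN} + 2\delta_s < \delta_\text{opt}$ silently forfeits the near-optimality guarantee even though the planner still runs.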

6. Applications to Hybrid Dynamical Systems

HySST generalizes SST to hybrid systems with both flow (continuous evolution) and jump (discrete transitions) regimes. At each extension, the algorithm may randomly select flow or jump, dynamically growing the tree across hybrid state spaces.

  • Actuated Bouncing Ball: state $(\text{height}, \text{velocity}, \text{auxiliary time } \tau, \text{jump count } k)$; the cost function is $\tau + k$. HySST identifies single-bounce optimal plans with $\sim 200$ vertices, outperforming unpruned HyRRT, which requires $\sim 600$ (Wang et al., 2023).
  • Collision-Resilient Tensegrity Multicopter: the state includes position, velocity, and acceleration, with jumps modeling wall collisions. HySST discovers wall-assisted shortening of flight time while keeping tree growth under control in the 6D state space.
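The flow/jump extension mechanism can be sketched with a toy 1D bouncing ball; the Euler integration, restitution coefficient, and function names below are illustrative assumptions, not the paper's model:

```python
import random

def hybrid_extend(state, rng, p_jump=0.5):
    """Toy hybrid extension for a 1D bouncing ball, state = (height, velocity).
    A jump (impact with restitution) is only admissible on the jump set
    (ground contact while moving downward); otherwise the state flows."""
    h, v = state
    on_jump_set = h <= 0.0 and v < 0.0
    if on_jump_set and rng.random() < p_jump:
        return (0.0, -0.8 * v), "jump"     # restitution coefficient 0.8
    dt, g = 0.01, 9.81                     # one Euler step of ballistic flow
    return (h + dt * v, v - dt * g), "flow"
```

The key point mirrored here is that jump transitions are gated by membership in the jump set, so the random flow/jump choice never produces a physically inadmissible discrete transition.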

Summary Table: Example Domains and HySST Features

Application Domain       Hybrid Elements     HySST Performance
Actuated Bouncing Ball   Flow & Jump         Single-bounce plan, ~200 vertices
Tensegrity Multicopter   Flow & Collision    Efficient bounce exploitation in 6D

SST and HySST are grounded in the theoretical foundation of sampling-based kinodynamic planning as discussed by Li, Littlefield, and Bekris (IJRR 2016). The sparsification and pruning principles adapted to hybrid systems by Wang and Sanfelice (UCSC TR-HSL-02-2023) broaden their applicability to real-world robotic domains where discrete transitions and nontrivial cost landscapes are present.

This suggests future work may further generalize SST to richer hybrid systems or integrate adaptive sparsification (Wang et al., 2023).
