HGS-PRM: Heuristic Greedy Search for PRMs
- HGS-PRM is a method that augments classical probabilistic roadmap planning with landmark-based heuristics for efficient multi-query shortest-path searches.
- It preprocesses the roadmap by computing landmark-rooted shortest-path trees, significantly reducing the number of node expansions during A* search.
- Empirical results demonstrate that HGS-PRM achieves up to 20× speed-ups in cluttered settings by balancing preprocessing costs with rapid per-query performance.
The Heuristic Greedy Search algorithm on a Probabilistic Roadmap (HGS-PRM) is a technique for efficiently answering multiple shortest-path queries on a fixed roadmap by augmenting classical PRM motion planning with a landmark-based admissible heuristic. By paying a one-time preprocessing cost to compute landmark-rooted shortest-path trees, HGS-PRM dramatically reduces per-query search effort, making it effective for multi-query scenarios commonly found in robotic motion planning. The method computes, stores, and exploits distance profiles from selected landmarks to produce a highly informative heuristic for use with the A* search procedure, resulting in query phase speed-ups and efficient search space pruning, particularly in cluttered or complex environments (Paden et al., 2017).
1. Preprocessing and Landmark Construction
The preprocessing phase selects a small set of landmarks $L = \{\ell_1, \dots, \ell_k\} \subseteq V$ from the PRM graph $G = (V, E)$, where typically $k \ll |V|$ (for example, a small constant such as $k = 50$). Landmarks may be chosen uniformly at random from $V$ or via a farthest-point strategy, wherein each new landmark maximizes the graph distance to the already selected landmarks. For each landmark $\ell \in L$, Dijkstra’s algorithm (or an equivalent single-source shortest-path algorithm) builds a shortest-path tree rooted at $\ell$ and computes an array $d_\ell[v]$, the cost from $\ell$ to every $v \in V$. This preprocessing requires $O(k(|E| + |V|\log|V|))$ time and $O(k|V|)$ memory to store the distance arrays.
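The preprocessing step above can be sketched as follows. This is a minimal illustration, not the reference implementation: the 4-node roadmap, the function names, and the choice of two landmarks are all hypothetical.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest-path costs over a weighted adjacency list."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def select_landmarks(graph, k, start):
    """Farthest-point selection: each new landmark maximizes the graph
    distance to the landmarks already chosen."""
    landmarks = [start]
    dist_to_set = dijkstra(graph, start)
    while len(landmarks) < k:
        far = max(graph, key=lambda v: dist_to_set[v])
        landmarks.append(far)
        d_far = dijkstra(graph, far)
        dist_to_set = {v: min(dist_to_set[v], d_far[v]) for v in graph}
    return landmarks

def preprocess(graph, landmarks):
    """One distance array d_l[v] per landmark l."""
    return {l: dijkstra(graph, l) for l in landmarks}

# Hypothetical 4-node roadmap for illustration.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
landmarks = select_landmarks(graph, 2, "A")
tables = preprocess(graph, landmarks)
```

With the seed landmark A, the farthest-point rule picks D (graph distance 4), and the resulting `tables` hold one distance array per landmark, as required for the heuristic of the next section.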
2. Heuristic Computation
HGS-PRM defines the landmark heuristic for any two vertices $u, v \in V$ as

$$h_L(u, v) = \max_{\ell \in L} \big| d_\ell[u] - d_\ell[v] \big|.$$
For the canonical query from source $s$ to target $t$, the per-node heuristic is $h(v) = h_L(v, t)$. By construction, the heuristic is admissible due to the triangle inequality: for each $\ell$, $|d_\ell[v] - d_\ell[t]| \le d(v, t)$, where $d(v, t)$ is the actual shortest-path cost, and taking the maximum over $\ell \in L$ preserves this lower bound. The heuristic is also consistent (monotone): for every edge $(u, v) \in E$ with weight $w(u, v)$, $h(u) \le w(u, v) + h(v)$, which ensures optimality in A* search. This property follows from the triangle inequality applied to each landmark term together with the nonnegative edge weights.
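A minimal sketch of the heuristic follows; the precomputed tables for two landmarks (A and D) are hypothetical values for a small roadmap in which the true cost $d(B, D)$ is 3, so the example also exhibits admissibility.

```python
# Hypothetical precomputed distance tables for landmarks A and D on a
# small roadmap; the true shortest-path cost d(B, D) is 3 here.
tables = {
    "A": {"A": 0, "B": 1, "C": 3, "D": 4},
    "D": {"A": 4, "B": 3, "C": 1, "D": 0},
}

def landmark_heuristic(tables, v, t):
    """h_L(v, t) = max_l |d_l[v] - d_l[t]|; each term lower-bounds the
    true cost d(v, t) by the triangle inequality, so the max does too."""
    return max(abs(d[v] - d[t]) for d in tables.values())

h = landmark_heuristic(tables, "B", "D")  # equals 3 = d(B, D): a tight bound
```

Here the bound is exact; in general $h_L$ only lower-bounds the true cost, and sharper bounds follow from landmarks that lie "behind" the source or target.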
3. Query Evaluation via A* Search
To answer a shortest-path query from $s$ to $t$, HGS-PRM uses A* search with the landmark-based heuristic $h(v) = h_L(v, t)$. The algorithm maintains for each vertex $v$ a value $g(v)$ (the cost of the best path from $s$ to $v$ found so far), parent pointers for path reconstruction, and $f(v) = g(v) + h(v)$. The search proceeds by expanding the node in the open set with minimum $f$-value (using a priority queue), terminating upon expansion of $t$. For every neighbor $v$ of the expanded node $u$, it updates $g(v)$ and the parent pointer whenever $g(u) + w(u, v) < g(v)$, adjusting $f(v)$ and the queue accordingly. Because the heuristic is consistent, the $g$-value of a node upon expansion is guaranteed optimal.
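The query phase can be sketched as a standard A* loop over the precomputed tables. The roadmap, the landmark tables, and the query endpoints below are hypothetical illustration values, not data from the cited work.

```python
import heapq

def astar(graph, s, t, h):
    """A* search; with a consistent h, g(t) is optimal when t is expanded."""
    g = {s: 0}
    parent = {s: None}
    pq = [(h(s), s)]          # priority queue ordered by f = g + h
    closed = set()
    while pq:
        _, u = heapq.heappop(pq)
        if u in closed:
            continue          # stale entry
        closed.add(u)
        if u == t:            # terminate upon expansion of the target
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return g[t], path[::-1]
        for v, w in graph[u]:
            if g[u] + w < g.get(v, float("inf")):
                g[v] = g[u] + w      # shorter path found: relax and re-queue
                parent[v] = u
                heapq.heappush(pq, (g[v] + h(v), v))
    return float("inf"), []

# Hypothetical roadmap and precomputed landmark tables (landmarks A, D).
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
tables = {
    "A": {"A": 0, "B": 1, "C": 3, "D": 4},
    "D": {"A": 4, "B": 3, "C": 1, "D": 0},
}
h = lambda v: max(abs(d[v] - d["D"]) for d in tables.values())
cost, path = astar(graph, "A", "D", h)
```

The lazy-deletion pattern (skipping stale queue entries) avoids a decrease-key operation, which Python's `heapq` does not provide.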
Illustrative Example:
A 6-node PRM graph with nodes A–F and a small landmark set yields per-landmark distance arrays, and for a single query the computed heuristics allow A* search to expand only 4 nodes, compared to the potential 6 for exhaustive search, demonstrating significant pruning power (Paden et al., 2017).
4. Complexity, Trade-offs, and Empirical Observations
HGS-PRM's one-time preprocessing cost is $O(k(|E| + |V|\log|V|))$ in time and $O(k|V|)$ in space. Per-query cost is $O(|E| + |V|\log|V|)$ in the worst case, matching A*, but is generally much smaller in practice if the heuristic is informative: each node expansion involves $O(\deg(v))$ edge checks and $O(k)$ work to evaluate $h$. In contrast:
| Algorithm | Preprocessing | Query Time (worst case) | Node Expansions |
|---|---|---|---|
| Dijkstra | None | $O(\lvert E \rvert + \lvert V \rvert \log \lvert V \rvert)$ | All reachable nodes |
| A* (Euclidean $h$) | None | $O(\lvert E \rvert + \lvert V \rvert \log \lvert V \rvert)$ | Up to $\lvert V \rvert$ |
| HGS-PRM | $O(k(\lvert E \rvert + \lvert V \rvert \log \lvert V \rvert))$ | $O(\lvert E \rvert + \lvert V \rvert \log \lvert V \rvert)$ | Small if $h$ is sharp |
Empirically, HGS-PRM achieves speed-ups of up to $20\times$ in cluttered settings with landmark counts up to roughly $100$, as the heuristic eliminates a significant fraction of the search space. For $|V| = n$ nodes and a fraction $\rho$ of the search space eliminated by $h$, A* explores roughly $(1 - \rho)n$ nodes, whereas Dijkstra explores all $n$.
The break-even point depends on the number of queries $Q$: the total run-time is

$$T_{\text{total}}(Q) = T_{\text{pre}} + Q \cdot T_{\text{query}}.$$
For sufficiently large $Q$, the amortized per-query cost $T_{\text{pre}}/Q + T_{\text{query}}$ becomes substantially less than that of running Dijkstra or A* with standard heuristics.
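The break-even arithmetic can be made concrete. The timings below are purely hypothetical placeholders, chosen only to show the calculation; the source does not report these numbers.

```python
def breakeven_queries(t_pre, t_fast, t_slow):
    """Smallest query count Q at which t_pre + Q*t_fast < Q*t_slow,
    i.e. Q > t_pre / (t_slow - t_fast)."""
    return t_pre / (t_slow - t_fast)

# Hypothetical timings: 2.0 s of preprocessing, 5 ms per HGS-PRM query,
# 45 ms per plain Dijkstra query -> preprocessing pays off after 50 queries.
q_star = breakeven_queries(2.0, 0.005, 0.045)
```

Past `q_star` queries, every additional query is pure gain; below it, the preprocessing cost dominates and plain Dijkstra or A* is cheaper overall.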
5. Multi-Query Applicability and Example Scenario
HGS-PRM is well suited for multi-query environments where the PRM graph remains static but many shortest-path queries must be answered between arbitrary pairs $(s, t)$. Practical robotics tasks, including repeated replanning with shifting start and goal positions in an otherwise static environment, benefit significantly, as the preprocessing investment is amortized over many queries.
In a representative 6-node example with given edge weights and chosen landmarks, precomputed landmark distances enable the heuristic to sharply reduce the number of node expansions required for shortest-path queries. Only those nodes whose $f$-values are optimal or near-optimal are explored, in contrast to exhaustive methods.
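The source does not list the example's edge weights, so the following is a hypothetical 6-node roadmap (nodes A–F, landmarks A and F) constructed to exhibit the same kind of pruning: landmark-guided A* expands 4 nodes where Dijkstra expands all 6.

```python
import heapq

# Hypothetical 6-node roadmap: short corridor A-B-C-F plus a dead-end
# branch A-D-E that an uninformed search also explores.
graph = {
    "A": [("B", 1), ("D", 1)],
    "B": [("A", 1), ("C", 1)],
    "C": [("B", 1), ("F", 1)],
    "D": [("A", 1), ("E", 1)],
    "E": [("D", 1)],
    "F": [("C", 1)],
}

def dijkstra_table(src):
    """Distance array d_src[v] for one landmark."""
    dist = {v: float("inf") for v in graph}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

tables = {l: dijkstra_table(l) for l in ("A", "F")}  # landmarks A and F

def expansions(h, s="A", t="F"):
    """Run A* with heuristic h; return (#nodes expanded, path cost)."""
    g = {s: 0}
    pq = [(h(s), s)]
    closed = set()
    while pq:
        _, u = heapq.heappop(pq)
        if u in closed:
            continue
        closed.add(u)
        if u == t:
            return len(closed), g[t]
        for v, w in graph[u]:
            if g[u] + w < g.get(v, float("inf")):
                g[v] = g[u] + w
                heapq.heappush(pq, (g[v] + h(v), v))
    return len(closed), float("inf")

h_landmark = lambda v: max(abs(d[v] - d["F"]) for d in tables.values())
astar_exp, cost = expansions(h_landmark)     # landmark-guided A*
dijkstra_exp, _ = expansions(lambda v: 0)    # Dijkstra = A* with h = 0
```

The landmark at F assigns the dead-end nodes D and E heuristic values larger than the optimal path cost, so A* never expands them, while Dijkstra visits the entire graph before settling F.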
6. Relationship to Classical Methods
Compared with Dijkstra's algorithm, which performs no preprocessing and typically expands all reachable nodes, and with A* under a naive Euclidean heuristic, which may provide weak guidance in cluttered graphs, HGS-PRM offers an advantageous balance in multi-query settings. Its heuristic is tailored to the topology and geometry of the actual roadmap and is consistently tight, owing to the triangle inequality applied across multiple, strategically selected landmarks. The method does, however, require additional memory and preprocessing time, an explicit trade-off against per-query efficiency (Paden et al., 2017).
7. Limitations and Applicability Scope
The efficiency gains of HGS-PRM arise principally in scenarios dominated by many queries over the same PRM structure, with amortized per-query cost decreasing as the number of queries increases. In environments where the roadmap is frequently reconstructed or landmark distances become obsolete, or where the effective search space is already small, the amortization advantage diminishes. This suggests that HGS-PRM is most beneficial when the number of queries is large and the roadmap topology is sufficiently complex to justify a richer heuristic. A plausible implication is that environments with high clustering or labyrinthine connectivity stand to benefit most markedly from this approach (Paden et al., 2017).