
HGS-PRM: Heuristic Greedy Search for PRMs

Updated 3 January 2026
  • HGS-PRM is a method that augments classical probabilistic roadmap planning with landmark-based heuristics for efficient multi-query shortest-path searches.
  • It preprocesses the roadmap by computing landmark-rooted shortest-path trees, significantly reducing the number of node expansions during A* search.
  • Empirical results demonstrate that HGS-PRM achieves up to 20× speed-ups in cluttered settings by balancing preprocessing costs with rapid per-query performance.

The Heuristic Greedy Search algorithm on a Probabilistic Roadmap (HGS-PRM) is a technique for efficiently answering multiple shortest-path queries on a fixed roadmap by augmenting classical PRM motion planning with a landmark-based admissible heuristic. By paying a one-time preprocessing cost to compute landmark-rooted shortest-path trees, HGS-PRM dramatically reduces per-query search effort, making it effective for multi-query scenarios commonly found in robotic motion planning. The method computes, stores, and exploits distance profiles from selected landmarks to produce a highly informative heuristic for use with the A* search procedure, resulting in query phase speed-ups and efficient search space pruning, particularly in cluttered or complex environments (Paden et al., 2017).

1. Preprocessing and Landmark Construction

The preprocessing phase selects a small set $L \subset V$ of $k$ landmarks from the PRM graph $G = (V, E)$, where typically $k \ll |V|$ (for example, $k = O(\log |V|)$ or a small constant such as $50$). Landmarks may be chosen uniformly at random from $V$ or via a farthest-point strategy, wherein each new landmark maximizes the graph distance to the already selected landmarks. For each landmark $\ell \in L$, Dijkstra's algorithm (or an equivalent single-source shortest-path algorithm) builds a shortest-path tree $T_\ell$ rooted at $\ell$ and computes an array $\mathrm{dist}_\ell(v)$, the cost from $\ell$ to every $v \in V$. This preprocessing requires $O(k \cdot (E + V \log V))$ time and $O(k \cdot |V|)$ memory to store the distance arrays.
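The preprocessing phase can be sketched in a few lines. The adjacency-list format, function names, and seed-vertex choice below are illustrative assumptions, not part of the published method:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths over an adjacency list {u: [(v, w), ...]}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def select_landmarks(adj, k, seed):
    """Farthest-point selection: each new landmark maximizes the distance to
    the nearest landmark chosen so far; returns (landmarks, distance tables)."""
    landmarks = [seed]
    tables = {seed: dijkstra(adj, seed)}
    while len(landmarks) < k:
        # distance from each vertex to its nearest already-chosen landmark
        nearest = {v: min(t.get(v, float("inf")) for t in tables.values())
                   for v in adj}
        nxt = max(nearest, key=nearest.get)  # farthest vertex becomes a landmark
        landmarks.append(nxt)
        tables[nxt] = dijkstra(adj, nxt)
    return landmarks, tables
```

On a path graph A–B–C–D with unit weights, seeding at A, the second landmark selected is D, the farthest vertex, as the strategy intends.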

2. Heuristic Computation

HGS-PRM defines the landmark heuristic for any two vertices $v, w \in V$ as

$$h(v, w) := \max_{\ell \in L} \left| \mathrm{dist}_\ell(v) - \mathrm{dist}_\ell(w) \right|.$$

For the canonical query from source $s$ to target $t$, the per-node heuristic is $h(v) := h(v, t)$. By construction, the heuristic is admissible due to the triangle inequality: for each $\ell$, $|\mathrm{dist}_\ell(v) - \mathrm{dist}_\ell(t)| \leq d(v, t)$, where $d(v, t)$ is the actual shortest-path cost, and the maximum over landmarks remains a lower bound on $d(v, t)$. The heuristic is also consistent (monotone): for every edge $(u, v) \in E$, $h(u) \leq \mathrm{cost}(u, v) + h(v)$, which ensures optimality in A* search. This property follows from the same triangle inequality together with the nonnegative edge weights.
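Given the precomputed tables, evaluating $h$ is an $O(k)$ maximization over the landmarks. The sketch below, on a hypothetical toy roadmap, also checks admissibility against true Dijkstra distances:

```python
import heapq

def dijkstra(adj, source):
    # Same single-source shortest-path pass used in preprocessing.
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def landmark_heuristic(tables, v, w):
    """h(v, w) = max over landmarks l of |dist_l(v) - dist_l(w)|."""
    return max(abs(t[v] - t[w]) for t in tables.values())

# Toy undirected roadmap: a square with one diagonal (weights illustrative)
adj = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
tables = {l: dijkstra(adj, l) for l in ("A", "D")}  # two landmarks

# Admissibility: h(v, t) never exceeds the true shortest-path cost d(v, t)
true_dist = dijkstra(adj, "A")
assert all(landmark_heuristic(tables, v, "A") <= true_dist[v] for v in adj)
```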

3. Query Phase: A* Search

To answer a shortest-path query from $s$ to $t$, HGS-PRM runs A* search with the landmark-based heuristic $h(v)$. The algorithm maintains, for each vertex $v$, a value $g[v]$ (the cost of the best path from $s$ to $v$ found so far), parent pointers for path reconstruction, and $f[v] := g[v] + h(v)$. The search repeatedly expands the vertex $v$ with minimum $f[v]$ (using a priority queue), terminating upon expansion of $t$. For every neighbor $w$ of $v$, it updates $g[w]$ and the parent pointer if a shorter path is found, adjusting $f[w]$ and the queue accordingly. Because the heuristic is consistent, the value $g[t]$ at expansion of $t$ is guaranteed optimal.

Illustrative Example:

A 6-node PRM graph with nodes A–F and landmark set $L = \{B, E\}$ yields landmark distance arrays, and for a query $A \rightarrow F$ the computed heuristics let A* expand only 4 nodes, compared with 6 for exhaustive search, showing significant pruning power (Paden et al., 2017).
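A runnable version of this example, using hypothetical edge weights (the source does not specify them), landmarks B and E, and the query $A \rightarrow F$:

```python
import heapq

def dijkstra(adj, s, t=None):
    """Dijkstra; returns (dist map, expansions). Stops early once t is expanded."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    closed = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in closed:
            continue
        closed.add(u)
        if u == t:
            break
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist, len(closed)

def astar(adj, s, t, h):
    """A* with a consistent heuristic h(v); returns (cost, expansions)."""
    g = {s: 0.0}
    pq = [(h(s), s)]
    closed = set()
    while pq:
        f, u = heapq.heappop(pq)
        if u in closed:
            continue
        closed.add(u)
        if u == t:  # with consistent h, g[t] is optimal on expansion
            return g[u], len(closed)
        for v, w in adj[u]:
            if g[u] + w < g.get(v, float("inf")):
                g[v] = g[u] + w
                heapq.heappush(pq, (g[v] + h(v), v))
    return float("inf"), len(closed)

# Hypothetical 6-node roadmap (undirected): a cheap top route A-B-C-F
# and a decoy bottom route A-D-E-F with one expensive final edge.
edges = [("A", "B", 2), ("B", "C", 2), ("C", "F", 2),
         ("A", "D", 1), ("D", "E", 1), ("E", "F", 10)]
adj = {}
for u, v, w in edges:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

# Preprocessing: distance tables for landmarks B and E
tables = {l: dijkstra(adj, l)[0] for l in ("B", "E")}
h = lambda v: max(abs(t[v] - t["F"]) for t in tables.values())

cost, hgs_exp = astar(adj, "A", "F", h)
_, dij_exp = dijkstra(adj, "A", t="F")
```

On this graph the landmark heuristic steers A* straight down the top route: A* expands 4 nodes (A, B, C, F) for the optimal cost 6, while Dijkstra with early termination expands all 6, matching the pruning behavior described above.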

4. Complexity, Trade-offs, and Empirical Observations

HGS-PRM's one-time preprocessing cost is $O(k \cdot (E + V \log V))$ in time and $O(k \cdot |V|)$ in space. Per-query cost is $O(E + V \log V)$ in the worst case, matching A*, but is generally much smaller in practice when the heuristic is informative: each node expansion involves $O(\deg(v))$ edge checks and $O(k)$ work to evaluate $h(v)$. In contrast:

| Algorithm | Preprocessing | Query time | Node expansions |
| --- | --- | --- | --- |
| Dijkstra | None | $O(E + V \log V)$ | $\approx \lvert V \rvert$ |
| A* (Euclidean $h_E$) | None | $O(E + V \log V)$ | Fewer than $\lvert V \rvert$ when $h_E \leq d(s,t)$ is informative |
| HGS-PRM | $O(k \cdot (E + V \log V))$ | $O(E + V \log V)$ | $\ll \lvert V \rvert$ if $h$ is sharp |

Empirically, HGS-PRM achieves $5\times$–$20\times$ speed-ups in cluttered settings once $k \approx 50$–$100$, as the heuristic eliminates a significant fraction $\alpha$ of the search space: for $N = |V|$ and fraction $\alpha$ eliminated by $h$, A* explores $(1-\alpha)N$ nodes, whereas Dijkstra explores $N$.

The break-even point depends on the number of queries $Q$: the total run-time is

$$T_{\text{total}} = k \cdot (E + V \log V) + Q \cdot T_{\text{query}}.$$

For $Q \gg k$, the amortized per-query cost becomes substantially less than that of running Dijkstra or A* with standard heuristics.
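A back-of-the-envelope break-even calculation under purely hypothetical timings (the roadmap size, landmark count, and per-operation costs below are made up for illustration):

```python
# Hypothetical costs for a roadmap; none of these numbers come from the paper.
k = 50                      # landmarks
t_sssp = 0.02               # seconds per Dijkstra pass (one per landmark)
t_query_dijkstra = 0.02     # plain Dijkstra, per query
t_query_hgs = 0.004         # A* with landmark heuristic (assumed 5x speed-up)

preprocessing = k * t_sssp  # one-time cost: 1.0 s

# Break-even Q solves: preprocessing + Q * t_query_hgs = Q * t_query_dijkstra
q_break_even = preprocessing / (t_query_dijkstra - t_query_hgs)
print(q_break_even)  # ~62.5 queries
```

Beyond roughly that many queries, the one-time landmark preprocessing has paid for itself, consistent with the $Q \gg k$ condition above.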

5. Multi-Query Applicability and Example Scenario

HGS-PRM is well suited to multi-query settings where the PRM graph remains static but many shortest-path queries must be answered between arbitrary pairs $s, t \in V$. Practical robotics tasks, such as repeated replanning over a static roadmap with shifting start/goal positions, benefit significantly, as the preprocessing investment is amortized over many queries.

In a representative 6-node example with edge weights and chosen landmarks, precomputed landmark distances enable the heuristic to sharply reduce the number of node expansions required for shortest-path queries: only nodes with optimal or near-optimal $f$-values are explored, in contrast to exhaustive methods.

6. Relationship to Classical Methods

Compared with Dijkstra's algorithm, which performs no preprocessing and typically expands all reachable nodes, and A* with a naive Euclidean heuristic, which may provide weak guidance in cluttered graphs, HGS-PRM offers an advantageous balance in multi-query settings. Its heuristic is tailored to the topology and geometry of the actual roadmap, and is consistently tight because the triangle inequality is applied to multiple, strategically selected landmarks. The method does, however, require $O(k \cdot |V|)$ additional memory and preprocessing time, an explicit trade-off against per-query efficiency (Paden et al., 2017).

7. Limitations and Applicability Scope

The efficiency gains of HGS-PRM arise principally in scenarios dominated by many queries over the same PRM structure, with amortized per-query cost decreasing as the number of queries $Q$ increases. In environments where the roadmap $G$ is frequently reconstructed, where landmark distances become obsolete, or where the effective search space is already small, the amortization advantage diminishes. This suggests that HGS-PRM is most beneficial when $Q \gg k$ and the roadmap topology is sufficiently complex to justify a richer heuristic. A plausible implication is that environments with high clustering or labyrinthine connectivity stand to benefit most markedly from this approach (Paden et al., 2017).
