
Discrete Hierarchical Planning (DHP)

Updated 30 January 2026
  • Discrete Hierarchical Planning is a framework that decomposes complex planning tasks into tractable subproblems using discrete, hierarchical abstractions.
  • It integrates techniques from hybrid generative models, hierarchical MDPs, and symbolic planning to effectively manage sparse rewards and large state spaces.
  • Empirical results demonstrate enhanced sample efficiency and rapid skill transfer in robotics and AI, validating the approach for long-horizon control.

Discrete Hierarchical Planning (DHP) is a class of planning algorithms and modeling frameworks that leverage hierarchical task, skill, or mode abstractions at a discrete level, enabling efficient, scalable, and sample-efficient solutions for long-horizon decision-making, planning, and control. DHP synthesizes ideas from hybrid generative models, hierarchical MDPs, discrete skill libraries, automated planning languages, and hierarchical reinforcement learning. Approaches in this domain have demonstrably advanced planning in robotics, reinforcement learning, and classical AI, especially in environments with sparse rewards, large or unstructured state spaces, and multi-level temporal abstraction.

1. Foundations and Formal Structures

DHP systems are characterized by multi-level architectures where planning at a higher level unfolds over discrete abstractions—such as options, intentions, modes, or compound tasks—while lower levels handle continuous execution or primitive actions. The separation of concerns enables tractability and interpretability in otherwise intractable or high-dimensional systems.

Discrete hierarchical abstraction can be instantiated in multiple ways, ranging from hybrid switching models and latent intention spaces to option libraries and symbolic task networks, as surveyed in the sections below.

The essential structural property across frameworks is the explicit or implicit constraint that higher-level nodes control when and how low-level controllers, skills, or actions are invoked, thus supporting both temporal abstraction and sample-efficient exploration.

2. Model Architectures and Learning Principles

DHP approaches are unified by their reliance on discrete, symbolic, or categorical structures at higher planning levels, linked via learned or engineered transition, reward, and feasibility models. Notable realizations include:

  • rSLDS Planning: The rSLDS parameterizes both the continuous state transitions:

x_{t+1} = A_{z_t}\,x_t + B_{z_t}\,u_t + \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, Q_{z_t})

and the discrete mode transitions via a softmax over the current x_t, u_t:

p(z_{t+1} \mid x_t, u_t) = \mathrm{softmax}(W_x x_t + W_u u_t + r)

Model learning uses conjugate matrix-normal-inverse-Wishart priors and Laplace-variational EM, alternately inferring (x_{1:T}, z_{1:T}) and updating parameters (Collis et al., 2024).
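The rSLDS generative step above can be sketched as follows. This is a minimal simulation of the forward model only (not the variational learning procedure); all parameter values, dimensions, and the dictionary layout are illustrative placeholders.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rslds_step(x, u, z, params, rng):
    """One rSLDS transition: a linear-Gaussian continuous step under
    the current mode z, then a softmax draw of the next discrete mode
    conditioned on (x, u)."""
    A, B, Q = params["A"][z], params["B"][z], params["Q"][z]
    x_next = A @ x + B @ u + rng.multivariate_normal(np.zeros(len(x)), Q)
    logits = params["Wx"] @ x + params["Wu"] @ u + params["r"]
    z_next = rng.choice(len(logits), p=softmax(logits))
    return x_next, z_next

# Toy instantiation: 2 modes, 2-D state, 1-D control (values illustrative).
rng = np.random.default_rng(0)
K, d, m = 2, 2, 1
params = {
    "A": [0.9 * np.eye(d) for _ in range(K)],
    "B": [np.ones((d, m)) for _ in range(K)],
    "Q": [0.01 * np.eye(d) for _ in range(K)],
    "Wx": rng.standard_normal((K, d)),
    "Wu": rng.standard_normal((K, m)),
    "r": np.zeros(K),
}
x, z = np.zeros(d), 0
for _ in range(10):
    x, z = rslds_step(x, np.array([0.1]), z, params, rng)
```

Rolling out this hybrid model yields a trajectory of continuous states annotated with discrete mode labels, which is exactly the segmentation the high-level planner operates over.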

  • Low-Dimensional Latent Planning: Latent variable models encode high-dimensional observations s \in \mathbb{R}^n into compact latent states z \in \mathbb{R}^d and plan discrete "intentions" c \in \{1, \ldots, C\} by simulating latent transitions. Planning proceeds in this space using particle filtering and reward shaping (Ha et al., 2020).
  • Discrete Option Construction and Skill Libraries: Hierarchical abstraction operates by identifying (and recursively constructing) abstract actions (“skills” or “options”) with local precondition-effect structure, enabling backward planning and rapid skill reuse (Morere et al., 2019).
  • Recursive Subgoal Trees and Reachability: A policy \pi_\theta decomposes a long-horizon goal (s_t, s_g) into a binary tree of subgoals, with each node corresponding to a reachability test over a finite horizon K. Tree-shaped return estimators favor both completeness and plan brevity (Sharma et al., 4 Feb 2025).
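The recursive binary decomposition in the last bullet can be sketched as below. The `reachable` and `midpoint` callables stand in for the learned reachability test and subgoal proposer; both names, the depth bound, and the toy 1-D instantiation are assumptions for illustration.

```python
def plan_subgoals(s, g, reachable, midpoint, depth=0, max_depth=6):
    """Recursively split (s, g) into a binary subgoal tree.
    `reachable(s, g)` tests finite-horizon feasibility under the
    low-level policy; `midpoint(s, g)` proposes an intermediate subgoal.
    Returns an ordered list of subgoals, or None if no plan is found."""
    if reachable(s, g):
        return [g]                      # leaf: directly achievable
    if depth >= max_depth:
        return None                     # no feasible plan at this depth
    m = midpoint(s, g)
    left = plan_subgoals(s, m, reachable, midpoint, depth + 1, max_depth)
    right = plan_subgoals(m, g, reachable, midpoint, depth + 1, max_depth)
    if left is None or right is None:
        return None
    return left + right

# Toy 1-D example: states are numbers, a segment is "reachable"
# when it is short enough for the (imaginary) low-level policy.
plan = plan_subgoals(0.0, 8.0,
                     reachable=lambda s, g: abs(g - s) <= 1.0,
                     midpoint=lambda s, g: (s + g) / 2.0)
# plan is the ordered subgoal sequence [1.0, 2.0, ..., 8.0]
```

Each internal node of the implicit tree splits a goal into two halves; leaves are segments the low-level policy can traverse directly, matching the reachability-test semantics described above.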

3. Planning Algorithms and Execution Mechanisms

Central to DHP is the decomposition of complex planning into tractable subproblems via discrete abstraction:

  • High-Level Discrete MDP Planning: Given S = \{1, \dots, K\} discrete modes (options, intentions, skills), a high-level Bayesian MDP (S, A, P, R, \pi_0) is constructed where actions A select target modes or subgoals. Planning minimizes cumulative costs and includes information-theoretic exploration bonuses (parameter and state IG), leading to active uncertainty reduction (Collis et al., 2024). The objective may be formalized as:

J(a0:H1;s0)=EP,a[t=0H1R(St,at)+βpDKL[P(θDt+1)P(θDt)]+βsDKL[P(St+1)P(St+1St,at)]]J(a_{0:H-1}; s_0) = \mathbb{E}_{P,a}\left[\sum_{t=0}^{H-1} R(S_t, a_t) + \beta_p D_{KL}[P(\theta|D_{t+1}) \,\|\, P(\theta|D_t)] + \beta_s D_{KL}[P(S_{t+1}|\cdot) \,\|\, P(S_{t+1}|S_t, a_t)] \right]

  • Low-Level Controllers: Primitive actions or fine-grained continuous control is encapsulated in controllers such as LQRs (for each ordered mode pair, with precomputed Riccati gains cached for efficient deployment), neural feedback policies conditioned on intention embeddings, or primitive action models (Collis et al., 2024, Ha et al., 2020).
  • Discrete Hierarchical Backward Planning: In symbolic (HTN or skill-based) regimes, the planner regresses from goal specifications through skill effects, recursively generating subplans to achieve preconditions. Aggressive hierarchy is enforced by bounding recursion depth or favoring long abstract skills (Morere et al., 2019).
  • Tree-Structured Plan Expansion: Recursive binary decomposition, as in hierarchical RL, builds a planning tree where each subtask must be feasible under the lower-level policy in a bounded number of steps. Achievement is verified by an explicit reachability check rather than value approximation (Sharma et al., 4 Feb 2025).
  • Distributed Planning in Hierarchical MDPs: In multi-agent or factored settings, message-passing algorithms coordinate local plans via reward-sharing over tree-structured decompositions, yielding globally consistent solutions with reuse of flows and value functions among isomorphic subproblems (Guestrin et al., 2012).
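The parameter information-gain term \beta_p D_{KL}[P(\theta|D_{t+1}) \| P(\theta|D_t)] in the objective above can be illustrated with Dirichlet counts over discrete mode transitions: observing one transition updates the posterior counts, and the KL divergence between posterior and prior is the bonus. This is a minimal sketch of that bookkeeping, not the cited implementation.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha_post, alpha_prior):
    """KL[Dir(alpha_post) || Dir(alpha_prior)]: the parameter
    information gain from updated discrete-transition counts."""
    a0_q, a0_p = alpha_post.sum(), alpha_prior.sum()
    return (gammaln(a0_q) - gammaln(a0_p)
            - np.sum(gammaln(alpha_post) - gammaln(alpha_prior))
            + np.sum((alpha_post - alpha_prior)
                     * (digamma(alpha_post) - digamma(a0_q))))

# Observing one transition into successor mode 0 updates the counts:
prior = np.ones(3)            # uniform Dirichlet over 3 successor modes
post = prior.copy()
post[0] += 1.0
ig = dirichlet_kl(post, prior)  # positive: the observation was informative
```

A planner maximizing this bonus is driven toward mode transitions whose outcome distribution is still uncertain, which is the "active uncertainty reduction" behavior described above.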

4. Temporal Abstraction, Exploration Strategies, and Task Discovery

DHP supports temporal abstraction and sample-efficient exploration by identifying, validating, and exploiting discrete subgoals and skill boundaries:

  • Subgoal/Option Discovery: Discrete modes or intentions are mapped to polyhedral regions in continuous or latent space; each transition is associated with a temporally extended "option" or skill whose completion triggers re-planning (Collis et al., 2024, Ha et al., 2020). Targets x_j^* are chosen via gradient ascent in parameterized softmax transition models.
  • Curriculum and Skill Refinement: New abstract skills and their success conditions are directly learned from successful trajectories, forming hierarchical DAGs for recursive skill application. Curriculum learning schedules goal complexity to ensure skill sets expand as needed (Morere et al., 2019).
  • Information-Theoretic and Intrinsic Exploration: Planning objectives include information-gain bonuses (KL-divergence over Dirichlet counts, transition entropy), and exploration agents may be intrinsically rewarded for high reconstruction error under contrastive or variational models, thereby generating new, informative training examples not reliant on expert data (Collis et al., 2024, Sharma et al., 4 Feb 2025).
  • Advantage and Return Estimation: Specialized estimators (e.g., "min-tree" return) ensure that shorter, complete plans are favored and that no partial solutions are encouraged. The operator G_i = \min(R_{2i+1} + \gamma G_{2i+1}, R_{2i+2} + \gamma G_{2i+2}) is a contraction and admits stable policy gradient updates (Sharma et al., 4 Feb 2025).
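The min-tree operator in the last bullet can be sketched over a binary plan tree stored heap-style (node i has children 2i+1 and 2i+2). The leaf convention (a leaf contributes return 0, so its reward enters via its parent's min) is an assumption of this sketch.

```python
def min_tree_return(rewards, gamma=0.99):
    """Min-tree return over a binary plan tree in a heap-style array.
    Internal nodes apply
        G_i = min(R_{2i+1} + gamma * G_{2i+1}, R_{2i+2} + gamma * G_{2i+2}),
    so a single infeasible (low-reward) branch penalizes the whole plan.
    Leaves contribute G = 0 (convention assumed here)."""
    n = len(rewards)

    def g(i):
        l, r = 2 * i + 1, 2 * i + 2
        if l >= n:                      # leaf: no further subtasks
            return 0.0
        return min(rewards[l] + gamma * g(l), rewards[r] + gamma * g(r))

    return g(0)

# Complete 7-node tree with gamma = 1: the return is bottlenecked by the
# cheaper of the two subtrees at every level.
G = min_tree_return([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0], gamma=1.0)
```

Because min is monotone and 1-Lipschitz in each argument, the discounted update inherits the usual \gamma-contraction property, which is the stability claim cited above.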

5. Representational Formalisms and Expressivity

DHP is realized in both statistical and symbolic planning formalisms:

  • PDDL/HDDL, HTN Extensions: Languages such as HDDL and HDDL 2.1 enable explicit encoding of hierarchical tasks, methods, and primitive actions, with partial or total ordering, variable-constraint logic, and (in HDDL 2.1) durative actions, numeric fluents, and complex temporal constraints. These models undergird symbolic planners for domains with concurrency, multi-agent coordination, and hybrid temporal structure (Höller et al., 2019, Pellier et al., 2022).
  • Latent Variable, CVAE, and RSSM Implementations: For high-dimensional or unstructured domains (visual planning), latent state representations are constructed via variational methods, and reachability is evaluated as cosine similarity in a compact state or transition space, avoiding direct value regression and reducing sample complexity (Sharma et al., 4 Feb 2025, Ha et al., 2020).
  • Hybrid Models and Polyhedral Partitioning: In rSLDS and related models, piecewise-linear regions of state space correspond to discrete high-level behavioral units, supporting both model-based planning and model-free control (Collis et al., 2024).
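The cosine-similarity reachability evaluation mentioned for latent-variable implementations reduces to a dot product between compact embeddings. A minimal sketch, assuming embeddings are already produced by an encoder and using an illustrative threshold:

```python
import numpy as np

def cosine_reachable(z_a, z_b, threshold=0.9):
    """Reachability test as cosine similarity between latent state
    embeddings z_a and z_b; the 0.9 threshold is an illustrative
    choice, not a value from the cited work."""
    sim = float(np.dot(z_a, z_b) /
                (np.linalg.norm(z_a) * np.linalg.norm(z_b)))
    return sim >= threshold, sim

ok, sim = cosine_reachable(np.array([1.0, 0.0]), np.array([1.0, 0.1]))
```

Evaluating reachability as a similarity score avoids regressing a value function, which is the sample-complexity argument made above.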

6. Empirical Results and Theoretical Guarantees

DHP frameworks achieve marked improvements in both sample efficiency and planning quality:

  • Continuous Mountain Car: rSLDS-based DHP achieves ~50% state-space coverage in 10k steps (vs. 20% without IG bonuses), and solves the sparse-reward goal in ~5 episodes, outperforming SAC and standard actor-critic methods (which fail within 20 episodes) (Collis et al., 2024).
  • Long-Horizon Visual Navigation: DHP delivers 99% success and 71-step average in 25-room maze planning under visual observations, compared to 82%/158-step for the best prior method (Sharma et al., 4 Feb 2025).
  • Symbolic Planning and Robotic Transfer: Hierarchical planners with effect/condition skill annotation solve environments with up to 2^{100} states, with plan lengths reduced from 73 to ~25 and planning time from seconds to ms; skills trained in simulation transfer directly to real-robot manipulation (Morere et al., 2019).
  • Distributed and Factored MDPs: Message passing in hierarchical MDPs scales planning to large, multi-agent or multi-room settings, reusing cached flows and message tables among repeated classes and instances (Guestrin et al., 2012).
  • Theoretical Guarantees: Min-tree and related operators are γ-contractions, ensuring the stable convergence of value and policy iterates in tree-structured hierarchical RL (Sharma et al., 4 Feb 2025).

7. Impact, Limitations, and Forward Directions

DHP represents a crosscutting advance in both practical AI planning and the theory of hierarchical control. The integration of discrete abstraction with learned and engineered models addresses the curse of dimensionality and long-horizon credit assignment. However, limitations remain, including sensitivity to representation quality (latent spaces, adjacencies), the need for robust continuous dynamics models, and restrictions inherited from the expressivity of underlying planning languages.

Future directions include combining DHP with richer temporal and symbolic reasoning (e.g., hold-between, numeric fluents in HDDL 2.1 (Pellier et al., 2022)), extending reachability estimation to text or high-level specification spaces, and deploying DHP variants in real-time, safety-critical control for robotic and multi-agent domains.
