
PDDLGym: Gym Environments from PDDL Problems

Published 15 Feb 2020 in cs.AI (arXiv:2002.06432v2)

Abstract: We present PDDLGym, a framework that automatically constructs OpenAI Gym environments from PDDL domains and problems. Observations and actions in PDDLGym are relational, making the framework particularly well-suited for research in relational reinforcement learning and relational sequential decision-making. PDDLGym is also useful as a generic framework for rapidly building numerous, diverse benchmarks from a concise and familiar specification language. We discuss design decisions and implementation details, and also illustrate empirical variations between the 20 built-in environments in terms of planning and model-learning difficulty. We hope that PDDLGym will facilitate bridge-building between the reinforcement learning community (from which Gym emerged) and the AI planning community (which produced PDDL). We look forward to gathering feedback from all those interested and expanding the set of available environments and features accordingly. Code: https://github.com/tomsilver/pddlgym

Citations (50)

Summary

  • The paper introduces PDDLGym, a framework that automatically converts PDDL tasks into Gym environments to bridge reinforcement learning and symbolic planning.
  • It details an architecture leveraging relational observations and action sampling based on operator preconditions to generate diverse benchmark tasks.
  • Results demonstrate that PDDLGym supports varied planning difficulties, facilitating the direct comparison and integration of different AI methodologies.

PDDLGym: An Overview of Gym Environments from PDDL Problems

The paper "PDDLGym: Gym Environments from PDDL Problems" introduces PDDLGym, a framework that automatically turns tasks written in the Planning Domain Definition Language (PDDL) into OpenAI Gym environments. The framework brings together reinforcement learning infrastructure and symbolic AI planning tasks, with relational observations and actions as its core feature. Its aim is to support research in relational reinforcement learning and sequential decision-making by giving researchers a versatile tool for generating diverse benchmarks.

PDDLGym generates environments automatically from PDDL domain and problem files. Its architecture exploits PDDL's symbolic representation to create tasks behind the widely adopted Gym interface. Combining PDDL's relational syntax with Gym's interaction model makes PDDLGym particularly well suited to research that crosses the boundary between traditional reinforcement learning and symbolic planning.

The paper outlines the design principles and implementation mechanics of PDDLGym. In essence, PDDLGym builds a bridge that lets tasks described in PDDL, a language developed for expressing AI planning problems, be translated directly into Gym environments. This gives planning researchers a seamless way to evaluate their methods on RL-style tasks and vice versa, encouraging cross-pollination of methods and insights across the two communities.
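For reference, the kind of specification PDDLGym translates looks like the following standard Blocks World operator (a textbook example, not taken from the paper): an action with typed parameters, a precondition formula, and add/delete effects.

```
(:action pick-up
  :parameters (?b - block)
  :precondition (and (clear ?b) (ontable ?b) (handempty))
  :effect (and (holding ?b)
               (not (clear ?b))
               (not (ontable ?b))
               (not (handempty))))
```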

PDDLGym supports episodic interaction: an agent receives an observation, performs an action, and repeats this loop until the episode ends. The framework leverages the relational character of PDDL, in which observations and actions are expressed as sets of ground predicates over objects. This relational structure is essential for tasks that require reasoning about relationships between entities, a common requirement in AI applications.
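A minimal sketch of this episodic, relational interface (not PDDLGym's actual implementation; all names here are illustrative): a state is a set of ground literals, and a step applies a STRIPS-style operator's add and delete effects.

```python
from dataclasses import dataclass

# A ground literal is a predicate name applied to objects, e.g. ("on", "a", "b").

@dataclass(frozen=True)
class GroundOperator:
    """A fully grounded STRIPS-style operator (illustrative)."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

class RelationalEnv:
    """Gym-style episodic interface over relational states (illustrative sketch)."""
    def __init__(self, init_literals, goal_literals):
        self.init = frozenset(init_literals)
        self.goal = frozenset(goal_literals)
        self.state = None

    def reset(self):
        self.state = self.init
        return self.state  # the observation IS the relational state

    def step(self, op: GroundOperator):
        assert op.preconditions <= self.state, "preconditions must hold"
        self.state = (self.state - op.delete_effects) | op.add_effects
        done = self.goal <= self.state
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}

# A one-step Blocks World episode: pick up block "a" from the table.
env = RelationalEnv(
    init_literals={("ontable", "a"), ("clear", "a"), ("handempty",)},
    goal_literals={("holding", "a")},
)
pickup_a = GroundOperator(
    name="pick-up(a)",
    preconditions=frozenset({("ontable", "a"), ("clear", "a"), ("handempty",)}),
    add_effects=frozenset({("holding", "a")}),
    delete_effects=frozenset({("ontable", "a"), ("clear", "a"), ("handempty",)}),
)
obs = env.reset()
obs, reward, done, info = env.step(pickup_a)
print(done, reward)  # True 1.0
```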

The action space in PDDLGym is defined carefully, distinguishing between free and non-free parameters of operators. This distinction matters because the free parameters are what the agent actually chooses, so the resulting action space more accurately reflects the decision an agent faces in a planning domain. In line with reinforcement learning conventions, PDDLGym also provides action sampling methods that return only actions whose operator preconditions hold in the current state.
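Precondition-filtered sampling can be sketched as follows (again illustrative, not PDDLGym's own API): enumerate groundings of an operator's free parameters and keep only those whose preconditions hold in the current state.

```python
import itertools
import random

def ground(template, binding):
    """Substitute variables (strings starting with '?') with objects."""
    return tuple(binding.get(t, t) for t in template)

def valid_actions(state, objects, params, preconditions):
    """Enumerate groundings of the free parameters whose preconditions
    all hold in the current relational state."""
    actions = []
    for combo in itertools.product(objects, repeat=len(params)):
        binding = dict(zip(params, combo))
        if all(ground(p, binding) in state for p in preconditions):
            actions.append(binding)
    return actions

# unstack(?a, ?b): applicable when ?a is clear, ?a is on ?b, and the hand is empty.
params = ["?a", "?b"]
preconditions = [("clear", "?a"), ("on", "?a", "?b"), ("handempty",)]
state = frozenset({("clear", "c"), ("on", "c", "b"), ("on", "b", "a"),
                   ("ontable", "a"), ("handempty",)})
objects = ["a", "b", "c"]

valid = valid_actions(state, objects, params, preconditions)
print(valid)  # [{'?a': 'c', '?b': 'b'}] -- only unstack(c, b) is applicable
action = random.choice(valid)  # sampling is trivially valid after filtering
```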

PDDLGym's utility is manifold:

  1. It streamlines the creation of benchmark tasks across relational domains, providing a compact and expressive medium for defining these problems through PDDL.
  2. It fosters a comprehensive environment for researchers from reinforcement learning and planning domains to explore algorithms in shared benchmarks, facilitating direct comparison and potentially merging varying approaches.
  3. It opens avenues for further explorations in relational decision-making tasks, enriching research in areas such as learning symbolic operator descriptions and the development of efficient planning strategies through relational configurations.

The 20 built-in environments in PDDLGym vary substantially in both planning difficulty and model-learning difficulty: the paper's experiments show a broad spectrum of challenge for planning as well as for transition-model learning. This breadth makes the framework widely applicable for evaluating and advancing both existing and novel algorithms.

In conclusion, PDDLGym is a practical advance in creating and using diverse AI benchmarks. Future development might extend support to more complex PDDL constructs and add interfaces to online repositories of PDDL tasks. As a tool at the intersection of two major AI subfields, PDDLGym promises to foster closer integration of machine learning and symbolic AI.

