
Dynamic object goal pushing with mobile manipulators through model-free constrained reinforcement learning

Published 3 Feb 2025 in cs.RO, cs.LG, cs.SY, and eess.SY | arXiv:2502.01546v1

Abstract: Non-prehensile pushing to move and reorient objects to a goal is a versatile loco-manipulation skill. In the real world, the object's physical properties and friction with the floor contain significant uncertainties, which makes the task challenging for a mobile manipulator. In this paper, we develop a learning-based controller for a mobile manipulator to move an unknown object to a desired position and yaw orientation through a sequence of pushing actions. The proposed controller for the robotic arm and the mobile base motion is trained using a constrained Reinforcement Learning (RL) formulation. We demonstrate its capability in experiments with a quadrupedal robot equipped with an arm. The learned policy achieves a success rate of 91.35% in simulation and at least 80% on hardware in challenging scenarios. Through our extensive hardware experiments, we show that the approach demonstrates high robustness against unknown objects of different masses, materials, sizes, and shapes. It reactively discovers the pushing location and direction, thus achieving contact-rich behavior while observing only the pose of the object. Additionally, we demonstrate the adaptive behavior of the learned policy towards preventing the object from toppling.

Summary

  • The paper introduces a learning-based controller using constrained reinforcement learning to achieve precise non-prehensile object pushing.
  • It trains the policy in simulation with domain randomization, reaching a 91.35% success rate in simulation and at least 80% on hardware despite variable object properties.
  • The method effectively manages constraints such as actuator limits and collision avoidance, reducing object toppling during dynamic manipulation.

Dynamic Object Goal Pushing with Mobile Manipulators Through Model-Free Constrained Reinforcement Learning

The paper introduces a method that enables mobile manipulators to push unknown objects to designated target positions and orientations using reinforcement learning (RL). The task is challenging because object properties such as mass, friction, and floor contact conditions are uncertain, which complicates precise manipulation. The authors propose a learning-based controller for non-prehensile object manipulation, trained with a constrained RL formulation that handles these uncertain dynamics.

Methodology

The core contribution is an RL-based controller for a mobile manipulator, evaluated on a quadrupedal robot with an articulated arm performing non-prehensile manipulation tasks. The authors formulate and solve the problem with a state-of-the-art constrained RL algorithm, emphasizing minimal reward engineering while enforcing operational constraints such as arm actuator limits and collision avoidance.
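The summary does not reproduce the authors' constrained RL algorithm. A common scheme for this problem class (a sketch under that assumption, not the paper's exact method) is a Lagrangian formulation: each constraint cost gets a multiplier that penalizes the reward, and the multipliers are adapted by dual gradient ascent so pressure rises on constraints whose average cost exceeds its limit:

```python
import numpy as np

def lagrangian_objective(reward, costs, lambdas):
    """Scalarize reward and per-step constraint costs into one RL objective.

    costs[i] > 0 indicates constraint i (e.g. an actuator limit or a
    collision-proximity margin) is violated at this step.
    """
    return reward - float(np.dot(lambdas, np.maximum(costs, 0.0)))

def update_multipliers(lambdas, avg_costs, limits, lr=0.05):
    """Dual ascent on the multipliers: increase a multiplier while its
    constraint's average episode cost exceeds the allowed limit, decrease
    it otherwise, keeping every multiplier non-negative."""
    gap = np.asarray(avg_costs) - np.asarray(limits)
    return np.maximum(np.asarray(lambdas) + lr * gap, 0.0)

# Toy usage: constraint 0 is currently violated, constraint 1 is satisfied.
lams = update_multipliers([0.5, 0.5], avg_costs=[0.3, 0.0], limits=[0.1, 0.1])
```

The appeal of this structure, consistent with the paper's "minimal reward engineering" claim, is that constraint pressure is tuned automatically rather than hand-weighted into the reward.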

Training was conducted in a simulated environment with domain randomization to ensure the policy's robustness to real-world variability. This preparation included variations in object properties and friction levels. Observations consisted only of the object's pose, compelling the RL policy to infer other critical physical attributes implicitly.
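The randomization described above can be sketched as per-episode sampling of physical parameters, with the policy observing only the object's planar pose. The parameter names and ranges below are illustrative assumptions, not values reported in the paper:

```python
import random

def randomize_object(rng=random):
    """Sample object and floor properties for one training episode so the
    policy cannot overfit to a single object (ranges are illustrative)."""
    return {
        "mass_kg": rng.uniform(0.5, 8.0),
        "friction": rng.uniform(0.2, 1.0),        # object-floor friction coeff.
        "size_m": rng.uniform(0.2, 0.8),          # characteristic length
        "com_offset_m": rng.uniform(-0.05, 0.05), # shifted center of mass
    }

def observe(object_pose):
    """The policy sees only the object's planar pose (x, y, yaw); mass,
    friction, and the like must be inferred implicitly from how the
    object responds to pushes."""
    x, y, yaw = object_pose
    return (x, y, yaw)
```

Keeping the observation this minimal is what forces the learned policy to discover contact locations and push directions reactively rather than relying on privileged physical parameters.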

Key Contributions

The paper makes several notable contributions in robotic control and learning, including:

  1. Learning-based Controller Development: A novel controller design for mobile manipulators that achieves planar object manipulation with a fundamental focus on maintaining object balance.
  2. Robustness to Unknown Objects: Demonstrated adaptability in dealing with unknown object properties, including mass, size, shape, and surface material, through dynamic contact adjustment and push direction regulation.
  3. Avoidance of Object Toppling: By lowering the pushing point on objects prone to toppling, the controller maintained object balance and completed tasks across diverse scenarios without failures caused by object imbalance.
  4. Empirical Validation: Hardware experiments validated the approach's efficacy, showing a success rate of at least 80% with various objects, coupled with a notable reduction in object toppling incidents.
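Since the goal is specified as a position and a yaw orientation, a natural success criterion (an assumption for illustration; the paper's exact tolerances are not given in this summary) compares planar position error and wrapped yaw error against thresholds:

```python
import math

def pose_error(obj_pose, goal_pose):
    """Return (position error in m, absolute yaw error in rad) between two
    planar poses (x, y, yaw), with the yaw difference wrapped to [-pi, pi]."""
    dx = goal_pose[0] - obj_pose[0]
    dy = goal_pose[1] - obj_pose[1]
    dyaw = (goal_pose[2] - obj_pose[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy), abs(dyaw)

def at_goal(obj_pose, goal_pose, pos_tol=0.05, yaw_tol=0.1):
    """Hypothetical success check: both errors within tolerance."""
    dist, ang = pose_error(obj_pose, goal_pose)
    return dist <= pos_tol and ang <= yaw_tol
```

The yaw wrapping matters: without it, an object at +3.1 rad would appear ~6.2 rad away from a goal at -3.1 rad, even though the two orientations are nearly identical.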

Implications and Future Directions

This research holds practical implications, especially in industries where robotic manipulation of diverse objects is necessary, such as logistics and manufacturing. The demonstrated robustness and adaptability suggest that future robotic systems can perform more autonomously even in previously unencountered conditions.

On a theoretical level, this approach encourages further inquiry into constrained RL as a potent tool for robotic manipulation, where conventional techniques might fail due to the dynamic interplay of application constraints and unknown variables. Future research could explore the integration of sensory input for real-time feedback and improvement of decision-making processes in unstructured environments.

In summary, the work significantly advances the field of dynamic object manipulation through mobile manipulators. By addressing key challenges and showcasing effective solutions, this paper sets a foundation for future advancements in mobile manipulation and strengthens the case for broader use of RL in complex robotic tasks.
