
Bimanual crop manipulation for human-inspired robotic harvesting

Published 13 Sep 2022 in cs.RO | (2209.06074v1)

Abstract: Most existing robotic harvesters use a unimanual approach: a single arm grasps the crop and detaches it, either via a detachment movement or by cutting its stem with a specially designed gripper/cutter end-effector. However, such unimanual solutions cannot be applied to sensitive crops in cluttered environments, such as grapes in a vineyard, where obstacles may occlude the stem and leave no space for placing the cutter. In such cases, the solution requires a bimanual robot that visually unveils the stem and manipulates the grasped crop to create cutting affordances, similar to the practice used by humans. In this work, a dual-arm coordinated motion control methodology for reaching a stem pre-cut state is proposed. The camera-equipped arm carrying the cutter reaches the stem and unveils it as much as possible, while the second arm moves the grasped crop toward the surrounding free space to facilitate stem cutting. Lab experiments on a mock-up vine setup with a plastic grape cluster, involving two UR5e robotic arms and a RealSense D415 camera, evaluate the proposed methodology.

Citations (22)

Summary

  • The paper introduces a bimanual robotic system that mimics human harvesting by coordinating a camera arm for stem unveiling and a grasping arm for crop manipulation.
  • The method uses advanced point cloud processing and velocity control to optimize camera centering and obstacle avoidance, ensuring precise stem pre-cut positioning.
  • Experimental evaluations with UR5e arms validate the approach, demonstrating successful stem exposure and force-controlled crop manipulation in lab setups.

Bimanual Crop Manipulation for Robotic Harvesting

This paper introduces a bimanual robotic system designed for harvesting crops, specifically addressing the challenges posed by sensitive crops and cluttered environments, such as vineyards. The core contribution is a dual-arm coordinated motion control methodology that enables the robot to reach a stem pre-cut state, mimicking the actions of human harvesters.

Problem Formulation

The system consists of two robotic arms: a grasping arm with N_g degrees of freedom (DOF) for securing and manipulating the crop, and a camera arm with N_c DOF equipped with an RGB-D camera and a cutting tool. The camera arm's task is to approach the stem while maximizing its visibility, whereas the grasping arm manipulates the crop to create space for the cutting tool. The control objectives are:

  1. Camera Arm:
    • Reaching and centering within a region of interest (ROI) surrounding the stem.
    • Unveiling the stem by maximizing the number of visible points from the stem's point cloud.
  2. Grasping Arm:
    • Maximizing free space around the stem to facilitate cutter placement by applying force position control.

Proposed Control Methodology

The control methodology involves a velocity-controlled bimanual robot, where the reference velocity control signal V_r is designed to coordinate the motion of both arms. The approach relies heavily on processing the scene's point cloud to estimate critical point positions and identify obstacles.

Scene Point Cloud Processing

The point cloud data is processed to classify points into several subsets (Figure 1):

  • \mathcal{W}: Whole scene point cloud.
  • \mathcal{S}: Stem point cloud.
  • \mathcal{T}: Top cluster of the stem point cloud.
  • \mathcal{B}: Bottom cluster of the stem point cloud.
  • \mathcal{O}: Obstacle point cloud.
  • \mathcal{O}_{pr}: Projected obstacle point cloud.
  • \mathcal{F}: Free-space point cloud.

Figure 1: Scene's point cloud \mathcal{W} containing an obstacle subset \mathcal{O} and stem subset \mathcal{S}.

The stem's base p_{sb} is estimated using PCA on the clustered stem point cloud. Obstacles are identified within a sphere centered at p_{sb}. Free space \mathcal{F} is determined by projecting obstacle points onto a sphere and sampling the sphere's surface using the Fibonacci lattice methodology. The point p_{gd} is then calculated by solving an optimization problem to maximize the distance from surrounding obstacles.
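The free-space sampling step above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: obstacle points are projected onto a sphere of radius r around p_{sb}, the sphere surface is sampled with a Fibonacci lattice, and the candidate farthest from all projected obstacles is taken as p_{gd}. Function names and parameters are assumptions.

```python
import numpy as np

def fibonacci_sphere(n):
    """Sample n roughly uniform unit directions on a sphere (Fibonacci lattice)."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n          # z descends through (-1, 1)
    r = np.sqrt(1.0 - z * z)               # radius of each latitude circle
    theta = golden_angle * i
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

def free_space_point(p_sb, obstacles, radius, n_samples=500):
    """Return the lattice point on the sphere around p_sb that maximizes the
    minimum distance to the obstacle points projected onto that sphere."""
    candidates = p_sb + radius * fibonacci_sphere(n_samples)
    # project obstacle points radially onto the sphere centered at p_sb
    v = obstacles - p_sb
    proj = p_sb + radius * v / np.linalg.norm(v, axis=1, keepdims=True)
    # distance from every candidate to its nearest projected obstacle
    d = np.linalg.norm(candidates[:, None, :] - proj[None, :, :], axis=2)
    return candidates[np.argmax(d.min(axis=1))]
```

With a single obstacle, the selected point lies roughly on the opposite side of the sphere, which matches the intuition of maximizing clearance for the cutter.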

Camera Arm Control

The camera arm's end-effector reference velocity V_c is a superposition of two control signals:

  1. Reaching Reference Velocity V_{cr}: Achieves reaching and centering by converging the camera position p_c to a manifold \Omega and aligning the camera's Z axis with the ROI center.
  2. Unveiling Reference Velocity V_{cu}: Maximizes the visible part of the object of interest (OOI) using a barrier artificial potential field around each obstacle, inducing a virtual repulsive velocity u_{j,k} that rotates the camera to increase visibility.

    Figure 2: Camera arm control methodology visualizing the initial state and the desired state.
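The superposition V_c = V_{cr} + V_{cu} can be sketched as below. This is a simplified translational analogue under assumed gains: the paper's unveiling term acts on camera orientation, whereas here a barrier-style repulsion simply pushes the camera position away from nearby obstacles. All names and gains are illustrative.

```python
import numpy as np

def reaching_velocity(p_c, p_roi, k=1.0):
    """Drive the camera position p_c toward the ROI center p_roi."""
    return k * (p_roi - p_c)

def repulsive_velocity(p_c, obstacles, k=0.05, d0=0.3):
    """Barrier-style repulsion: each obstacle closer than the influence
    distance d0 pushes the camera away, growing unbounded as d -> 0."""
    v = np.zeros(3)
    for p_o in obstacles:
        diff = p_c - p_o
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:
            v += k * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return v

def camera_reference_velocity(p_c, p_roi, obstacles):
    """Superpose the reaching and (simplified) unveiling terms."""
    return reaching_velocity(p_c, p_roi) + repulsive_velocity(p_c, obstacles)
```

Far from all obstacles the repulsive term vanishes and the camera converges straight to the ROI; an obstacle inside the influence radius deflects or slows the approach.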

Grasping Arm Control

The grasping arm's reference velocity V_g is designed as a force/position controller:

  • A position control signal v_p minimizes the position error projected onto the subspace orthogonal to n_c.
  • A force control signal v_f applies force along n_c to stretch the stem.
  • An orientation control signal v_{g\omega} aligns the arm's end-effector with the stretched stem.

    Figure 3: Grasping arm force/position and orientation control, where force control is applied along n_c, position control is applied in the subspace I_{3 \times 3} - n_c n_c^\intercal, and orientation control aligns n_c with y_g.
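The translational part of this split can be sketched as follows, assuming a measured wrench f_meas, a desired pulling force f_d, and a unit stem direction n_c; the projector I - n_c n_c^T confines position control to the subspace orthogonal to the force-controlled axis. Gains and names are illustrative, not the paper's values.

```python
import numpy as np

def grasping_arm_velocity(p_g, p_d, f_meas, f_d, n_c, k_p=1.0, k_f=0.01):
    """Force/position control split along and orthogonal to n_c:
      - v_f regulates the pulling force toward f_d along n_c,
      - v_p drives the position error in the subspace I - n_c n_c^T."""
    n_c = n_c / np.linalg.norm(n_c)
    P = np.eye(3) - np.outer(n_c, n_c)        # orthogonal-subspace projector
    v_p = k_p * P @ (p_d - p_g)               # position error, projected
    v_f = k_f * (f_d - n_c @ f_meas) * n_c    # force error along n_c
    return v_p + v_f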

Bimanual Motion Scheduling

The camera arm initiates the reaching/unveiling motion, and once certain thresholds are met, the grasping arm's motion is activated. The grasping arm's translational velocity is superimposed onto the camera arm's reference velocity to assist in avoiding stem occlusions.
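The scheduling logic above can be sketched as a simple gating function; the threshold names and values are assumptions for illustration, not the paper's actual conditions.

```python
import numpy as np

def schedule_velocities(v_c, v_g, stem_visibility, reach_error,
                        vis_thresh=0.8, reach_thresh=0.02):
    """Gate the grasping arm on reaching/unveiling thresholds and, once it
    is active, superimpose its translational velocity onto the camera
    reference so the camera keeps tracking the moving crop."""
    grasp_active = stem_visibility >= vis_thresh and reach_error <= reach_thresh
    v_g_cmd = v_g if grasp_active else np.zeros(3)
    v_c_cmd = v_c + v_g_cmd
    return v_c_cmd, v_g_cmd
```

Before the thresholds are met, only the camera arm moves; afterwards both arms move and the camera reference inherits the grasper's translation, which is what prevents the crop's motion from re-occluding the stem.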

Experimental Results

The proposed method was evaluated in a lab setup using two UR5e robotic arms, a RealSense D415 camera, and a mock-up vine with a plastic grape cluster (Figure 4).

Figure 4: (a) Lab setup, (b) Camera arm, (c) Grasping arm initial state.

Experimental results demonstrate successful reaching and centering (Figure 5), stem unveiling (Figure 6), and force/position control by the grasping arm (Figures 7 and 8).

Figure 5: Reaching with desired region radius r = 0.35 m shown as a red dashed line.

Figure 6: Unveiling of visible stem points with respect to the camera.

Figure 7: Position error.

Figure 8: Applied force with desired force magnitude fdf_d shown as a red dashed line.

The in-hand camera's viewpoint at different stages of the process illustrates the stem's unveiling and the creation of cutting affordances (Figure 9).

Figure 9: In-hand camera's viewpoint at the process start, bimanual motion's start, and end of the overall task.

Conclusion

The bimanual control methodology effectively enables a robot to reach a pre-cut state for crop stems by coordinating the motions of a camera arm and a grasping arm. Future work involves testing the proposed method in a real-world vineyard environment.
