Learn to grasp unknown objects in robotic manipulation

Department

Robotics and Mechatronics Engineering

Document Type

Article

Publication Date

9-1-2021

Abstract

Grasping unfamiliar objects (unseen during training) from limited prior knowledge is a challenging task in robotic manipulation. Recent solutions typically require predefined information about target objects, task-specific training data, or a large experience dataset and time-consuming training to achieve usable generalization. This paper introduces a robotic grasping strategy based on model-free deep reinforcement learning, named Deep Reinforcement Grasp Policy. The developed system demands minimal training time and only a few simple objects in simulation, yet generalizes efficiently to novel objects in real-world scenarios, without requiring any prior object knowledge or task-specific training data. Our scalable visual grasping system is an entirely self-learning approach. The model trains end-to-end policies (from visual observations directly to decision making) to seek the optimal grasp strategy. A perception network uses a convolutional neural network that maps visual observations to grasp actions as dense pixel-wise Q-values, which represent the location and orientation of a primitive action executed by the robot. In both simulation and physical experiments, a six-DOF robot manipulator with a two-finger gripper is used to validate the developed method. The empirical results demonstrate successful grasping based only on minimal prior knowledge: a few hours of simulated training with simple objects.
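The action-selection step the abstract describes, choosing where and how to grasp by taking the argmax over dense pixel-wise Q-values, can be sketched as follows. This is a minimal illustration only: the rotation discretization (16 bins over 180°) and the map shapes are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def select_grasp(q_maps, num_rotations=16):
    """Pick the best primitive grasp action from dense pixel-wise Q-values.

    q_maps: array of shape (num_rotations, H, W), where q_maps[r, y, x]
    is the estimated Q-value of grasping at pixel (x, y) with the gripper
    rotated by r * (180 / num_rotations) degrees (assumed discretization).
    Returns the pixel location, gripper angle in degrees, and the Q-value.
    """
    # Flat argmax over all (rotation, row, column) triples, then recover indices.
    r, y, x = np.unravel_index(np.argmax(q_maps), q_maps.shape)
    angle_deg = r * (180.0 / num_rotations)
    return (int(x), int(y)), angle_deg, float(q_maps[r, y, x])

# Toy usage: plant a known maximum and recover it.
q = np.zeros((16, 64, 64))
q[3, 20, 40] = 1.0
(px, py), angle, value = select_grasp(q)
# px == 40, py == 20, angle == 33.75, value == 1.0
```

In a full system the `q_maps` tensor would come from the perception network's forward pass on the visual observation; here it is a hand-built array so the selection logic can be run standalone.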

Journal Title

Intelligent Service Robotics

Journal ISSN

1861-2776

Volume

14

Issue

4

First Page

571

Last Page

582

Digital Object Identifier (DOI)

10.1007/s11370-021-00380-9
