Learning to grasp unknown objects in robotic manipulation
Robotics and Mechatronics Engineering
Grasping unfamiliar objects (objects unseen during training) with limited prior knowledge is a challenging task in robotic manipulation. Recent solutions typically require predefined information about target objects, task-specific training data, or massive experience data and time-consuming training to achieve usable generalization. This paper introduces a robotic grasping strategy based on model-free deep reinforcement learning, named Deep Reinforcement Grasp Policy. The developed system demands minimal training time and only a limited set of simple objects in simulation, and it generalizes efficiently to novel objects in real-world scenarios without requiring any prior object knowledge or task-specific training data. Our scalable visual grasping system is an entirely self-learning approach. The model trains end-to-end policies (mapping visual observations directly to decision-making) to seek the optimal grasp strategy. The perception network employs a convolutional neural network that maps visual observations to grasp actions as dense pixel-wise Q-values, which represent the location and orientation of a primitive action executed by the robot. In simulation and physical experiments, a six-DOF robot manipulator with a two-finger gripper is used to validate the developed method. The empirical results demonstrate successful grasping based on only minimal prior knowledge: a few hours of simulated training with simple objects.
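To illustrate the dense pixel-wise Q-value idea described above, the following is a minimal sketch of how a grasp could be selected from such a map. The shape `(num_rotations, H, W)`, the 180-degree rotation range, and the function `select_grasp` are illustrative assumptions, not details taken from the paper; the CNN itself is replaced here by a random array standing in for the network output.

```python
import numpy as np

def select_grasp(q_map: np.ndarray):
    """Return (row, col, angle_deg) of the highest-Q grasp.

    q_map: hypothetical network output of shape (num_rotations, H, W),
    one Q-value per pixel per discretized gripper rotation.
    """
    num_rotations, h, w = q_map.shape
    # Argmax over the whole map picks both the pixel location and the
    # rotation bin of the best-scoring primitive grasp action.
    rot_idx, row, col = np.unravel_index(int(np.argmax(q_map)), q_map.shape)
    # Rotations assumed evenly spaced over 180 degrees (parallel-jaw symmetry).
    angle_deg = 180.0 * rot_idx / num_rotations
    return int(row), int(col), angle_deg

# Toy usage with a random Q-map standing in for the CNN prediction.
rng = np.random.default_rng(0)
q_map = rng.random((16, 224, 224))   # 16 rotation bins, 224x224 pixels
row, col, angle = select_grasp(q_map)
```

In such systems, the selected pixel is typically back-projected through the camera model to a 3-D grasp point, and the angle sets the gripper orientation before the primitive action is executed.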
Intelligent Service Robotics