Real-time Motion Planning for Robotic Teleoperation Using Dynamic-goal Deep Reinforcement Learning

Kamali, Kaveh and Bonev, Ilian A. and Desrosiers, Christian

Proceedings – 2020 17th Conference on Computer and Robot Vision (CRV 2020), 2020

Abstract: We propose a Dynamic-goal Deep Reinforcement Learning (DGDRL) method to address the problem of robot arm motion planning in telemanipulation applications. This method intuitively maps human hand motions to a robot arm in real time, while avoiding collisions, joint limits, and singularities. We further propose a novel hardware setup, based on the HTC VIVE VR system, that enables users to smoothly control the robot tool position and orientation with hand motions, while monitoring its movements in a 3D virtual reality environment. A VIVE controller captures 6D hand movements and provides them as reference trajectories to a deep neural policy network that controls the robot's joint movements. Our DGDRL method leverages the state-of-the-art Proximal Policy Optimization (PPO) algorithm for deep reinforcement learning to train the policy network on the robot joint values and reference trajectory observed at each iteration. Since training the network on a real robot is time-consuming and unsafe, we developed a simulation environment called RobotPath, which provides kinematic modeling, collision analysis, and a 3D VR graphical simulation of industrial robots. The deep neural network trained using RobotPath is then deployed on a physical robot (ABB IRB 120) to evaluate its performance. We show that the policies trained in the simulation environment can be successfully used for trajectory planning on a real robot. The code, data, and a video presenting our experiments are available at https://github.com/kavehkamali/ppoRobotPath.
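The abstract describes a policy whose observation combines the current joint values with a moving 6D reference pose, trained with PPO in simulation. The sketch below illustrates that dynamic-goal setup in a minimal, hypothetical form; it is not the authors' RobotPath environment or their reward function. The environment class, its toy forward kinematics, and the drifting goal are illustrative assumptions, and the training call uses the generic Stable-Baselines3 PPO implementation rather than the paper's code.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class DynamicGoalReachEnv(gym.Env):
    """Hypothetical stand-in for a dynamic-goal reaching task.

    Observation: current joint values (6) concatenated with a 6D reference
    pose (the "dynamic goal", standing in for the VIVE hand pose).
    Action: incremental joint displacements.
    """

    def __init__(self):
        super().__init__()
        self.n_joints = 6
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(self.n_joints + 6,), dtype=np.float32)
        self.action_space = spaces.Box(
            -1.0, 1.0, shape=(self.n_joints,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.q = np.zeros(self.n_joints, dtype=np.float32)
        self.goal = self.np_random.uniform(-1, 1, size=6).astype(np.float32)
        return np.concatenate([self.q, self.goal]), {}

    def step(self, action):
        # Apply a small joint increment, clipped to toy joint limits.
        self.q = np.clip(self.q + 0.05 * action.astype(np.float32), -np.pi, np.pi)
        # Reward: negative distance between the current "pose" and the goal.
        # A real system would use the IRB 120 kinematic model and add
        # collision and joint-limit penalties here.
        reward = -float(np.linalg.norm(self._fk(self.q) - self.goal))
        # The goal drifts each step, mimicking a continuously moving hand.
        self.goal = (self.goal +
                     self.np_random.normal(0.0, 0.01, size=6)).astype(np.float32)
        obs = np.concatenate([self.q, self.goal]).astype(np.float32)
        return obs, reward, False, False, {}

    def _fk(self, q):
        # Toy surrogate for forward kinematics (illustration only).
        return np.tanh(q)


if __name__ == "__main__":
    env = DynamicGoalReachEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # short run just to exercise the loop
```

The key design point mirrored here is that the goal is part of the observation and changes between steps, so the policy learns to track a moving reference rather than a fixed target; the actual training, reward shaping, and safety constraints are in the authors' repository linked above.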