Improving the Generalizability of Robot Assembly Tasks Learned from Demonstration via CNN-based Segmentation

Inigo Iturrate, Etienne Roberge, Esben Hallundbak Ostergaard, Vincent Duchaine, Thiusius Rajeeth Savarimuthu

IEEE International Conference on Automation Science and Engineering 2019

Abstract: Kinesthetic teaching and Dynamic Movement Primitives (DMPs) enable fast and adaptable learning of robot tasks from a human demonstration. A task encoded as a dynamic movement primitive can be reused with a different goal position, albeit with a resulting distortion of the approach trajectory with respect to the original task. While this is sufficient for some robotic applications, the accuracy requirements for assembly tasks in an industrial context, where tolerances are tight and workpieces are small, are much higher. In such a context, it is also preferable to keep the number of demonstrations and external sensors low. Our approach relies on a single demonstration and a single force-torque sensor at the robot tool. We use a Convolutional Neural Network (CNN) trained on the force-torque sensor data to segment the task into separate movement primitives for the different phases: pickup – approach – insertion – retraction. This allows us to achieve better positional accuracy when generalizing the task primitives to new targets. To the best of our knowledge, we are the first to use a CNN as a segmentation tool to improve the generalization performance of DMPs.
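
To make the segmentation idea concrete, the sketch below shows a minimal 1-D CNN that labels fixed-length windows of 6-axis force-torque data with one of the four task phases. This is not the authors' architecture; the layer sizes, window length, class names, and module name are illustrative assumptions.

```python
# Minimal sketch, assuming a window-classification formulation: a 1-D CNN
# maps a window of 6-channel force-torque samples to one of four phase
# labels. All hyperparameters below are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

PHASES = ["pickup", "approach", "insertion", "retraction"]

class ForceTorquePhaseCNN(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = len(PHASES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one feature vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 6 force-torque channels, window_length)
        return self.classifier(self.features(x).squeeze(-1))

if __name__ == "__main__":
    model = ForceTorquePhaseCNN()
    window = torch.randn(1, 6, 200)  # one 200-sample F/T window (assumed length)
    phase = PHASES[model(window).argmax(dim=1).item()]
    print("predicted phase:", phase)
```

In such a scheme, running the classifier over a sliding window of the demonstration yields phase boundaries, and each resulting segment can then be encoded as its own DMP and generalized to a new goal independently.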