Adversarial learning of general transformations for data augmentation

Saypraseuth Mounsaveng, David Vazquez, Ismail Ben Ayed, Marco Pedersoli

arXiv 2019

Abstract: Data augmentation (DA) is fundamental to prevent large convolutional neural networks from overfitting, especially with a limited number of training samples. In images, DA is usually based on heuristic transformations such as flips, crops, rotations, or color changes. Instead of using predefined transformations, DA can be learned directly from the data. Existing methods either learn how to combine a set of predefined transformations or train a generative model used for DA. Our work combines the advantages of both approaches. It learns to transform images with a spatial transformer network combined with an encoder-decoder architecture in a single, fully differentiable end-to-end network. Both parts are trained adversarially, so that the transformed images still belong to the same class but are new, harder samples for the classifier. Our experiments show that, when training an image classifier, our approach outperforms previous generative data augmentation methods and is comparable to methods using predefined transformations, which require prior knowledge.
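The core mechanism described in the abstract, a differentiable spatial transformer whose warp parameters can be trained adversarially against the classifier, can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' actual architecture: the tiny localization network, the classifier, and all layer sizes are assumptions for the sake of a runnable example, and the encoder-decoder branch is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineAugmenter(nn.Module):
    """Hypothetical sketch of a learnable augmenter: a small localization
    network predicts affine parameters, and a spatial transformer warps
    the input image. The whole mapping is differentiable, so gradients
    can flow from the classifier loss back into the augmenter."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 6),  # 6 affine parameters (2x3 matrix)
        )
        # Initialize to the identity transform so training starts
        # from unmodified images.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, list(x.size()), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Toy batch of 28x28 grayscale images (e.g. MNIST-shaped).
x = torch.randn(4, 1, 28, 28)
aug = AffineAugmenter()
warped = aug(x)  # same shape as x; identity warp at initialization

# Adversarial coupling (sketch): the classifier minimizes the loss on
# augmented images, while the augmenter is updated to *increase* it
# (e.g. by negating its gradients), producing harder but label-preserving
# samples. A constraint keeping the warp close to identity would be
# needed in practice to preserve the class.
clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
labels = torch.randint(0, 10, (4,))
loss = F.cross_entropy(clf(aug(x)), labels)
loss.backward()  # gradients reach both clf and aug.loc parameters
```

Because the spatial transformer is differentiable in its parameters, a single backward pass supplies gradients to both players; the opposing objectives are realized simply by the sign of the update applied to each.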