Keep Drawing It: Iterative language-based image generation and editing

El-Nouby, Alaaeldin and Sharma, Shikhar and Schulz, Hannes and Hjelm, Devon and El Asri, Layla and Ebrahimi Kahou, Samira and Bengio, Yoshua and Taylor, Graham W.

NIPS 2018 Workshop on Visually Grounded Interaction and Language (ViGIL)

Abstract: Conditional text-to-image generation approaches commonly focus on generating a single image in a single step. One practical extension beyond one-step generation is an interactive system that generates an image iteratively, conditioned on ongoing linguistic input/feedback. This is significantly more challenging, as such a system must understand and keep track of the ongoing context and history. In this work, we present a recurrent image generation model which takes into account both the generated output up to the current step as well as all past instructions for generation. We show that our model is able to generate the background, add new objects, apply simple transformations to existing objects, and correct previous mistakes. We believe our approach is an important step toward interactive generation.
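
To make the iterative conditioning concrete, below is a minimal PyTorch sketch (not the authors' code) of the generation loop the abstract describes: a recurrent state accumulates all instructions seen so far, and each new image is produced from that state together with the previously generated canvas. All module names (IterativeDrawer, TextEncoder-style GRU, canvas encoder) and sizes are illustrative assumptions; the paper's actual model also involves adversarial training, which is omitted here.

import torch
import torch.nn as nn

class IterativeDrawer(nn.Module):
    """Sketch of iterative, language-conditioned image generation."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, img_feat=256):
        super().__init__()
        self.hid_dim = hid_dim
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Encodes a single instruction into a fixed-size vector.
        self.text_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Recurrent state that folds in the full instruction history.
        self.history = nn.GRUCell(hid_dim, hid_dim)
        # Encodes the previously generated canvas (64x64 RGB assumed).
        self.canvas_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, img_feat),
        )
        # Decodes (history state, canvas features) into the next canvas.
        self.gen = nn.Sequential(
            nn.Linear(hid_dim + img_feat, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, instructions, canvas):
        """instructions: list of LongTensor (B, T_i); canvas: (B, 3, 64, 64)."""
        h = torch.zeros(canvas.size(0), self.hid_dim, device=canvas.device)
        for tokens in instructions:            # one generation step per turn
            _, t = self.text_enc(self.embed(tokens))
            h = self.history(t.squeeze(0), h)  # update instruction history
            c = self.canvas_enc(canvas)        # look at current canvas
            canvas = self.gen(torch.cat([h, c], dim=1))
        return canvas

# Hypothetical usage: three instruction turns refine an initially blank canvas.
model = IterativeDrawer(vocab_size=1000)
turns = [torch.randint(0, 1000, (2, 7)) for _ in range(3)]
blank = torch.zeros(2, 3, 64, 64)
final_image = model(turns, blank)   # shape (2, 3, 64, 64)

Keeping a single recurrent history vector, rather than re-encoding only the latest instruction, is what lets such a loop handle corrections that refer back to earlier turns.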