Generative Adversarial Networks have recently demonstrated the capability to synthesize
photo-realistic real-world images. However, they still struggle to offer high controllability over the
output image, even when several constraints are provided as input. In this work, we present a
Recursive Text-Image-Conditioned GAN (aRTIC GAN), a novel approach for multi-conditional image
generation under concurrent spatial and text constraints. It employs a few line drawings and short
descriptions to provide informative yet human-friendly conditioning. The proposed scenario is based
on accessible constraints with high degrees of freedom: sketches are easy to draw and add strong
restrictions on the generated objects, such as their orientation or main physical characteristics. Text,
for its part, is so common and expressive that it easily conveys information otherwise impossible to
provide with minimal illustrations, such as the color of object components, their shades, and so on. Our aRTIC GAN
is suitable for the sequential generation of multiple objects due to its compact design. In fact, the
algorithm exploits the previously generated image in conjunction with the sketch and the text caption,
resulting in a recurrent approach (sketched below). We developed three network blocks to tackle the
fundamental problems of capturing the semantic meaning of captions and of handling the trade-off
between smoothing grid-pattern artifacts and preserving visual detail. Furthermore, a compact three-task discriminator
(covering global, local and textual aspects) was developed to preserve a lightweight and robust
architecture (also sketched below). Extensive experiments prove the validity of aRTIC GAN and show
that the combined use of sketch and description allows us to avoid explicit object labeling.
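As an illustration of the recurrent conditioning loop described in the abstract, the following PyTorch-style sketch shows a generator step that consumes the previously generated image, an object sketch, and a caption embedding to produce the updated image. Module names, layer sizes, and tensor shapes are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class RecurrentSketchTextGenerator(nn.Module):
    """Hypothetical generator: previous canvas + sketch + caption embedding -> updated canvas."""
    def __init__(self, text_dim: int = 256, img_channels: int = 3):
        super().__init__()
        # Fuse the previous image (3 ch), the sketch (1 ch) and a spatially tiled caption embedding.
        self.fuse = nn.Conv2d(img_channels + 1 + text_dim, 64, kernel_size=3, padding=1)
        self.out = nn.Conv2d(64, img_channels, kernel_size=3, padding=1)

    def forward(self, prev_img, sketch, caption_emb):
        b, _, h, w = prev_img.shape
        text_map = caption_emb[:, :, None, None].expand(b, -1, h, w)  # tile caption over space
        x = torch.cat([prev_img, sketch, text_map], dim=1)
        return torch.tanh(self.out(torch.relu(self.fuse(x))))

# Recurrent usage: each object is generated on top of the previous output.
gen = RecurrentSketchTextGenerator()
canvas = torch.zeros(1, 3, 128, 128)                          # empty starting canvas
objects = [(torch.rand(1, 1, 128, 128), torch.rand(1, 256))]  # (sketch, caption embedding) pairs
for sketch, caption_emb in objects:
    canvas = gen(canvas, sketch, caption_emb)                 # condition on the previous image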
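Similarly, the compact three-task discriminator could be organized as a shared backbone feeding a global real/fake head, a patch-level local head, and an image-text matching score; all layer choices below are illustrative assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn

class ThreeTaskDiscriminator(nn.Module):
    """Hypothetical discriminator with global, local and textual heads over shared features."""
    def __init__(self, text_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.global_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))
        self.local_head = nn.Conv2d(128, 1, kernel_size=1)  # per-patch real/fake map
        self.text_proj = nn.Linear(text_dim, 128)           # caption projection for matching

    def forward(self, img, caption_emb):
        feat = self.backbone(img)              # shared features
        global_score = self.global_head(feat)  # whole-image realism
        local_map = self.local_head(feat)      # local realism per patch
        pooled = feat.mean(dim=(2, 3))
        text_score = (pooled * self.text_proj(caption_emb)).sum(dim=1, keepdim=True)  # image-text consistency
        return global_score, local_map, text_score

# Usage on a generated canvas and its caption embedding.
disc = ThreeTaskDiscriminator()
global_score, local_map, text_score = disc(torch.rand(1, 3, 128, 128), torch.rand(1, 256))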
Publication details
2022, ELECTRONICS, Pages 1737 (volume: 11)
aRTIC GAN: A Recursive Text-Image-Conditioned GAN (01a Journal article)
Alati Edoardo, Caracciolo Carlo Alberto, Costa Marco, Sanzari Marta, Russo Paolo, Amerini Irene