Progressive Compositionality in Text-to-Image Generative Models

Xu Han1, Linghao Jin2, Xiaofeng Liu1, Paul Pu Liang3
1Yale University 2University of Southern California 3MIT

Abstract

Despite the impressive text-to-image (T2I) synthesis capabilities of diffusion models, they often struggle to understand compositional relationships between objects and attributes, especially in complex settings. Existing solutions have tackled these challenges by optimizing the cross-attention mechanism or learning from caption pairs with minimal semantic changes. However, can we generate high-quality, complex contrastive images that diffusion models can directly discriminate based on visual representations?

In this work, we leverage large language models (LLMs) to compose realistic, complex scenarios and harness Visual Question Answering (VQA) systems alongside diffusion models to automatically curate a contrastive dataset, ConPair, consisting of 15k pairs of high-quality contrastive images. These pairs feature minimal visual discrepancies and cover a wide range of attribute categories, especially complex and natural scenarios. To learn effectively from these error cases, i.e., hard negative images, we propose EvoGen, a new multi-stage curriculum for contrastive learning of diffusion models. Through extensive experiments across a wide range of compositional scenarios, we showcase the effectiveness of our proposed framework on compositional T2I benchmarks.

Dataset


Dataset construction

To address attribute binding and compositional generation, we propose a new high-quality contrastive dataset, ConPair. Each sample in ConPair consists of a pair of images associated with a positive caption. We construct captions with GPT-4, covering eight categories of compositionality: color, shape, texture, counting, spatial relationship, non-spatial relationship, scene, and complex.
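As a rough illustration of this step (a sketch only: the prompt template, model settings, and output format below are our assumptions, not the exact ones used to build ConPair), caption pairs for each category can be requested from GPT-4 along these lines:

# Hypothetical sketch of caption-pair generation with GPT-4; the prompt
# wording and output format are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = [
    "color", "shape", "texture", "counting",
    "spatial relationship", "non-spatial relationship", "scene", "complex",
]

def generate_caption_pair(category: str) -> str:
    """Ask GPT-4 for a positive caption and a minimally changed negative caption."""
    prompt = (
        f"Write a short image caption that tests the '{category}' aspect of "
        "compositionality, then a negative caption that changes only that "
        "attribute or relation while keeping everything else the same. "
        "Format: POSITIVE: ... NEGATIVE: ..."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_caption_pair("color"))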

Our key idea is to generate contrastive images that are minimally different in visual representations. By “minimal,” we mean that, aside from the altered attribute or relation, all other elements of the images remain consistent or similar. In practice, we source negative image samples in two ways: 1) generating negative images by feeding negative prompts to diffusion models; 2) editing the positive image with editing instructions. To generate images faithful to the text description, we propose to decompose each text prompt into a set of questions using an LLM and leverage the capabilities of VQA models to rank candidate images by their alignment score, as illustrated in the figure below.

[Figure: ConPair data construction pipeline, in which an LLM decomposes each prompt into questions and a VQA model ranks candidate images by alignment score.]
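A minimal sketch of this ranking step is shown below; the specific VQA model and the yes-ratio scoring rule are illustrative assumptions rather than the exact choices used for ConPair.

# Sketch of VQA-based candidate ranking under the assumptions stated above.
from PIL import Image
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

def alignment_score(image: Image.Image, questions: list[str]) -> float:
    """Fraction of decomposed yes/no questions the VQA model answers 'yes'."""
    hits = 0
    for question in questions:
        answer = vqa(image=image, question=question)[0]["answer"].lower()
        hits += answer.startswith("yes")
    return hits / len(questions)

def rank_candidates(images: list[Image.Image], questions: list[str]) -> list[Image.Image]:
    """Sort candidate images by how well they satisfy the decomposed prompt."""
    return sorted(images, key=lambda img: alignment_score(img, questions), reverse=True)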

Curriculum contrastive learning

A common challenge in training models with data of mixed difficulty is that it can overwhelm the model and lead to suboptimal learning (Bengio et al., 2009). Therefore, we divide the dataset into three stages and introduce a simple but effective multi-stage fine-tuning paradigm, allowing the model to gradually progress from simpler compositional tasks to more complex ones.
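Schematically, the staged fine-tuning can be pictured as follows (a sketch under our own assumptions: the stage-to-category split and the train_stage callback are illustrative, not the paper's exact configuration):

# Schematic multi-stage curriculum; stage boundaries are assumed for illustration.
STAGES = {
    1: ["color", "shape", "texture"],                # single-object attributes
    2: ["counting", "spatial relationship",
        "non-spatial relationship", "scene"],        # multi-object composition
    3: ["complex"],                                   # complex natural scenes
}

def curriculum_finetune(model, dataset, train_stage):
    """Fine-tune sequentially, moving from easy to hard compositional categories."""
    for stage, categories in STAGES.items():
        subset = [ex for ex in dataset if ex["category"] in categories]
        train_stage(model, subset)  # one fine-tuning pass with the contrastive loss below
    return model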

We design a contrastive loss to maximize the similarity between the positive image and its corresponding text prompt, while minimizing the similarity between the negative image and the same text prompt.
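One plausible instantiation of this objective (our notation; the exact formulation in the paper may differ) scores each image-text pair with a similarity s and contrasts the positive image against its hard negative:

% Sketch of the contrastive objective described above; notation is ours.
% s(x, c): similarity between image x and text prompt c; \tau: temperature
% x^{+} / x^{-}: positive image and its minimally different negative for prompt c
\mathcal{L}_{\mathrm{con}} = -\log
  \frac{\exp\big(s(x^{+}, c)/\tau\big)}
       {\exp\big(s(x^{+}, c)/\tau\big) + \exp\big(s(x^{-}, c)/\tau\big)}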

Qualitative Results

Here, we show examples of how the model performs on the same objects, “bear” and “cat,” as we gradually increase complexity by introducing variations in attributes, counting, scene settings, interactions between objects, and spatial relationships.

[Figure: qualitative examples with “bear” and “cat” under increasing compositional complexity.]

BibTeX

@article{han2024progressivecompositionalitytexttoimagegenerative,
      title={Progressive Compositionality In Text-to-Image Generative Models}, 
      author={Xu Han and Linghao Jin and Xiaofeng Liu and Paul Pu Liang},
      journal={arXiv preprint arXiv:2410.16719},
      year={2024},
}

📭 Contact

If you have any comments or questions, feel free to contact Xu Han (xu.han.xh365@yale.edu) or Linghao Jin (linghaoj@usc.edu).