
Abstract

Text-to-image (T2I) generation models have advanced significantly in recent years. However, interacting effectively with these models remains challenging for average users: it requires specialized prompt-engineering knowledge, and the models cannot perform multi-turn image generation, which hinders a dynamic and iterative creation process. Recent work has attempted to equip Multi-modal Large Language Models (MLLMs) with T2I models to turn users' natural language instructions into images. This extends the output modality of MLLMs, and the multi-turn generation quality of T2I models is enhanced thanks to the strong multi-modal comprehension ability of MLLMs. However, many of these systems struggle to identify the correct output modality and to generate coherent images as the number of output modalities increases and conversations grow deeper. We therefore propose DialogGen, an effective pipeline that aligns off-the-shelf MLLMs and T2I models to build a Multi-modal Interactive Dialogue System (MIDS) for multi-turn text-to-image generation. It is composed of drawing prompt alignment, careful training data curation, and error correction. Moreover, as the field of MIDS flourishes, comprehensive benchmarks are urgently needed to evaluate MIDS fairly in terms of output modality correctness and multi-modal output coherence. To address this, we introduce the Multi-modal Dialogue Benchmark (DialogBen), a comprehensive bilingual benchmark designed to assess the ability of MLLMs to generate accurate and coherent multi-modal content that supports image editing. It includes two evaluation metrics: one measuring the model's ability to switch modalities and one measuring the coherence of the output images. Extensive experiments on DialogBen and a user study demonstrate the effectiveness of DialogGen compared with other state-of-the-art models.
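To make the pipeline concrete, here is a rough sketch of the MIDS loop the abstract describes: the MLLM first decides whether a turn should yield text or an image, and for image turns it rewrites the conversation into a single self-contained drawing prompt for a separate T2I model. Every class and method name below is an illustrative placeholder, not the actual DialogGen API.

```python
# Hypothetical sketch of a MIDS turn as described in the abstract.
# `mllm` and `t2i` stand in for an off-the-shelf multi-modal LLM and a
# text-to-image model; their methods here are placeholders, not real APIs.

from dataclasses import dataclass, field

@dataclass
class DialogueState:
    history: list = field(default_factory=list)  # alternating (role, content) turns

def respond(mllm, t2i, state: DialogueState, user_turn: str):
    state.history.append(("user", user_turn))

    # Step 1: output-modality decision ("text" or "image") for this turn.
    modality = mllm.classify_modality(state.history)

    if modality == "image":
        # Step 2 (drawing prompt alignment): condense the multi-turn context
        # into one self-contained prompt the T2I model can consume directly.
        prompt = mllm.rewrite_drawing_prompt(state.history)
        image = t2i.generate(prompt)
        state.history.append(("assistant", {"image": image, "prompt": prompt}))
        return image

    # Otherwise answer in plain text.
    reply = mllm.chat(state.history)
    state.history.append(("assistant", reply))
    return reply
```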

Paper: https://arxiv.org/abs/2403.08857

Code: https://github.com/tencent/HunyuanDiT

Demo: https://huggingface.co/spaces/multimodalart/HunyuanDiT

Project Page: https://dit.hunyuan.tencent.com/

Model Weights: https://huggingface.co/Tencent-Hunyuan/HunyuanDiT

  • An NVIDIA GPU with CUDA support is required.
  • It has only been tested on Linux with V100 and A100 GPUs.
  • A minimum of 11 GB of VRAM is required; 32 GB is recommended (see the sketch below for a memory-conscious setup).
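For anyone wanting to try the weights within those VRAM limits, here is a minimal inference sketch. It assumes the diffusers HunyuanDiTPipeline integration and the Tencent-Hunyuan/HunyuanDiT-Diffusers checkpoint name, neither of which is stated in this post, so treat the repository README as the authoritative reference.

```python
# Minimal, hedged inference sketch (assumes diffusers >= 0.29 with HunyuanDiTPipeline
# and the "Tencent-Hunyuan/HunyuanDiT-Diffusers" checkpoint on the Hugging Face Hub).
import torch
from diffusers import HunyuanDiTPipeline

pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
)

# Offload submodules to CPU between denoising steps to stay near the ~11 GB VRAM
# floor; with 32 GB of VRAM you can call pipe.to("cuda") instead.
pipe.enable_model_cpu_offload()

# Bilingual (Chinese/English) prompts are supported by the model.
image = pipe(prompt="一个宇航员在骑马").images[0]
image.save("astronaut.png")
```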
