A collection of awesome text-to-image generation studies.
Official PyTorch code for the paper: "ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image Generation"
[CVPR 2021] Multi-Modal-CelebA-HQ: A Large-Scale Text-Driven Face Generation and Understanding Dataset
[NeurIPS 2024] RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models
Official PyTorch repo of the CVPR'23 and NeurIPS'23 papers on understanding replication in diffusion models.
Code for the ACM MM'23 paper: LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation
[CVPR '23] Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models
[ECCV 2024] Official repository of the paper: Object-Conditioned Energy-Based Attention Map Alignment in Text-to-Image Diffusion Models
[ECCV'24] T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models
[ECCV 2024 - Oral] Official PyTorch Implementation of "Adversarial Robustification via Text-to-Image Diffusion Models"
Codebase for the paper ImPoster: Text and Frequency Guidance for Subject Driven Action Personalization using Diffusion Models
Text-to-image app built on the Stable Diffusion pipeline, with tkinter as its UI (see the sketch after this list)
Smart and simple Flux for the GPU-poor (a low-VRAM sketch follows below)
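
For the Stable Diffusion + tkinter entry above, here is a minimal sketch of how such an app might be wired together with Hugging Face diffusers. The checkpoint ID, window layout, and helper names are illustrative assumptions, not the linked repository's actual code.

```python
# Minimal sketch (assumes diffusers is installed, a CUDA GPU is available,
# and the runwayml/stable-diffusion-v1-5 checkpoint; not the repo's code).
import tkinter as tk
import torch
from diffusers import StableDiffusionPipeline
from PIL import ImageTk

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

root = tk.Tk()
root.title("Text to Image")
prompt_entry = tk.Entry(root, width=60)
prompt_entry.pack(padx=8, pady=8)
image_label = tk.Label(root)
image_label.pack(padx=8, pady=8)

def generate():
    # Run the pipeline on the typed prompt and show the first image.
    # Note: this blocks the UI thread for the duration of sampling.
    image = pipe(prompt_entry.get()).images[0]
    photo = ImageTk.PhotoImage(image)
    image_label.configure(image=photo)
    image_label.image = photo  # keep a reference so tkinter doesn't drop it

tk.Button(root, text="Generate", command=generate).pack(pady=8)
root.mainloop()
```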
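
And for the Flux entry, a hedged sketch of what running Flux on limited VRAM typically looks like with diffusers' CPU offload. The FLUX.1-schnell checkpoint, prompt, and sampler settings are assumptions for illustration and are not necessarily what the linked repo does.

```python
# Illustrative only: FLUX.1-schnell via diffusers with sequential CPU offload,
# a common way to fit Flux onto a small GPU (assumed setup, not the repo's).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # stream weights to the GPU piece by piece

image = pipe(
    "a watercolor fox in a snowy forest",
    num_inference_steps=4,   # schnell is distilled for few-step sampling
    guidance_scale=0.0,      # schnell does not use classifier-free guidance
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_fox.png")
```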