Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)
Spring HATEOAS - Library to support implementing representations for hyper-text driven REST web services.
[ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing
A strong, neutral, principles-driven, open source typeface for text or display
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
ICASSP 2022: "Text2Video: Text-Driven Talking-Head Video Synthesis with Phonetic Dictionary"
Code for Text2Human (SIGGRAPH 2022). Paper: Text2Human: Text-Driven Controllable Human Image Generation
Scroll-driven story map, with point markers and narrative text in GeoJSON, using Leaflet and jQuery
[SIGGRAPH Asia 2022] Text2Light: Zero-Shot Text-Driven HDR Panorama Generation
[ICCV 2023] Text2Tex: Text-driven Texture Synthesis via Diffusion Models
CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields
[CVPR 2024 Highlight] Code for "HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting"
Official implementation for "Blended Diffusion for Text-driven Editing of Natural Images" [CVPR 2022]
[SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
Official Pytorch Implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral)
CLIP-Driven Fine-grained Text-Image Person Re-identification
Stable Diffusion is a text-to-image diffusion model
Official Pytorch Implementation for "Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation" (CVPR 2023)
Official Pytorch Implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer""