Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab (the core optimization loop these projects share is sketched at the end of this list).
PyTorch implementation of VQGAN ("Taming Transformers for High-Resolution Image Synthesis", https://arxiv.org/pdf/2012.09841.pdf)
Source code for "Taming Visually Guided Sound Generation" (Oral at BMVC 2021)
VQGAN+CLIP Colab Notebook with user-friendly interface.
JAX implementation of VQGAN
[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
Local image generation using VQGAN-CLIP or CLIP-guided diffusion
Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need to optimize the VQGAN latent space for each input prompt
An unofficial implementation of both ViT-VQGAN and RQ-VAE in PyTorch
Making an AI-generated music video from any song with Wav2CLIP and VQGAN-CLIP
Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, documentation, and smooth video creation.
Art generation with VQGAN+CLIP using Docker containers. A simplified, updated, and expanded version of Kevin Costa's work. This project tries to make generating art as easy as possible for any...
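For context, nearly all of these projects build on the same CLIP-guided latent optimization loop: decode a VQGAN latent to an image, score it against the text prompt with CLIP, and update the latent by gradient descent. Below is a minimal, untested sketch of that loop, assuming the openai/clip package; load_vqgan is a hypothetical stand-in for repo-specific checkpoint loading and is assumed to expose a decode(latents) method, and real implementations additionally quantize against the VQGAN codebook, apply CLIP's normalization, and use crop augmentations.

import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"

# CLIP maps text and images into a shared embedding space; the loop steers the
# VQGAN decoder output toward the prompt by maximizing similarity in that space.
clip_model, _ = clip.load("ViT-B/32", device=device, jit=False)
clip_model = clip_model.float()  # avoid fp16/fp32 mixing while optimizing on GPU

with torch.no_grad():
    tokens = clip.tokenize(["a watercolor painting of a lighthouse"]).to(device)
    text_feat = clip_model.encode_text(tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Hypothetical helper: returns a VQGAN with a decode(latents) -> image method.
vqgan = load_vqgan().to(device).eval()

# Latent grid to optimize; its shape depends on the VQGAN checkpoint. Real
# implementations quantize z against the codebook with a straight-through trick.
z = torch.randn(1, 256, 16, 16, device=device, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)

for step in range(300):
    image = vqgan.decode(z)                      # decoder output, roughly in [-1, 1]
    image = (image.clamp(-1, 1) + 1) / 2         # map to [0, 1]
    image = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
    # (Real repos also apply CLIP's channel normalization and random crops here.)
    img_feat = clip_model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

    loss = -(img_feat * text_feat).sum()         # negative cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()

The feed-forward VQGAN-CLIP entry above replaces this per-prompt loop with a single trained network, which is why it can skip the optimization entirely at inference time.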