[SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild
ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary".
PyTorch implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing"
Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
Code for "Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose" (arXiv 2020) and "Predicting Personalized Head Movement From Short Video and Speech Signal" (TMM 2022)
AI-based talking head video generator
V-Express generates a talking head video conditioned on a reference image, an audio clip, and a sequence of V-Kps images.
AI-generated talking head video of fake people responding to your input question text.
The official code of our ICCV 2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation
Unofficial implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (CVPR 2021 Oral)
PantoMatrix: Co-Speech Talking Head and Gestures Generation
Demo for the "Talking Head Anime from a Single Image."
Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021)
Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models
Demo programs for the Talking Head Anime from a Single Image 2: More Expressive project.
My implementation of Few-Shot Adversarial Learning of Realistic Neural Talking Head Models (Egor Zakharov et al.).
Demo Programs for the "Talking Head(?) Anime from a Single Image 3: Now the Body Too" Project
Our implementation of "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" (Egor Zakharov et al.)
This repository contains a PyTorch implementation of "AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis".
[CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis"
An unofficial re-implementation of the paper "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models"