diff --git "a/CdE2T4oBgHgl3EQfSAcB/content/tmp_files/load_file.txt" "b/CdE2T4oBgHgl3EQfSAcB/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/CdE2T4oBgHgl3EQfSAcB/content/tmp_files/load_file.txt" @@ -0,0 +1,532 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf,len=531 +page_content='DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis Shuai Shen1 Wenliang Zhao1 Zibin Meng1 Wanhua Li1 Zheng Zhu2 Jie Zhou1 Jiwen Lu1 1Tsinghua University 2PhiGent Robotics … Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' We present a crafted conditional Diffusion model for generalized Talking head synthesis (DiffTalk).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' Given a driven audio, the DiffTalk is capable of synthesizing high-fidelity and synchronized talking videos for multiple identities without further fine-tuning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' Abstract Talking head synthesis is a promising approach for the video production industry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' Recently, a lot of effort has been devoted in this research area to improve the gener- ation quality or enhance the model generalization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' How- ever, there are few works able to address both issues simul- taneously, which is essential for practical applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' To this end, in this paper, we turn attention to the emerging powerful Latent Diffusion Models, and model the Talking head generation as an audio-driven temporally coherent denoising process (DiffTalk).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' More specifically, instead of employing audio signals as the single driving factor, we investigate the control mechanism of the talking face, and incorporate reference face images and landmarks as conditions for personality-aware generalized synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' In this way, the proposed DiffTalk is capable of producing high-quality talking head videos in synchronization with the source audio, and more importantly, it can be naturally gen- eralized across different identities without any further fine- tuning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' Additionally, our DiffTalk can be gracefully tai- lored for higher-resolution synthesis with negligible extra computational cost.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE2T4oBgHgl3EQfSAcB/content/2301.03786v1.pdf'} +page_content=' Extensive experiments show that the proposed DiffTalk efficiently synthesizes high-fidelity audio- driven talking head videos for generalized novel identi- ties.' 
For more video results, please refer to this demonstration: https://cloud.tsinghua.edu.cn/f/e13f5aad2f4c4f898ae7/.

1. Introduction

Talking head synthesis is a challenging and promising research topic, which aims to synthesize a talking video with given audio. This technique is widely applied in various practical scenarios including animation, virtual avatars, online education, and video conferencing [4,44,47,50,52]. Recently, a lot of effort has been devoted to this research area to improve the generation quality or enhance the model generalization. Among the existing mainstream talking head generation approaches, 2D-based methods usually depend on generative adversarial networks (GANs) [6,10,16,22,28] for audio-to-lip mapping, and most of them perform competently on model generalization. However, since GANs need to simultaneously optimize a generator and a discriminator, the training process lacks stability and is prone to mode collapse [11]. Due to this restriction, the generated talking videos are of limited image quality and difficult to scale to higher resolutions. By contrast, 3D-based methods [2,17,42,46,53] perform better in synthesizing higher-quality talking videos. However, they rely heavily on identity-specific training, and thus cannot generalize across different persons.
Such identity-specific training also brings heavy resource consumption and is not friendly to practical applications. Most recently, some 3D-based works [36] have taken a step towards improving the generalization of the model; however, further fine-tuning on specific identities is still inevitable.

Generation quality and model generalization are two essential factors for better deployment of the talking head synthesis technique to real-world applications. However, few existing works are able to address both issues well. In this paper, we propose a crafted conditional Diffusion model for generalized Talking head synthesis (DiffTalk), which aims to tackle these two challenges simultaneously. Specifically, to avoid the unstable training of GANs, we turn attention to the recently developed Latent Diffusion Models [30], and model talking head synthesis as an audio-driven temporally coherent denoising process. On this basis, instead of utilizing audio signals as the single driving factor to learn the audio-to-lip translation, we further incorporate reference face images and landmarks as supplementary conditions to guide the face identity and head pose for personality-aware video synthesis. Under these designs, the talking head generation process is more controllable, which enables the learned model to naturally generalize across different identities without further fine-tuning. As shown in Figure 1, with a sequence of driven audio, our DiffTalk is capable of producing natural talking videos of different identities based on the corresponding reference videos. Moreover, benefiting from the latent space learning mode, our DiffTalk can be gracefully tailored for higher-resolution synthesis with negligible extra computational cost, which is meaningful for improving the generation quality.
Extensive experiments show that our DiffTalk can synthesize high-fidelity talking videos for novel identities without any further fine-tuning. Figure 1 shows the generated talking sequences with one driven audio across three different identities. Comprehensive method comparisons show the superiority of the proposed DiffTalk, which provides a strong baseline for high-performance talking head synthesis. To summarize, we make the following contributions:

- We propose a crafted conditional diffusion model for high-quality and generalized talking head synthesis. By introducing smooth audio signals as a condition, we model the generation as an audio-driven temporally coherent denoising process.
- For personality-aware generalized synthesis, we further incorporate dual reference images as conditions. In this way, the trained model can be generalized across different identities without further fine-tuning.
- The proposed DiffTalk can generate high-fidelity and vivid talking videos for generalized identities. In experiments, our DiffTalk significantly outperforms 2D-based methods in the generated image quality, while surpassing 3D-based works in model generalization ability.

2. Related Work

Audio-driven Talking Head Synthesis. Talking head synthesis aims to generate talking videos with lip movements synchronized with the driving audio [14,40]. In terms of the modeling approach, we roughly divide the existing methods into 2D-based and 3D-based ones. In the 2D-based methods, GANs [6,10,16,28] are usually employed as the core technology for learning the audio-to-lip translation.
Zhou et al. [52] introduce a speaker-aware audio encoder for personalized head motion modeling. Prajwal et al. [28] boost the lip-visual synchronization with a well-trained Lip-Sync expert [8]. However, since the training process of GANs lacks stability and is prone to mode collapse [11], the generated talking videos are always of limited image quality and difficult to scale to higher resolutions. Recently, a series of 3D-based methods [4,20,39–41] have been developed. [39–41] utilize 3D Morphable Models [2] for parametric control of the talking face. More recently, the emerging Neural Radiance Fields [26] provide a new solution for 3D-aware talking head synthesis [3,17,24,36]. However, most of these 3D-based works highly rely on identity-specific training, and thus cannot generalize across different identities. Shen et al. [36] have tried to improve the generalization of the model; however, further fine-tuning on specific identities is still inevitable. In this work, we propose a brand-new diffusion model-based framework for high-fidelity and generalized talking head synthesis.

Latent Diffusion Models. Diffusion Probabilistic Models (DM) [37] have shown strong ability in various image generation tasks [11,19,29]. However, due to pixel space-based training [30,32], very high computational costs are inevitable. More recently, Rombach et al.
[30] propose Latent Diffusion Models (LDMs), and transfer the training and inference processes of DM to a compressed lower-dimensional latent space for more efficient computing [13,49]. With the democratization of this technology, it has been successfully employed in a series of works, including text-to-image translation [21,31,33], super resolution [7,12,27], image inpainting [23,25], motion generation [35,48], and 3D-aware prediction [1,34,43].

Figure 2. Overview of the proposed DiffTalk for generalized talking head video synthesis. Apart from the audio signal condition to drive the lip motions, we further incorporate reference images and facial landmarks as extra driving factors for personalized facial modeling. In this way, the talking head generation process is more controllable, which enables the learned model to generalize across different identities without further fine-tuning. Furthermore, benefiting from the latent space learning mode, we can gracefully improve our DiffTalk for higher-resolution synthesis with slight extra computational cost.

In this work, drawing on these successful practices, we model talking head synthesis as an audio-driven temporally coherent denoising process and achieve superior generation results.

3. Methodology

3.1. Overview

To tackle the challenges of generation quality and model generalization for better real-world deployment, we model talking head synthesis as an audio-driven temporally coherent denoising process, and term the proposed method DiffTalk.
An overview of the proposed DiffTalk is shown in Figure 2. By introducing smooth audio features as a condition, we improve the diffusion model for temporally coherent facial motion modeling. For further personalized facial modeling, we incorporate reference face images and facial landmarks as extra driving factors. In this way, the talking head generation process is more controllable, which enables the learned model to generalize across different identities without any further fine-tuning. Moreover, benefiting from the latent space learning mode, we can gracefully improve our DiffTalk for higher-resolution synthesis with negligible extra computational cost, which contributes to improving the generation quality. In the following, we detail the proposed conditional Diffusion Model for high-fidelity and generalized talking head generation in Section 3.2. In Section 3.3, the progressive inference stage is introduced for better inter-frame consistency.

3.2. Conditional Diffusion Model for Talking Head

The emergence of Latent Diffusion Models (LDMs) [19,30] provides a straightforward and effective way for high-fidelity image synthesis. To inherit its excellent properties, we adopt this advanced technology as the foundation of our method and explore its potential in modeling the dynamic talking head. With a pair of well-trained image encoder E_I and decoder D_I, which are frozen during training [13], the input face image x ∈ R^{H×W×3} can be encoded into a latent space z_0 = E_I(x) ∈ R^{h×w×3}, where H/h = W/w = f, H and W are the height and width of the original image, and f is the downsampling factor.
In this way, the learning is transferred to a lower-dimensional latent space, which is more efficient and requires fewer training resources. On this basis, the standard LDMs are modeled as a time-conditional UNet-based [32] denoising network M, which learns the reverse process of a Markov chain [15] of length T. The corresponding objective can be formulated as:

\mathcal{L}_{LDM} := \mathbb{E}_{z,\epsilon\sim\mathcal{N}(0,1),t}\left[\|\epsilon - M(z_t, t)\|_2^2\right],  (1)

where t ∈ [1, ..., T] and z_t is obtained from z_0 through the forward diffusion process. z̃_{t−1} = z_t − M(z_t, t) is the denoising result of z_t at time step t. The final denoised result z̃_0 is then upsampled to the pixel space with the pre-trained image decoder, x̃ = D_I(z̃_0), where x̃ ∈ R^{H×W×3} is the reconstructed face image.

Given a source identity and driven audio, our goal is to train a model that generates a natural target talking video in synchronization with the audio condition while maintaining the original identity information. Furthermore, the trained model also needs to work for novel identities during inference. To this end, the audio signal is introduced as a basic condition to guide the direction of the denoising process for modeling the audio-to-lip translation.
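To make the latent-space objective in Eq. (1) concrete, the following is a minimal PyTorch-style sketch of one training step; the noise schedule, tensor shapes, and the `encoder`/`denoiser` interfaces are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def ldm_training_step(encoder, denoiser, x, T=200):
    """One illustrative training step for the latent denoising objective (Eq. 1).

    encoder:  frozen image encoder E_I mapping x (B, 3, H, W) -> z0 (B, 3, h, w)
    denoiser: time-conditional UNet M(z_t, t) predicting the added noise
    """
    with torch.no_grad():                      # E_I stays frozen during training
        z0 = encoder(x)

    B = z0.shape[0]
    t = torch.randint(1, T + 1, (B,), device=z0.device)          # t ~ U[1, T]

    # A simple linear beta schedule; the actual schedule is an assumption here.
    betas = torch.linspace(1e-4, 2e-2, T, device=z0.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t - 1].view(B, 1, 1, 1)

    eps = torch.randn_like(z0)                                     # epsilon ~ N(0, I)
    z_t = alpha_bar.sqrt() * z0 + (1.0 - alpha_bar).sqrt() * eps   # forward diffusion

    loss = F.mse_loss(denoiser(z_t, t), eps)   # || eps - M(z_t, t) ||_2^2
    return loss
```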
Figure 3. Visualization of the smooth audio feature extractor. For better temporal coherence, two-stage smoothing operations are involved in this module.

Smooth Audio Feature Extraction. To better incorporate temporal information, we involve two-stage smoothing operations in the audio encoder E_A, as shown in Figure 3. First, following the practice in VOCA [9], we reorganize the raw audio signal into overlapping windows of 16 time intervals (corresponding to audio clips of 20 ms), where each window is centered on the corresponding video frame. A pre-trained RNN-based DeepSpeech [18] module is then leveraged to extract the per-frame audio feature map F. For better inter-frame consistency, we further introduce a learnable temporal filtering module [41]. It receives a sequence of adjacent audio features [F_{i−w}, ..., F_i, ..., F_{i+w}] with w = 8 as input, and computes the final smoothed audio feature for the i-th frame as a ∈ R^{D_A} in a self-attention-based learning manner, where D_A denotes the audio feature dimension. By encoding the audio information in this way, we bridge the modality gap between the audio signals and the visual information. Introducing such smooth audio features as a condition, we extend the diffusion model for temporal coherence-aware modeling of the face dynamics when talking. The objective is then formulated as:

\mathcal{L}_A := \mathbb{E}_{z,\epsilon\sim\mathcal{N}(0,1),a,t}\left[\|\epsilon - M(z_t, t, a)\|_2^2\right].  (2)
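As a rough illustration of the learnable temporal filtering, the sketch below applies self-attention over a window of 2w+1 per-frame audio features and returns the smoothed feature of the centre frame; the single attention layer, the output projection, and the flattened per-frame feature dimension are assumptions for illustration, not the exact filter of [41].

```python
import torch
import torch.nn as nn

class TemporalAudioFilter(nn.Module):
    """Illustrative learnable temporal filter over adjacent audio features.

    Input:  (B, 2w+1, D_A) per-frame features [F_{i-w}, ..., F_{i+w}]
    Output: (B, D_A) smoothed audio feature a for frame i
    """
    def __init__(self, d_audio=64, w=8, n_heads=1):
        super().__init__()
        self.w = w
        self.attn = nn.MultiheadAttention(d_audio, n_heads, batch_first=True)
        self.proj = nn.Linear(d_audio, d_audio)

    def forward(self, feats):                          # feats: (B, 2w+1, D_A)
        attended, _ = self.attn(feats, feats, feats)   # self-attention over the window
        center = attended[:, self.w]                   # keep the centre frame i
        return self.proj(center)                       # a in R^{D_A}

# Usage sketch: 17 adjacent per-frame features of dimension 64 (w = 8).
filt = TemporalAudioFilter(d_audio=64, w=8)
a = filt(torch.randn(2, 17, 64))                       # -> (2, 64)
```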
Identity-Preserving Model Generalization. In addition to learning the audio-to-lip translation, another essential task is to realize model generalization while preserving complete identity information from the source image. Generalized identity information includes face appearance, head pose, and image background. To this end, a reference mechanism is designed to empower our model to generalize to new individuals unseen during training, as shown in Figure 2. Specifically, a random face image x_r of the source identity is chosen as a reference condition, which contains appearance and background information. To prevent training shortcuts, we limit the selection of x_r to frames more than 60 frames away from the target image. However, since the ground-truth face image has a completely different pose from x_r, the model would be expected to transfer the pose of x_r to the target face without any prior information, which is an ill-posed problem with no unique solution. For this reason, we further incorporate the masked ground-truth image x_m as another reference condition to provide the target head pose guidance. The mouth region of x_m is completely masked to ensure that the ground-truth lip movements are not visible to the network. In this way, the reference x_r focuses on affording mouth appearance information, which additionally reduces the training difficulty. Before serving as conditions, x_r and x_m are also encoded into the latent space through the trained image encoder, giving z_r = E_I(x_r) ∈ R^{h×w×3} and z_m = E_I(x_m) ∈ R^{h×w×3}. On this basis, an auxiliary facial landmark condition is also included for better control of the face outline. Similarly, landmarks in the mouth area are masked to avoid shortcuts. The landmark feature l ∈ R^{D_L} is obtained with an MLP-based encoder E_L, where D_L is the landmark feature dimension. In this way, combining these conditions with the audio feature a, we realize precise control over all key elements of a dynamic talking face.
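A minimal sketch of how the reference conditions for one training sample might be assembled is given below, assuming a 68-point landmark convention and a simple rectangular mouth mask; the mouth indices, the mask shape, and the helper names are illustrative choices rather than the authors' preprocessing code.

```python
import random
import numpy as np

MOUTH_IDX = slice(48, 68)   # mouth points in the 68-point landmark convention (assumption)

def build_conditions(frames, landmarks, target_idx, min_gap=60):
    """Assemble reference conditions for one target frame (illustrative only).

    frames:    list of HxWx3 uint8 face images of one identity
    landmarks: list of (68, 2) landmark arrays aligned with `frames`
    Returns the target image, its mouth-masked copy x_m, a distant random
    reference x_r, and landmarks with the mouth region zeroed out.
    """
    x = frames[target_idx].copy()
    lm = landmarks[target_idx]

    # Masked ground truth x_m: hide the mouth so the true lip motion stays unseen.
    (x0, y0) = lm[MOUTH_IDX].min(0).astype(int)
    (x1, y1) = lm[MOUTH_IDX].max(0).astype(int)
    x_m = x.copy()
    x_m[y0:y1 + 1, x0:x1 + 1] = 0

    # Random reference x_r: chosen more than `min_gap` frames away from the target.
    candidates = [i for i in range(len(frames)) if abs(i - target_idx) > min_gap]
    x_r = frames[random.choice(candidates)]

    # Landmark condition: mouth points are masked to avoid shortcuts.
    lm_masked = lm.copy()
    lm_masked[MOUTH_IDX] = 0.0
    return x, x_m, x_r, lm_masked
```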
With C = {a, z_r, z_m, l} denoting the condition set, the talking head synthesis is finally modeled as a conditional denoising process optimized with the following objective:

\mathcal{L} := \mathbb{E}_{z,\epsilon\sim\mathcal{N}(0,1),C,t}\left[\|\epsilon - M(z_t, t, C)\|_2^2\right],  (3)

where the network parameters of M, E_A and E_L are jointly optimized via this objective.

Conditioning Mechanisms. Based on the modeling of the conditional denoising process in Eq. (3), we pass the conditions C to the network in the manner shown in Figure 2. Specifically, following [30], we implement the UNet-based backbone M with a cross-attention mechanism for better multimodal learning. The spatially aligned references z_r and z_m are concatenated channel-wise with the noisy map z_T to produce a joint visual condition C_v = [z_T; z_m; z_r] ∈ R^{h×w×9}. C_v is fed to the first layer of the network to directly guide the output face in an image-to-image translation fashion. Additionally, the driven audio feature a and the landmark representation l are concatenated into a latent condition C_l = [a; l] ∈ R^{D_A+D_L}, which serves as the key and value for the intermediate cross-attention layers of M. In this way, all condition information C = {C_v, C_l} is properly integrated into the denoising network M to guide the talking head generation process.
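The conditioning mechanism can be pictured with the toy module below: the visual condition C_v enters as a 9-channel input to the first convolution, while the latent condition C_l = [a; l] provides the key and value of a cross-attention layer. The timestep embedding and the real UNet structure are omitted for brevity; every layer size here is an assumption, not the actual architecture of M.

```python
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    """Toy stand-in for the conditional UNet M(z_t, t, C) (illustrative only)."""
    def __init__(self, d_audio=64, d_lm=64, d_model=128):
        super().__init__()
        self.first = nn.Conv2d(9, d_model, 3, padding=1)        # C_v = [z_t; z_m; z_r]: 3+3+3 channels
        self.to_kv = nn.Linear(d_audio + d_lm, d_model)          # latent condition C_l = [a; l]
        self.attn = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.out = nn.Conv2d(d_model, 3, 3, padding=1)           # predict noise in latent space

    def forward(self, z_t, z_m, z_r, a, l):                      # timestep conditioning omitted here
        c_v = torch.cat([z_t, z_m, z_r], dim=1)                  # (B, 9, h, w)
        h = self.first(c_v)
        B, C, H, W = h.shape
        q = h.flatten(2).transpose(1, 2)                         # spatial tokens as queries (B, h*w, C)
        kv = self.to_kv(torch.cat([a, l], dim=-1)).unsqueeze(1)  # C_l as key/value (B, 1, C)
        h2, _ = self.attn(q, kv, kv)                             # cross-attention on C_l
        h = (q + h2).transpose(1, 2).reshape(B, C, H, W)
        return self.out(h)

# Usage sketch on 64x64x3 latents with D_A = D_L = 64.
m = ConditionedDenoiser()
eps_hat = m(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64),
            torch.randn(1, 3, 64, 64), torch.randn(1, 64), torch.randn(1, 64))
```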
Figure 4. Illustration of the designed progressive inference strategy. For the first frame, the setting of the visual condition C_v remains the same as in training, where x_{r,1} is a random face image of the target identity. Subsequently, the synthesized image x̃_i is employed as the reference condition x_{r,i+1} for the next frame to enhance the temporal coherence of the generated video.

Higher-Resolution Talking Head Synthesis. Our proposed DiffTalk can also be gracefully extended for higher-resolution talking head synthesis with negligible extra computational cost and faithful reconstruction effects. Specifically, considering the trade-off between the perceptual loss and the compression rate, for training images of size 256 × 256 × 3, we set the downsampling factor to f = 4 and obtain a latent space of 64 × 64 × 3. Furthermore, for higher-resolution generation of 512 × 512 × 3, we only need to adjust the paired image encoder E_I and decoder D_I with a bigger downsampling factor f = 8. The trained encoder is then frozen and employed to transfer the training process to a 64 × 64 × 3 latent space as well. This helps to relieve the pressure of insufficient resources, and therefore our model can be gracefully extended for higher-resolution talking head video synthesis.
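The higher-resolution extension amounts to scaling the autoencoder's downsampling factor while keeping the latent resolution fixed; a trivial check of this arithmetic:

```python
def latent_shape(image_size, f, channels=3):
    """Latent resolution for a square image of size `image_size` and downsampling factor f."""
    assert image_size % f == 0
    return (image_size // f, image_size // f, channels)

print(latent_shape(256, 4))   # (64, 64, 3) -> used for 256x256 training
print(latent_shape(512, 8))   # (64, 64, 3) -> same latent size for 512x512 synthesis
```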
3.3. Progressive Inference

We perform inference with Denoising Diffusion Implicit Model (DDIM) [38] based iterative denoising steps. DDIM is a variant of the standard DM that accelerates sampling for more efficient synthesis. To further boost the coherence of the generated talking videos, we develop a progressive reference strategy in the inference process, as shown in Figure 4. Specifically, when rendering a talking video sequence with the trained model, for the first frame, the setting of the visual condition C_v remains the same as in training, where x_{r,1} is a random face image of the target identity. Subsequently, this synthesized face image is exploited as the x_r for the next frame. In this way, image details between adjacent frames remain consistent, resulting in smoother transitions between frames. It is worth noting that this strategy is not used during training: since the difference between adjacent frames is small, we need to eliminate such references during training to avoid learning shortcuts.
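A schematic of the progressive reference strategy at inference time is given below, assuming a DDIM-style sampler `ddim_sample` and frozen encoder/decoder callables are available; all function names and the condition dictionary are placeholders, not the released interface.

```python
import torch

@torch.no_grad()
def synthesize_sequence(ddim_sample, encoder, decoder, denoiser,
                        first_reference, masked_frames, landmarks, audio_feats):
    """Progressive inference sketch: each synthesized frame becomes the next reference."""
    frames = []
    x_r = first_reference                         # frame 1: a random face of the target identity
    for x_m, l, a in zip(masked_frames, landmarks, audio_feats):
        z_r, z_m = encoder(x_r), encoder(x_m)     # encode both references into latent space
        z_T = torch.randn_like(z_r)               # start denoising from Gaussian noise
        z_0 = ddim_sample(denoiser, z_T, cond=dict(z_r=z_r, z_m=z_m, a=a, l=l))
        x_hat = decoder(z_0)                      # decode the denoised latent to a face image
        frames.append(x_hat)
        x_r = x_hat                               # progressive reference for the next frame
    return frames
```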
Figure 5. Ablation study on the audio smoothing operation. We show the differences between adjacent frames as heatmaps for better visualization. The results without audio filtering present obvious high heat values in the mouth region, which indicates jitter in this area. By contrast, with smooth audio as the condition, the generated video frames show smoother transitions.

4. Experiments

4.1. Experimental Settings

Dataset. To train the audio-driven diffusion model, the audio-visual dataset HDTF [51] is used. It contains 16 hours of talking videos in 720P or 1080P from more than 300 identities. We randomly select 100 videos with a total length of about 5 hours for training, while the remaining data serve as the test set. Apart from this public dataset, we also use some other videos for cross-dataset evaluation.

Metric. We evaluate our proposed method through visual results coupled with quantitative indicators. PSNR (↑), SSIM (↑) [45] and LPIPS (↓) [49] are three metrics for assessing image quality. LPIPS is a learning-based perceptual similarity measure that is more in line with human perception, so we recommend this metric as a more objective indicator. The SyncNet score (Offset↓ / Confidence↑) [8] checks the audio-visual synchronization quality, which is important for the audio-driven talking head generation task. (‘↓’ indicates that lower is better, while ‘↑’ means that higher is better.)

Implementation Details. We resize the input images to 256 × 256 for experiments. The downsampling factor f is set to 4, so the latent space is 64 × 64 × 3. For training the model for higher-resolution synthesis, the input is resized to 512 × 512 with f = 8 to keep the same size of the latent space. The length of the denoising steps T is set to 200 for both the training and inference processes. The feature dimensions are D_A = D_L = 64. Our model takes about 15 hours to train on 8 NVIDIA 3090Ti GPUs.
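For reference, the image-quality metrics listed in the Metric paragraph can be computed with off-the-shelf packages; the sketch below uses scikit-image and the `lpips` package and is a generic evaluation routine, not the authors' evaluation code (the SyncNet offset/confidence score additionally requires the separate SyncNet pipeline of [8]).

```python
import torch
import lpips                                   # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net='alex')             # learned perceptual similarity

def frame_metrics(gt, pred):
    """PSNR / SSIM / LPIPS for one pair of HxWx3 uint8 frames (illustrative)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    # channel_axis requires a recent scikit-image; older versions use multichannel=True.
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    # LPIPS expects (1, 3, H, W) tensors scaled to [-1, 1].
    to_t = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lp = lpips_fn(to_t(gt), to_t(pred)).item()
    return psnr, ssim, lp
```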
Figure 6. Ablation study on the design of the conditions. The marks above these images refer to the following meanings — ‘A’: Audio; ‘L’: Landmark; ‘R’: Random reference image; ‘M’: Masked ground-truth image. We show the generated results under different condition settings on two test sets, and demonstrate the effectiveness of our final design, i.e. A+L+M+R.

Method   PSNR↑   SSIM↑   LPIPS↓   SyncNet (Offset↓ / Confidence↑)
Test Set A:
  GT      -       -       -        0 / 9.610
  w/o     33.67   0.944   0.024    1 / 5.484
  w       34.17   0.946   0.024    1 / 6.287
Test Set B:
  GT      -       -       -        0 / 9.553
  w/o     32.70   0.924   0.031    1 / 5.197
  w       32.73   0.925   0.031    1 / 5.387

Table 1. Ablation study to investigate the contribution of the audio smoothing operation. ‘w’ indicates the model is trained with the audio features after temporal filtering and vice versa.
4.2. Ablation Study

Effect of the Smooth Audio. In this subsection, we investigate the effect of the audio smoothing operations. Quantitative results in Table 1 show that the model equipped with the audio temporal filtering module outperforms the one without smooth audio, especially in the SyncNet score. We further visualize the differences between adjacent frames as the heatmaps shown in Figure 5. The results without audio filtering present obvious high heat values in the mouth region, which indicates jitter in this area. By contrast, with smooth audio as the condition, the generated video frames show smoother transitions, which are reflected in the soft differences between adjacent frames.
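The adjacent-frame heatmaps of Figure 5 can be reproduced in spirit by averaging the absolute per-pixel difference between consecutive frames; a minimal NumPy/matplotlib sketch under that assumption:

```python
import numpy as np
import matplotlib.pyplot as plt

def adjacent_frame_heatmaps(frames):
    """Mean absolute per-pixel difference between consecutive frames (illustrative).

    frames: (N, H, W, 3) uint8 video frames; returns (N-1, H, W) difference maps.
    Large values around the mouth indicate temporal jitter in that region.
    """
    f = frames.astype(np.float32)
    return np.abs(f[1:] - f[:-1]).mean(axis=-1)

# Usage sketch: visualize the first difference map as a heatmap.
# diffs = adjacent_frame_heatmaps(video)          # video: (N, H, W, 3)
# plt.imshow(diffs[0], cmap='jet'); plt.colorbar(); plt.show()
```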
Design of the Conditions. A major contribution of this work is the ingenious design of the conditions for general and high-fidelity talking head synthesis. In Figure 6, we show the generated results under different condition settings step by step, to demonstrate the superiority of our design.

Method   PSNR↑   SSIM↑   LPIPS↓   SyncNet (Offset↓ / Confidence↑)
Test Set A:
  GT      -       -       -        4 / 7.762
  w/o     34.17   0.946   0.024    1 / 6.287
  w       33.95   0.946   0.023    1 / 6.662
Test Set B:
  GT      -       -       -        3 / 8.947
  w/o     32.73   0.925   0.031    1 / 5.387
  w       33.02   0.925   0.030    1 / 5.999

Table 2. Ablation study on the effect of the progressive inference strategy. ‘w/o’ indicates that a random reference image is employed as the condition, and ‘w’ means that the reference is the generated result of the previous frame.
Table 2. Ablation study on the effect of the progressive inference strategy. 'w/o' indicates that a random reference image is employed as the condition, and 'w' means that the reference is the generated result of the previous frame.

With pure audio as the condition, the model fails to generalize to new identities, and the faces are not aligned with the background in the inpainting-based inference. Adding landmarks as another condition tackles the misalignment problem. A random reference image is further introduced to provide the identity information. However, since the ground-truth face image has a different pose from this random reference, the model is expected to transfer the pose of the reference to the target face. This greatly increases the difficulty of training, makes network convergence hard, and the identity information is not well learned. Using the audio and the masked ground-truth image as driving factors mitigates the identity-inconsistency and misalignment issues; however, the appearance of the mouth cannot be learned, since this information is not visible to the network. For this reason, we employ the random reference face and the masked ground-truth image together for dual driving, where the random reference provides the lip appearance information and the masked ground truth controls the head pose and identity. Facial landmarks are also incorporated as a condition that helps to model the facial contour better. Results in Figure 6 show the effectiveness of such a design in synthesizing realistic and controllable face images.
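To make the dual-driving conditioning above concrete, the following sketch packs the four conditions (masked ground-truth latent, reference latent, smoothed audio feature, landmark feature) into inputs for a latent-diffusion denoiser. The tensor shapes, the channel-wise concatenation, and the use of audio/landmark features as an attention context are illustrative assumptions for exposition, not the released implementation.

import torch

def build_condition(z_masked: torch.Tensor,    # masked ground-truth latent, (B, 3, 64, 64)
                    z_ref: torch.Tensor,       # reference-face latent, (B, 3, 64, 64)
                    audio_feat: torch.Tensor,  # smoothed audio feature, (B, 64)
                    lmk_feat: torch.Tensor):   # encoded landmark feature, (B, 64)
    # Spatial conditions are concatenated along the channel axis so they can be
    # stacked with the noisy latent, while the audio and landmark features form
    # a small context sequence (e.g., for cross-attention).
    spatial_cond = torch.cat([z_masked, z_ref], dim=1)    # (B, 6, 64, 64)
    context = torch.stack([audio_feat, lmk_feat], dim=1)  # (B, 2, 64)
    return spatial_cond, context

# Example with random stand-in tensors for a batch of two frames.
spatial_cond, context = build_condition(
    torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64),
    torch.randn(2, 64), torch.randn(2, 64))
print(spatial_cond.shape, context.shape)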
[Figure 7: qualitative frames for GT, ATVG, MakeItTalk, Wav2Lip, Ours, DFRF, and AD-NeRF (2D-based and 3D-based methods).]
Figure 7. Visual comparison with some representative 2D-based talking head generation methods, ATVGnet [5], MakeItTalk [52] and Wav2Lip [28], and with some recent 3D-based ones, AD-NeRF [17] and DFRF [36]. The results of DFRF are synthesized with its base model without fine-tuning for a fair comparison. AD-NeRF is trained on these two identities respectively to produce its results.

Impact of the Progressive Inference. Temporally correlated inference is realized in this work through the progressive reference strategy. We conduct an ablation study in Table 2 to investigate the impact of this design. 'w/o' indicates that a random reference image x_r is employed, and 'w' means that the generated result of the previous frame is chosen as the reference condition. With such progressive inference, the SyncNet scores are further boosted, since the temporal correlation is better modeled and the talking style becomes more coherent. The LPIPS indicator is also improved. PSNR tends to give higher scores to blurry images [49], so we recommend LPIPS as a more representative metric for visual quality.
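The progressive reference strategy can be summarized with a short sketch: except for the first frame, the reference condition of frame t is the frame generated at t-1. The function synthesize_frame below is a hypothetical placeholder standing in for one full reverse-diffusion pass of the conditional model; it is not an actual API from the paper's code.

def progressive_inference(audio_feats, landmark_seq, masked_frames,
                          initial_reference, synthesize_frame):
    # Generate the video frame by frame; each synthesized frame becomes the
    # reference condition for the next one, keeping the talking style coherent.
    outputs = []
    reference = initial_reference  # a random reference bootstraps the first frame
    for audio, landmarks, masked in zip(audio_feats, landmark_seq, masked_frames):
        frame = synthesize_frame(audio=audio, landmarks=landmarks,
                                 masked=masked, reference=reference)
        outputs.append(frame)
        reference = frame
    return outputs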
4.3. Method Comparison

Comparison with 2D-based Methods. In this section, we compare with some representative 2D-based talking head generation approaches, including ATVGnet [5], MakeItTalk [52] and Wav2Lip [28]. Figure 7 visualizes the generated frames of these methods. It can be seen that ATVGnet performs generation based on cropped faces with limited image quality. MakeItTalk synthesizes plausible talking frames; however, the background is wrongly warped along with the mouth movements. This phenomenon is more noticeable in the video results and greatly affects the visual experience. The talking faces generated by Wav2Lip show artifacts within the square boundary centered on the mouth, since the synthesized area and the original image are not well blended.

Method            Test Set A                                Test Set B                                General
                  PSNR↑   SSIM↑   LPIPS↓   SyncNet↓↑        PSNR↑   SSIM↑   LPIPS↓   SyncNet↓↑        Method
GT                -       -       -        1/8.979          -       -       -        2/7.924
MakeItTalk [52]   18.77   0.544   0.19     4/3.936          17.70   0.648   0.129    3/3.416          ✓
Wav2Lip [28]      25.50   0.761   0.140    2/8.936          33.38   0.942   0.027    3/9.385          ✓
AD-NeRF [17]      27.89   0.885   0.072    2/5.639          30.14   0.947   0.023    3/4.246          ✗
DFRF [36]         28.60   0.892   0.068    1/5.999          33.57   0.949   0.025    2/4.432          FT Req.
Ours              34.54   0.950   0.024    1/6.381          34.01   0.950   0.020    1/5.639          ✓
Table 3. Comparison with some representative talking head synthesis methods on the two test sets as in Figure 7. The best performance is highlighted in red (1st best) and blue (2nd best). Our DiffTalk obtains the best PSNR, SSIM, and LPIPS values, and comparable SyncNet scores, simultaneously. It is worth noting that DFRF is fine-tuned on the specific identity to obtain these results, while our method can be directly utilized for generation without further fine-tuning. ('FT Req.' means that a fine-tuning operation is required for DFRF.)

By contrast, the proposed DiffTalk generates natural and realistic talking videos with accurate audio-lip synchronization, owing to the crafted conditioning mechanism and the stable training process. For more objective comparisons, we further evaluate the quantitative results in Table 3. Our DiffTalk far surpasses [28] and [52] in all image quality metrics. For the audio-visual synchronization metric SyncNet, the proposed method reaches a high level and is superior to MakeItTalk. Although DiffTalk is slightly inferior to Wav2Lip on the SyncNet score, it is far better than Wav2Lip in terms of image quality. In conclusion, our method outperforms these 2D-based methods under a comprehensive consideration of the qualitative and quantitative results.
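For reference, the per-frame image-quality numbers discussed above can be reproduced with standard formulas and libraries. The sketch below computes PSNR directly from the mean squared error and LPIPS with the third-party lpips package (the AlexNet backbone is an illustrative choice); it is a generic evaluation helper under these assumptions, not the paper's evaluation code, and SSIM and SyncNet are omitted for brevity.

import numpy as np
import torch
import lpips  # pip install lpips; perceptual metric of [49]

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    # Peak signal-to-noise ratio between two uint8 images of the same shape.
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

lpips_fn = lpips.LPIPS(net='alex')  # lower LPIPS means perceptually closer

def lpips_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    # Convert HxWx3 uint8 images to NCHW float tensors in [-1, 1].
    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    with torch.no_grad():
        return float(lpips_fn(to_tensor(img_a), to_tensor(img_b)))

# Example with random stand-in frames for a generated/ground-truth pair.
gen = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
gt = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print(psnr(gen, gt), lpips_distance(gen, gt))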
Comparison with 3D-based Methods. For more comprehensive evaluations, we further compare with some recent high-performance 3D-based works, including AD-NeRF [17] and DFRF [36]. They realize implicit 3D head modeling with the NeRF technology, so we treat them as generalized 3D-based methods. The visualization results are shown in Figure 7. AD-NeRF models the head and torso parts separately, resulting in misalignment in the neck region. More importantly, it is worth noting that AD-NeRF is a non-general method. In contrast, our method is able to handle unseen identities without further fine-tuning, which is more in line with practical application scenarios. DFRF relies heavily on the fine-tuning operation for model generalization, and the talking faces generated with only its base model are far from satisfactory, as shown in Figure 7. The quantitative results in Table 3 also show that our method surpasses [17, 36] on the image quality and audio-visual synchronization indicators.

4.4. Expand to Higher Resolution

In this section, we perform experiments to demonstrate the capacity of our method for generating higher-resolution images. In Figure 8, we show the synthesized frames of two models, (a) and (b). Model (a) is trained on 256 × 256 images with the downsampling factor f = 4, so the latent space is of size 64 × 64 × 3. For model (b), 512 × 512 images with f = 8 are used for training.

[Figure 8: (a) Resolution: 256 × 256, f = 4; (b) Resolution: 512 × 512, f = 8.]
Figure 8. Generated results with higher resolution.

Since both models are trained on a compressed 64 × 64 × 3 latent space, the pressure of insufficient computing resources is relieved. We can therefore comfortably expand our model for higher-resolution generation, as shown in Figure 8, where the synthesis quality of (b) significantly outperforms that of (a).
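The resolution bookkeeping behind this observation is simple; the tiny helper below (an illustrative assumption about how the numbers relate, not code from the paper) shows why both settings share the same 64 × 64 × 3 latent: the encoder downsamples each spatial side by the factor f, so doubling the image resolution while doubling f leaves the denoiser's workspace unchanged.

def latent_shape(height: int, width: int, f: int, channels: int = 3):
    # An H x W image encoded with downsampling factor f yields an
    # (H / f) x (W / f) x channels latent.
    assert height % f == 0 and width % f == 0, "resolution must be divisible by f"
    return (height // f, width // f, channels)

print(latent_shape(256, 256, f=4))  # (64, 64, 3)
print(latent_shape(512, 512, f=8))  # (64, 64, 3)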
5. Conclusion and Discussion

In this paper, we have proposed a generalized and high-fidelity talking head synthesis method based on a crafted conditional diffusion model. Apart from the audio signal condition that drives the lip motions, we further incorporate reference images as driving factors to model the personalized appearance, which enables the learned model to comfortably generalize across different identities without any further fine-tuning. Furthermore, our proposed DiffTalk can be gracefully tailored for higher-resolution synthesis with negligible extra computational cost.

Limitations. The proposed method models talking head generation as an iterative denoising process, which needs more time to synthesize a frame compared with most GAN-based approaches. This is also a common problem of LDM-based works and warrants further research. Nonetheless, we have a large speed advantage over most 3D-based methods. Since talking head technology may raise potential misuse issues, we are committed to combating these malicious behaviors and advocate positive applications. Additionally, researchers who want to use our code will be required to obtain authorization and add watermarks to the generated videos.

References

[1] Miguel Angel Bautista, Pengsheng Guo, Samira Abnar, Walter Talbott, Alexander Toshev, Zhuoyuan Chen, Laurent Dinh, Shuangfei Zhai, Hanlin Goh, Daniel Ulbricht, et al. Gaudi: A neural architect for immersive 3d scene generation. arXiv, 2022.
[2] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, 1999.
[3] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In CVPR, 2021.
[4] Lele Chen, Guofeng Cui, Celong Liu, Zhong Li, Ziyi Kou, Yi Xu, and Chenliang Xu. Talking-head generation with rhythmic head motion. In ECCV, 2020.
[5] Lele Chen, Ross K Maddox, Zhiyao Duan, and Chenliang Xu. Hierarchical cross-modal talking face generation with dynamic pixel-wise loss. In CVPR, 2019.
[6] Michail Christos Doukas, Stefanos Zafeiriou, and Viktoriia Sharmanska. Headgan: Video-and-audio-driven talking head synthesis. arXiv, 2020.
[7] Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. arXiv, 2022.
[8] Joon Son Chung and Andrew Zisserman. Out of time: automated lip sync in the wild. In ACCV, 2016.
[9] Daniel Cudeiro, Timo Bolkart, Cassidy Laidlaw, Anurag Ranjan, and Michael J Black. Capture, learning, and synthesis of 3d speaking styles. In CVPR, 2019.
[10] Dipanjan Das, Sandika Biswas, Sanjana Sinha, and Brojeshwar Bhowmick. Speech-driven facial animation using cascaded gans for learning of motion and texture. In ECCV, 2020.
[11] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NeurIPS, 2021.
[12] Marcelo dos Santos, Rayson Laroca, Rafael O Ribeiro, João Neves, Hugo Proença, and David Menotti. Face super-resolution using stochastic differential equations. arXiv, 2022.
[13] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In CVPR, 2021.
[14] Pablo Garrido, Levi Valgaerts, Hamid Sarmadi, Ingmar Steiner, Kiran Varanasi, Patrick Pérez, and Christian Theobalt. Vdub: Modifying face video of actors for plausible visual alignment to a dubbed audio track. In Computer Graphics Forum, 2015.
[15] Charles J Geyer. Practical markov chain monte carlo. Statistical Science, 1992.
[16] Kuangxiao Gu, Yuqian Zhou, and Thomas Huang. Flnet: Landmark driven fetching and learning network for faithful talking facial animation synthesis. In AAAI, 2020.
[17] Yudong Guo, Keyu Chen, Sen Liang, Yongjin Liu, Hujun Bao, and Juyong Zhang. Ad-nerf: Audio driven neural radiance fields for talking head synthesis. In ICCV, 2021.
[18] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv, 2014.
[19] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 2020.
[20] Xinya Ji, Hang Zhou, Kaisiyuan Wang, Wayne Wu, Chen Change Loy, Xun Cao, and Feng Xu. Audio-driven emotional video portraits. In CVPR, 2021.
[21] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. arXiv, 2022.
[22] Prajwal KR, Rudrabha Mukhopadhyay, Jerin Philip, Abhishek Jha, Vinay Namboodiri, and CV Jawahar. Towards automatic face-to-face translation. In ACMMM, 2019.
[23] Wing-Fung Ku, Wan-Chi Siu, Xi Cheng, and H Anthony Chan. Intelligent painter: Picture composition with resampling diffusion model. arXiv, 2022.
[24] Xian Liu, Yinghao Xu, Qianyi Wu, Hang Zhou, Wayne Wu, and Bolei Zhou. Semantic-aware implicit neural audio-driven video portrait generation. arXiv, 2022.
[25] Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In CVPR, 2022.
[26] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
[27] Kushagra Pandey, Avideep Mukherjee, Piyush Rai, and Abhishek Kumar. Diffusevae: Efficient, controllable and high-fidelity generation from low-dimensional latents. arXiv, 2022.
[28] KR Prajwal, Rudrabha Mukhopadhyay, Vinay P Namboodiri, and CV Jawahar. A lip sync expert is all you need for speech to lip generation in the wild. In ACMMM, 2020.
[29] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In ICML, 2021.
[30] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
[31] Robin Rombach, Andreas Blattmann, and Björn Ommer. Text-guided synthesis of artistic images with retrieval-augmented diffusion models. arXiv, 2022.
[32] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[33] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv, 2022.
[34] Saeed Saadatnejad, Ali Rasekh, Mohammadreza Mofayezi, Yasamin Medghalchi, Sara Rajabzadeh, Taylor Mordan, and Alexandre Alahi. A generic diffusion-based approach for 3d human pose prediction in the wild. arXiv, 2022.
[35] Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, and Yebin Liu. Diffustereo: High quality human reconstruction via diffusion-based stereo using sparse cameras. arXiv, 2022.
[36] Shuai Shen, Wanhua Li, Zheng Zhu, Yueqi Duan, Jie Zhou, and Jiwen Lu. Learning dynamic facial radiance fields for few-shot talking head synthesis. In ECCV, 2022.
[37] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015.
[38] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv, 2020.
[39] Linsen Song, Wayne Wu, Chen Qian, Ran He, and Chen Change Loy. Everybody's talkin': Let me talk as you want. arXiv, 2020.
[40] Supasorn Suwajanakorn, Steven M Seitz, and Ira Kemelmacher-Shlizerman. Synthesizing obama: learning lip sync from audio. TOG, 2017.
[41] Justus Thies, Mohamed Elgharib, Ayush Tewari, Christian Theobalt, and Matthias Nießner. Neural voice puppetry: Audio-driven facial reenactment. In ECCV, 2020.
[42] Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In CVPR, 2016.
[43] Dominik JE Waibel, Ernst Röell, Bastian Rieck, Raja Giryes, and Carsten Marr. A diffusion model predicts 3d shapes from 2d microscopy images. arXiv, 2022.
[44] Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu. One-shot free-view neural talking-head synthesis for video conferencing. In CVPR, 2021.
[45] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004.
[46] Shunyu Yao, RuiZhe Zhong, Yichao Yan, Guangtao Zhai, and Xiaokang Yang. Dfa-nerf: Personalized talking head generation via disentangled face attributes neural rendering. arXiv, 2022.
[47] Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky. Few-shot adversarial learning of realistic neural talking head models. In ICCV, 2019.
[48] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motiondiffuse: Text-driven human motion generation with diffusion model. arXiv, 2022.
[49] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
[50] Xi Zhang, Xiaolin Wu, Xinliang Zhai, Xianye Ben, and Chengjie Tu. Davd-net: Deep audio-aided video decompression of talking heads. In CVPR, 2020.
[51] Zhimeng Zhang, Lincheng Li, Yu Ding, and Changjie Fan. Flow-guided one-shot talking face generation with a high-resolution audio-visual dataset. In CVPR, 2021.
[52] Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, and Dingzeyu Li. MakeItTalk: speaker-aware talking-head animation. TOG, 2020.
[53] Michael Zollhöfer, Justus Thies, Pablo Garrido, Derek Bradley, Thabo Beeler, Patrick Pérez, Marc Stamminger, Matthias Nießner, and Christian Theobalt. State of the art on monocular 3d face reconstruction, tracking, and applications. In Computer Graphics Forum, 2018.