Dynamic NeRFs have recently garnered growing attention for 3D talking portrait synthesis. Despite advances in rendering speed and visual quality, challenges persist in further improving efficiency and effectiveness. We present R2-Talker, an efficient and effective framework for realistic, real-time talking head synthesis. Specifically, we introduce a novel approach that encodes facial landmarks as conditional features using multi-resolution hash grids. This approach losslessly encodes landmark structures and decouples input diversity from the conditional feature space by mapping arbitrary landmarks to a unified feature space. We further propose a progressive multilayer conditioning scheme in the NeRF rendering pipeline for effective conditional feature fusion.
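The following is a minimal sketch (not the authors' code) of how 3D facial landmarks can be encoded with a multi-resolution hash grid in the spirit of Instant-NGP: each landmark coordinate is looked up in learnable hash tables at several resolutions and the interpolated per-level features are concatenated into a conditional feature. All names and hyperparameters (`LandmarkHashEncoder`, `n_levels`, table sizes, etc.) are illustrative assumptions.

```python
# Hedged sketch: multi-resolution hash-grid encoding of 3D landmarks (assumed design).
import torch
import torch.nn as nn


class LandmarkHashEncoder(nn.Module):
    """Maps landmark coordinates in [0, 1]^3 to concatenated per-level hash-grid features."""

    # large primes used for spatial hashing, as in Instant-NGP
    PRIMES = (1, 2654435761, 805459861)

    def __init__(self, n_levels=8, n_features=2, log2_hashmap_size=14,
                 base_resolution=4, growth_factor=1.5):
        super().__init__()
        self.n_levels = n_levels
        self.hashmap_size = 2 ** log2_hashmap_size
        self.resolutions = [int(base_resolution * growth_factor ** i) for i in range(n_levels)]
        # one learnable feature table per resolution level
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(self.hashmap_size, n_features))
             for _ in range(n_levels)]
        )

    def _hash(self, grid_coords):
        # grid_coords: (..., 3) integer corner coordinates -> hash-table indices
        h = torch.zeros(grid_coords.shape[:-1], dtype=torch.long, device=grid_coords.device)
        for d, prime in enumerate(self.PRIMES):
            h ^= grid_coords[..., d] * prime
        return h % self.hashmap_size

    def forward(self, landmarks):
        # landmarks: (B, L, 3) normalized to [0, 1]
        feats = []
        for level, res in enumerate(self.resolutions):
            x = landmarks * res
            x0 = torch.floor(x).long()
            w = x - x0.float()                           # trilinear interpolation weights
            level_feat = 0.0
            for corner in range(8):                      # 8 corners of the grid cell
                offset = torch.tensor([(corner >> i) & 1 for i in range(3)],
                                      device=landmarks.device)
                idx = self._hash(x0 + offset)            # (B, L) table indices
                corner_w = torch.prod(torch.where(offset.bool(), w, 1.0 - w), dim=-1)
                level_feat = level_feat + corner_w.unsqueeze(-1) * self.tables[level][idx]
            feats.append(level_feat)
        # concatenated conditional feature: (B, L, n_levels * n_features)
        return torch.cat(feats, dim=-1)
```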
Compared with state-of-the-art works, extensive experiments demonstrate the following advantages of our approach:
1. The lossless input encoding yields more precise conditional features and thus superior visual quality, while decoupling inputs from the conditional space improves generalizability.
2. Fusing conditional features with the MLP output at every layer strengthens the conditional influence, resulting in more accurate lip synthesis and better visual quality (see the sketch after this list).
3. The fusion of conditional features is compactly structured, significantly improving computational efficiency.
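Below is a minimal sketch (assumed, not the authors' exact design) of progressive multilayer conditioning: the landmark conditional feature is fused with the output of every MLP layer rather than only at the input. The concatenation-based fusion, activation, and layer sizes are illustrative assumptions.

```python
# Hedged sketch: progressive multilayer conditioning in a NeRF-style MLP (assumed design).
import torch
import torch.nn as nn


class ProgressivelyConditionedMLP(nn.Module):
    def __init__(self, pos_dim, cond_dim, hidden_dim=64, n_layers=3, out_dim=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = pos_dim
        for _ in range(n_layers):
            # each layer sees the previous activation fused with the conditional feature
            self.layers.append(nn.Linear(in_dim + cond_dim, hidden_dim))
            in_dim = hidden_dim
        self.out = nn.Linear(hidden_dim + cond_dim, out_dim)  # e.g. density + RGB

    def forward(self, pos_feat, cond_feat):
        # pos_feat: (N, pos_dim) encoded sample positions
        # cond_feat: (N, cond_dim) landmark conditional feature, broadcast per sample
        h = pos_feat
        for layer in self.layers:
            h = torch.relu(layer(torch.cat([h, cond_feat], dim=-1)))
        return self.out(torch.cat([h, cond_feat], dim=-1))


# usage sketch: per-sample position features fused with a shared landmark code
# mlp = ProgressivelyConditionedMLP(pos_dim=32, cond_dim=16)
# sigma_rgb = mlp(torch.randn(1024, 32), torch.randn(1024, 16))
```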
Under the self-driven setting, we compare the visual quality of results generated by different methods.
We compare the visual quality of results from different methods under the cross-gender and cross-lingual settings.
Several excellent works were introduced around the same time as ours:
RAD-NeRF: Real-time Neural Radiance Talking Portrait Synthesis via Audio-spatial Decomposition.
ER-NeRF: Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis.
GeneFace++: Generalized and Stable Real-Time Audio-Driven 3D Talking Face Generation.
@article{zhiling2023r2talker,
author = {Zhiling Ye and Liangguo Zhang and Dingheng Zeng and Quan Lu and Ning Jiang},
title = {R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning},
journal = {arXiv preprint arXiv:2312.05572},
year = {2023},
}