arxiv:2506.05554

EX-4D: EXtreme Viewpoint 4D Video Synthesis via Depth Watertight Mesh

Published on Jun 5, 2025
Authors:
Tao Hu et al.

Abstract

AI-generated summary: EX-4D addresses challenges in generating high-quality, camera-controllable videos from monocular input using a Depth Watertight Mesh representation and a LoRA-based video diffusion adapter, achieving superior physical consistency and extreme-view quality.

Generating high-quality, camera-controllable videos from monocular input is a challenging task, particularly under extreme viewpoints. Existing methods often struggle with geometric inconsistencies and occlusion artifacts at boundaries, leading to degraded visual quality. In this paper, we introduce EX-4D, a novel framework that addresses these challenges through a Depth Watertight Mesh representation. This representation serves as a robust geometric prior by explicitly modeling both visible and occluded regions, ensuring geometric consistency under extreme camera poses. To overcome the lack of paired multi-view datasets, we propose a simulated masking strategy that generates effective training data from monocular videos alone. Additionally, a lightweight LoRA-based video diffusion adapter is employed to synthesize high-quality, physically consistent, and temporally coherent videos. Extensive experiments demonstrate that EX-4D outperforms state-of-the-art methods in physical consistency and extreme-view quality, enabling practical 4D video generation.
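
The abstract names two mechanisms worth unpacking. First, the simulated masking strategy: since no paired multi-view data exists, training examples can be manufactured from monocular frames by warping them into a virtual viewpoint and masking the dis-occluded regions. The sketch below illustrates that general idea only; it is not the authors' implementation, and the forward-warping scheme, function name, and camera parameters are all assumptions.

```python
import numpy as np

def simulate_occlusion_mask(depth, fx, baseline):
    """Hypothetical sketch: forward-warp each pixel of a monocular frame
    into a horizontally shifted virtual view using its depth, and mark
    target pixels that receive no source pixel as dis-occluded."""
    height, width = depth.shape
    # Disparity induced by a virtual stereo baseline (assumed camera model).
    disparity = fx * baseline / np.clip(depth, 1e-3, None)
    xs = np.tile(np.arange(width), (height, 1))
    rows = np.repeat(np.arange(height)[:, None], width, axis=1)
    target_x = np.round(xs + disparity).astype(int)
    valid = (target_x >= 0) & (target_x < width)
    hit = np.zeros((height, width), dtype=bool)
    hit[rows[valid], target_x[valid]] = True
    # Unwritten target pixels are holes the generator must learn to fill.
    return ~hit

# Toy scene: a flat background at depth 5 with a near occluder at depth 2.
depth = np.full((64, 64), 5.0)
depth[20:40, 20:40] = 2.0
mask = simulate_occlusion_mask(depth, fx=60.0, baseline=0.3)
print("dis-occluded pixels:", int(mask.sum()))
```

Second, the LoRA-based adapter: rather than fine-tuning the whole video diffusion backbone, low-rank trainable updates are attached to its frozen linear projections. Below is the standard LoRA construction in PyTorch; the rank, scaling, and choice of wrapped layer are assumptions, not details from the paper.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)      # keep the backbone weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)   # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# Toy usage: adapt one projection; only the low-rank factors train.
adapted = LoRALinear(nn.Linear(320, 320), rank=4)
print(sum(p.numel() for p in adapted.parameters() if p.requires_grad))
```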

Community

Paper author

I am the main author of EX-4D. Please check out the following links:
Homepage: https://tau-yihouxiang.github.io/projects/EX-4D/EX-4D.html
Code: https://github.com/tau-yihouxiang/EX-4D

