|
--- |
|
library_name: shape-for-motion |
|
license: other |
|
license_name: shape-for-motion |
|
license_link: https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager/blob/main/LICENSE |
|
language: |
|
- en |
|
- zh |
|
tags: |
|
- video_editing |
|
pipeline_tag: video-to-video |
|
extra_gated_eu_disallowed: true |
|
base_model: |
|
- model-hub/stable-video-diffusion-img2vid |
|
--- |
|
<div align="center"> |
|
<a href="https://shapeformotion.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a>   |
|
<a href="https://arxiv.org/pdf/2506.22432"><img src="https://img.shields.io/static/v1?label=Tech%20Report&message=Arxiv&color=red"></a>   |
|
<a href="https://huggingface.co/LeoLau/Shape-for-Motion"><img src="https://img.shields.io/static/v1?label=Shape-for-Motion&message=HuggingFace&color=yellow"></a> |
|
</div> |
|
|
|
We introduce Shape-for-Motion, a 3D-aware video editing framework that enables precise and consistent video object manipulation: it reconstructs an editable 3D mesh of the target object, and the edited mesh then serves as the control signal for video generation. |
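The three-stage flow described above (reconstruct an editable mesh from the video, edit it, render it into per-frame control signals) can be sketched conceptually. This is a hypothetical illustration only: `reconstruct_mesh`, `edit_mesh`, and `render_control_signals` are placeholder names invented for this sketch, not functions from this repository, and the geometry is a trivial stand-in.

```python
# Hypothetical sketch of the Shape-for-Motion editing flow.
# All names below are placeholders for illustration, not the released API.
from dataclasses import dataclass


@dataclass
class Mesh:
    vertices: list  # placeholder geometry: list of (x, y, z) tuples


def reconstruct_mesh(frames):
    """Stage 1 (stand-in): recover an editable 3D mesh from video frames."""
    return Mesh(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])


def edit_mesh(mesh, scale):
    """Stage 2 (stand-in): a user edit on the mesh, here uniform scaling."""
    return Mesh(vertices=[(x * scale, y * scale, z * scale) for x, y, z in mesh.vertices])


def render_control_signals(mesh, num_frames):
    """Stage 3 (stand-in): render the edited mesh into one control signal per frame."""
    return [mesh.vertices] * num_frames


frames = [f"frame_{i:03d}.png" for i in range(16)]
mesh = reconstruct_mesh(frames)
edited = edit_mesh(mesh, scale=1.5)
controls = render_control_signals(edited, num_frames=len(frames))
print(len(controls))  # one control signal per input frame
```

In the actual framework, the rendered signals condition a video diffusion model so that the edit propagates consistently across all frames.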
|
|
|
|
|
|
## BibTeX |
|
|
|
If you find [Shape-for-Motion](https://arxiv.org/abs/2506.22432) useful for your research and applications, please cite using this BibTeX: |
|
|
|
```bibtex
@article{liu2025shape,
  title={Shape-for-Motion: Precise and Consistent Video Editing with 3D Proxy},
  author={Liu, Yuhao and Wang, Tengfei and Liu, Fang and Wang, Zhenwei and Lau, Rynson WH},
  journal={arXiv preprint arXiv:2506.22432},
  year={2025}
}
```
|
|
|
|
|
|
|
## Acknowledgements |
|
|
|
We would like to thank [DG-Mesh](https://github.com/Isabella98Liu/DG-Mesh), [Deformable 3DGS](https://ingra14m.github.io/Deformable-Gaussians), and [Diffusers](https://github.com/huggingface/diffusers) for their open research and exploration. |