Cross-Frame Representation Alignment for Fine-Tuning Video Diffusion Models
Abstract
Cross-frame Representation Alignment (CREPA) enhances video diffusion model fine-tuning by improving visual fidelity and semantic coherence across frames using parameter-efficient methods.
Fine-tuning Video Diffusion Models (VDMs) at the user level to generate videos that reflect specific attributes of training data presents notable challenges, yet remains underexplored despite its practical importance. Meanwhile, recent work such as Representation Alignment (REPA) has shown promise in improving the convergence and quality of DiT-based image diffusion models by aligning their internal hidden states with external pretrained visual features, suggesting its potential for VDM fine-tuning. In this work, we first propose a straightforward adaptation of REPA for VDMs and empirically show that, while effective for convergence, it is suboptimal at preserving semantic consistency across frames. To address this limitation, we introduce Cross-frame Representation Alignment (CREPA), a novel regularization technique that aligns the hidden states of a frame with external features from neighboring frames. Empirical evaluations on large-scale VDMs, including CogVideoX-5B and HunyuanVideo, demonstrate that CREPA improves both visual fidelity and cross-frame semantic coherence when the models are fine-tuned with parameter-efficient methods such as LoRA. We further validate CREPA across diverse datasets with varying attributes, confirming its broad applicability. Project page: https://crepavideo.github.io
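To make the core idea concrete, below is a minimal PyTorch sketch of a CREPA-style regularizer. It assumes per-frame token states from an intermediate DiT block and per-frame features from a frozen pretrained encoder (e.g. DINOv2) with matching token grids; the projection-head architecture, neighbor offsets, and negative-cosine loss form are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CREPARegularizer(nn.Module):
    """Sketch of cross-frame alignment between diffusion hidden states and
    frozen pretrained visual features; offsets=(0,) recovers plain REPA."""

    def __init__(self, hidden_dim: int, feat_dim: int, offsets=(-1, 0, 1)):
        super().__init__()
        # Trainable head projecting DiT hidden states into the external
        # encoder's feature space (architecture is an illustrative choice).
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, feat_dim),
            nn.SiLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.offsets = offsets

    def forward(self, hidden: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # hidden: (B, F, N, hidden_dim) per-frame tokens from a DiT block
        # feats:  (B, F, N, feat_dim)  per-frame tokens from a frozen encoder
        # (assumes F > max offset so every shifted slice is non-empty)
        h = F.normalize(self.proj(hidden), dim=-1)
        z = F.normalize(feats, dim=-1)
        num_frames = h.shape[1]
        losses = []
        for k in self.offsets:
            # Align frame f's projected states with the external features
            # of frame f + k (k = 0 is the within-frame REPA term).
            if k >= 0:
                a, b = h[:, : num_frames - k], z[:, k:]
            else:
                a, b = h[:, -k:], z[:, : num_frames + k]
            losses.append((1.0 - (a * b).sum(dim=-1)).mean())
        return torch.stack(losses).mean()
```

During fine-tuning, this term would simply be added to the usual denoising objective with a small weight, e.g. `loss = denoising_loss + lam * crepa(hidden, feats)` (names hypothetical); setting `offsets=(0,)` reduces it to the straightforward per-frame REPA adaptation discussed above.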
Community
The following papers were recommended by the Semantic Scholar API:
- ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion Models (2025)
- Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models (2025)
- Temporal In-Context Fine-Tuning for Versatile Control of Video Diffusion Models (2025)
- From Generation to Generalization: Emergent Few-Shot Learning in Video Diffusion Models (2025)
- UniAnimate-DiT: Human Image Animation with Large-Scale Video Diffusion Transformer (2025)
- FlowMo: Variance-Based Flow Guidance for Coherent Motion in Video Generation (2025)
- Noise Consistency Regularization for Improved Subject-Driven Image Synthesis (2025)