arxiv:2509.24317

Rethinking JEPA: Compute-Efficient Video SSL with Frozen Teachers

Published on Sep 29 · Submitted by xhl on Sep 30

Abstract

Video Joint Embedding Predictive Architectures (V-JEPA) learn generalizable, off-the-shelf video representations by predicting masked regions in latent space with an exponential moving average (EMA)-updated teacher. While EMA prevents representation collapse, it complicates scalable model selection and couples the teacher and student architectures. We revisit masked-latent prediction and show that a frozen teacher suffices. Concretely, we (i) train a target encoder with a simple pixel-reconstruction objective under V-JEPA masking, then (ii) freeze it and train a student to predict the teacher's latents on masked regions. This leads to a two-stage, unregularized scheme that we refer to as SALT (Static-teacher Asymmetric Latent Training). SALT decouples optimization into pixel reconstruction (teacher) and masked latent prediction (student), increasing transparency, efficiency, and scalability while preserving the representation's ability to generalize under frozen evaluation. Empirically, our student models outperform the recently proposed V-JEPA 2 encoders under frozen-backbone evaluation across diverse benchmarks. They are also more compute-optimal: at matched pretraining FLOPs, our method achieves higher probing accuracy, and its scaling curves dominate V-JEPA's accuracy-FLOPs Pareto frontier. Finally, we find that student quality is remarkably robust to teacher quality: high-performing students emerge even with small, sub-optimal teachers. This suggests that the compute budget should overwhelmingly favor the student. These results position SALT as a simple, scalable, and compute-efficient alternative to EMA-based self-distillation for video representation learning.

AI-generated summary

SALT, a two-stage training method using a frozen teacher, achieves better video representation learning with higher efficiency and scalability than EMA-based approaches.

Community

We propose SALT (Static-teacher Asymmetric Latent Training), a simple, scalable, and compute-efficient alternative to EMA-based self-distillation for video representation learning.

Method

SALT follows a two-stage recipe (a code sketch follows the list):

  1. Teacher training: train an encoder with a pixel-reconstruction objective under V-JEPA–style masking.
  2. Student training: freeze this teacher and train the student to predict its latents on masked regions.
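
The page does not include reference code; the following is a minimal PyTorch sketch of this two-stage recipe under simplifying assumptions. Encoder, pixel_head, predictor, and random_mask are illustrative placeholders (a small MLP over pre-extracted patch tokens and a random token mask), not the paper's video transformer or its actual V-JEPA masking implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH_DIM, LATENT_DIM = 768, 512  # hypothetical token / latent sizes


class Encoder(nn.Module):
    """Stand-in for a video backbone mapping patch tokens to latents."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.GELU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, tokens):  # tokens: (B, N, in_dim)
        return self.net(tokens)


def random_mask(batch, num_tokens, ratio=0.75, device="cpu"):
    """Boolean mask over tokens (True = masked); a stand-in for V-JEPA masking."""
    idx = torch.rand(batch, num_tokens, device=device).topk(
        int(num_tokens * ratio), dim=1
    ).indices
    mask = torch.zeros(batch, num_tokens, dtype=torch.bool, device=device)
    return mask.scatter_(1, idx, True)


# --- Stage 1: train the teacher with masked pixel reconstruction ---
teacher = Encoder(PATCH_DIM, LATENT_DIM)
pixel_head = nn.Linear(LATENT_DIM, PATCH_DIM)  # decodes latents back to pixel tokens
opt_t = torch.optim.AdamW(
    list(teacher.parameters()) + list(pixel_head.parameters()), lr=1e-4
)


def teacher_step(patches):  # patches: (B, N, PATCH_DIM) pixel tokens
    mask = random_mask(patches.size(0), patches.size(1), device=patches.device)
    visible = patches * (~mask).unsqueeze(-1)        # zero out masked tokens
    recon = pixel_head(teacher(visible))
    loss = F.mse_loss(recon[mask], patches[mask])    # reconstruct masked pixels
    opt_t.zero_grad(); loss.backward(); opt_t.step()
    return loss.item()


# --- Stage 2: freeze the teacher, train the student on masked latent prediction ---
teacher.requires_grad_(False)
teacher.eval()
student = Encoder(PATCH_DIM, LATENT_DIM)
predictor = nn.Linear(LATENT_DIM, LATENT_DIM)        # student-side predictor head
opt_s = torch.optim.AdamW(
    list(student.parameters()) + list(predictor.parameters()), lr=1e-4
)


def student_step(patches):
    mask = random_mask(patches.size(0), patches.size(1), device=patches.device)
    with torch.no_grad():
        target = teacher(patches)                    # frozen-teacher latents
    pred = predictor(student(patches * (~mask).unsqueeze(-1)))
    loss = F.mse_loss(pred[mask], target[mask])      # predict latents of masked regions
    opt_s.zero_grad(); loss.backward(); opt_s.step()
    return loss.item()
```

Zeroing out masked tokens here stands in for whatever masking scheme the real architecture uses; the point of the sketch is the decoupling: the teacher is optimized once with a pixel loss, then frozen while the student optimizes only the masked latent-prediction objective.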

Key Findings

Unlike prior work that depends on large pretrained encoders or fine-tuning, SALT demonstrates that:

  • Small, sub-optimal teachers suffice: strong students emerge even from modest frozen teachers.
  • Compute-efficient: SALT achieves a superior accuracy–FLOPs trade-off compared to EMA-based self-distillation, even after accounting for teacher cost.
  • Interpretable model selection: student loss directly predicts downstream accuracy, removing the need for proxy heuristics (see the toy selection rule below).
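
Because the student loss tracks downstream accuracy, checkpoint selection can be reduced to picking the lowest held-out student loss. The toy rule below illustrates this; checkpoints and eval_student_loss are hypothetical placeholders, not the paper's tooling.

```python
# Hypothetical selection rule: rank candidate checkpoints by held-out
# masked-latent (student) loss and keep the lowest, instead of running
# a downstream probe for every candidate.
def select_checkpoint(checkpoints, eval_student_loss):
    """Return the checkpoint whose held-out student loss is lowest."""
    return min(checkpoints, key=eval_student_loss)
```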

