V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning • Paper 2506.09985 • Published 15 days ago
Post: Video fine-tuning support for Facebook's V-JEPA 2 has landed in HF transformers 🔥 It comes with:
- four models fine-tuned on the Diving48 and SSv2 datasets: facebook/v-jepa-2-6841bad8413014e185b497a6
- a FastRTC demo of V-JEPA 2 on SSv2: qubvel-hf/vjepa2-streaming-video-classification
- a fine-tuning script on UCF-101: https://gist.github.com/ariG23498/28bccc737c11d1692f6d0ad2a0d7cddb
- a fine-tuning notebook on UCF-101: https://colab.research.google.com/drive/16NWUReXTJBRhsN3umqznX4yoZt2I7VGc?usp=sharing

We're looking forward to seeing what you will build! 🤗
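Fine-tuning a video classifier on datasets like UCF-101 requires reducing each video to a fixed-length clip of frames before it is fed to the model. A minimal sketch of uniform frame sampling, assuming NumPy; `sample_frame_indices` is a hypothetical helper for illustration, not code from the linked script or notebook:

```python
import numpy as np

def sample_frame_indices(num_video_frames: int, clip_len: int = 16) -> np.ndarray:
    """Uniformly sample `clip_len` frame indices across a video.

    Video classifiers typically expect a fixed number of frames per clip;
    this generic helper spreads the sampled frames evenly over the video.
    """
    # Evenly spaced positions from the first to the last frame,
    # rounded to the nearest integer frame index.
    positions = np.linspace(0, num_video_frames - 1, num=clip_len)
    return positions.round().astype(int)

# Example: pick 16 frames from a 300-frame video.
idx = sample_frame_indices(300, clip_len=16)
```

The sampled indices can then be used to gather frames from a decoded video array before passing the clip through the processor and model.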
Space (running on L4): V-JEPA 2 - Streaming Video Classification 🌍. Run V-JEPA 2 on a video stream for video classification.
V-JEPA 2 Collection: a frontier video understanding model developed by FAIR, Meta, which extends the pretraining objectives of https://ai.meta.com/blog/v-jepa-yann • 8 items • Updated 13 days ago