Many VLMs claim to process hours of video. But can they follow the story?
Today, we introduce TimeScope: a benchmark that separates true temporal understanding from marketing hype. Let's see how much VLMs really understand! ⏳
We test three skills that matter for real-world use:
Localized Retrieval: find a specific action.
🧩 Information Synthesis: piece together scattered clues.
Fine-Grained Perception: analyze detailed motion (e.g., count how many times a person swings an axe).
The results are in, and they're revealing: only Gemini 2.5 Pro holds up on 1-hour-long videos.
Performance drops sharply with duration, showing that long video understanding is still far from solved. We've found the breaking points; now the community can start fixing them.
Want to learn more? TimeScope is 100% open-source. Benchmark your model and help us build the next generation of video AI.
Blog: https://huggingface.co/blog/timescope-video-lmm-benchmark
👩‍💻 Leaderboard & Demo: Apollo-LMMs/TimeScope
Dataset: Apollo-LMMs/TimeScope
Eval Code: https://github.com/EvolvingLMMs-Lab/lmms-eval