SmolVLM: Redefining small and efficient multimodal models Paper • 2504.05299 • Published Apr 2025 • 158
Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers Paper • 2503.11579 • Published Mar 14 • 18
Comics Pick-A-Panel Collection • Dataset, Models and Paper from ComicsPAP: understanding comic strips by picking the correct panel • 4 items • Updated Mar 14 • 3
HoloMine: A Synthetic Dataset for Buried Landmines Recognition using Microwave Holographic Imaging Paper • 2502.21054 • Published Feb 28
ComicsPAP: understanding comic strips by picking the correct panel Paper • 2503.08561 • Published Mar 11 • 2
R1-Omni: Explainable Omni-Multimodal Emotion Recognition with Reinforcing Learning Paper • 2503.05379 • Published Mar 7 • 34
R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model Paper • 2503.05132 • Published Mar 7 • 55
Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models Paper • 2503.06749 • Published Mar 9 • 27
Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs Paper • 2503.01743 • Published Mar 3 • 83