VL-Cogito: Progressive Curriculum Reinforcement Learning for Advanced Multimodal Reasoning
Abstract
VL-Cogito, a multimodal reasoning model, uses a Progressive Curriculum Reinforcement Learning framework to improve performance across diverse tasks by dynamically adjusting difficulty and reasoning path length.
Reinforcement learning has proven effective at enhancing the reasoning capabilities of large language models, and recent research has progressively extended this paradigm to multimodal reasoning tasks. Because multimodal tasks are inherently complex and diverse in both semantic content and problem formulation, existing models often exhibit unstable performance across domains and difficulty levels. To address these limitations, we propose VL-Cogito, an advanced multimodal reasoning model trained via a novel multi-stage Progressive Curriculum Reinforcement Learning (PCuRL) framework. PCuRL systematically guides the model through tasks of gradually increasing difficulty, substantially improving its reasoning abilities across diverse multimodal contexts. The framework introduces two key innovations: (1) an online difficulty soft weighting mechanism, which dynamically adjusts training difficulty across successive RL stages; and (2) a dynamic length reward mechanism, which encourages the model to adaptively regulate its reasoning path length according to task complexity, balancing reasoning efficiency with correctness. Experimental evaluations demonstrate that VL-Cogito consistently matches or surpasses existing reasoning-oriented models on mainstream multimodal benchmarks spanning mathematics, science, logic, and general understanding, validating the effectiveness of our approach.
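To make the first mechanism concrete, below is a minimal sketch of how an online difficulty soft weighting could work. The Gaussian kernel, the use of the batch pass rate as a difficulty proxy, and the per-stage targets are illustrative assumptions, not the paper's exact formulation.

```python
import math

def difficulty_soft_weight(pass_rate: float, target: float, tau: float = 0.5) -> float:
    """Soft weight for a training sample derived from its online pass rate
    (fraction of correct rollouts in the current batch).

    Samples whose empirical difficulty sits near the current stage's target
    get weight ~1; easier and harder samples are down-weighted smoothly
    instead of being filtered out. The Gaussian kernel is an assumption
    for illustration only.
    """
    return math.exp(-((pass_rate - target) ** 2) / tau)

# Hypothetical curriculum: each RL stage lowers the target pass rate,
# steering training weight toward progressively harder samples.
stage_targets = [0.9, 0.5, 0.2]

for stage, target in enumerate(stage_targets):
    for pass_rate in (0.1, 0.5, 0.9):
        w = difficulty_soft_weight(pass_rate, target)
        print(f"stage {stage}: pass_rate={pass_rate:.1f} -> weight={w:.2f}")
```

In a scheme like this, the per-stage target acts as the curriculum knob: early stages emphasize samples the model already solves often, while later stages shift weight toward low-pass-rate samples.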
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning (2025)
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning (2025)
- Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning (2025)
- Multimodal Mathematical Reasoning with Diverse Solving Perspective (2025)
- GHPO: Adaptive Guidance for Stable and Efficient LLM Reinforcement Learning (2025)
- The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs (2025)
- EFRame: Deeper Reasoning via Exploration-Filter-Replay Reinforcement Learning Framework (2025)
I have a question about Equation 6 in the paper:
This expression attains its maximum when $L_i = 0$, but attains its minimum when $L_i = L_{\text{tgt}}$.
Could it be missing a negative sign?
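For concreteness, here is a hedged sketch of the sign issue ($L_i$ is the generated response length and $L_{\text{tgt}}$ the target length; the exact form of Equation 6 is not reproduced here). On $0 \le L_i \le L_{\text{tgt}}$, an unsigned term such as

$$ r(L_i) = \frac{\lvert L_i - L_{\text{tgt}} \rvert}{L_{\text{tgt}}} $$

attains its maximum at $L_i = 0$ and its minimum at $L_i = L_{\text{tgt}}$, which matches the behavior described above, whereas a length-matching reward would presumably want the negated form

$$ r(L_i) = 1 - \frac{\lvert L_i - L_{\text{tgt}} \rvert}{L_{\text{tgt}}}, $$

which instead peaks at $L_i = L_{\text{tgt}}$.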