======================================
Skywork-R1V3
======================================

πŸ“– R1V3 Report | πŸ’» GitHub


1. Model Introduction

Skywork-R1V3-38B is the latest and most powerful open-source multimodal reasoning model in the Skywork-R1V series. Built on InternVL3-38B, it significantly pushes the boundaries of multimodal and cross-disciplinary intelligence. Its enhanced reasoning ability comes mainly from reinforcement learning (RL) in post-training, and it achieves open-source state-of-the-art (SOTA) performance across numerous multimodal reasoning benchmarks.

2. Technical Highlights

Skywork-R1V3 is an advanced, open-source Vision-Language Model (VLM) built on several core innovations:

  • Refined Post-Training RL: Instead of relying on reasoning pre-training, our fine-grained cold-start finetuning effectively primes the model for Reinforcement Learning (RL), which dramatically enhances its reasoning ability.

  • Essential Connector Module: We uncover the critical role of the connector module in achieving robust cross-modal alignment for strong multimodal reasoning. Moreover, connector-only finetuning can further boost the model's performance after RL (a minimal sketch appears at the end of this section).

  • Entropy of Critical Reasoning Tokens: This unique indicator effectively gauges reasoning capability and guides checkpoint selection during RL training.
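
As a rough illustration of this indicator, the sketch below measures the mean entropy of the model's next-token distribution at positions that emit reasoning-critical tokens. The token filter here (connectives such as "wait" and "therefore") is a hypothetical stand-in; the selection criterion actually used in R1V3 is described in the report.

import torch
import torch.nn.functional as F

# Illustrative markers of critical reasoning steps; the criterion used in
# R1V3 is defined in the report, this set is only a placeholder.
CRITICAL_WORDS = {"wait", "therefore", "thus", "so", "because"}

@torch.no_grad()
def critical_token_entropy(model, tokenizer, input_ids):
    """Mean next-token entropy at positions whose target is a critical token."""
    logits = model(input_ids=input_ids).logits           # (1, seq_len, vocab)
    log_probs = F.log_softmax(logits.float(), dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(-1)[0]  # (seq_len,)
    targets = input_ids[0, 1:]  # token actually emitted at each step
    mask = torch.tensor(
        [tokenizer.decode([int(t)]).strip().lower() in CRITICAL_WORDS for t in targets],
        device=entropy.device,
    )
    # entropy[t] is the model's uncertainty when predicting targets[t].
    return entropy[:-1][mask].mean() if mask.any() else entropy.mean()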

These innovations lead to Broad Reasoning Generalization, allowing our RL-powered approach to extend mathematical reasoning to diverse subject areas. Our work also explores RL-specific topics such as curriculum learning and learning-rate strategies, alongside a broader discussion of multimodal reasoning. For more details, refer to our [πŸ“– R1V3 Report].
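
To make connector-only finetuning concrete, here is a minimal sketch. It assumes an InternVL-style model where the vision-to-language projector is exposed as mlp1 (the InternVL attribute name); other backbones name this module differently.

import torch

def freeze_all_but_connector(model):
    """Freeze the vision encoder and LLM; leave only the connector trainable."""
    for p in model.parameters():
        p.requires_grad = False
    # InternVL exposes the vision-to-language projector as `mlp1`;
    # adjust the attribute name for other architectures.
    for p in model.mlp1.parameters():
        p.requires_grad = True
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"Trainable parameters: {trainable / 1e6:.1f}M")
    return model

# Usage: after loading the checkpoint, optimize only the unfrozen weights, e.g.
# model = freeze_all_but_connector(model)
# optimizer = torch.optim.AdamW(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-5)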

3. Evaluation

🌟 Key Results

  • MMMU: 76.0
  • EMMA-Mini (CoT): 40.3
  • MMK12: 78.5
  • Physics Reasoning: PhyX-MC-TM (52.8), SeePhys (31.5)
  • Logic Reasoning: MME-Reasoning (42.8), VisuLogic (28.5)
  • Math Benchmarks: MathVista (77.1), MathVerse (59.6), MathVision (52.6)

4. Usage

For the detailed inference code and evaluation scripts, please refer to our GitHub.

Run the Inference Script

HF (Transformers) inference

import argparse

import torch
from transformers import AutoModel, AutoTokenizer
# load_image and split_model are provided alongside this script in our GitHub repo (utils.py).
from utils import load_image, split_model

def main():
    parser = argparse.ArgumentParser(description="Run inference with Skywork-R1V model.")
    parser.add_argument('--model_path', type=str, default='Skywork/Skywork-R1V3-38B', help="Path to the model.")
    parser.add_argument('--image_paths', type=str, nargs='+', required=True, help="Path(s) to the image(s).")
    parser.add_argument('--question', type=str, required=True, help="Question to ask the model.")
    args = parser.parse_args()

    # split_model (from the repo's utils) builds a device map to shard the model across GPUs.
    device_map = split_model(args.model_path)
    model = AutoModel.from_pretrained(
        args.model_path,
        torch_dtype=torch.bfloat16,
        load_in_8bit=False,
        low_cpu_mem_usage=True,
        use_flash_attn=True,
        trust_remote_code=True,
        device_map=device_map
    ).eval()
    tokenizer = AutoTokenizer.from_pretrained(args.model_path, trust_remote_code=True, use_fast=False)

    # load_image (from the repo's utils) tiles each image into at most 12 patches.
    pixel_values = [load_image(img_path, max_num=12).to(torch.bfloat16).cuda() for img_path in args.image_paths]
    if len(pixel_values) > 1:
        # Multi-image input: record per-image patch counts, then concatenate.
        num_patches_list = [img.size(0) for img in pixel_values]
        pixel_values = torch.cat(pixel_values, dim=0)
    else:
        pixel_values = pixel_values[0]
        num_patches_list = None

    # One <image> placeholder per input image, followed by the question.
    prompt = "<image>\n" * len(args.image_paths) + args.question
    # Large max_new_tokens leaves room for long chain-of-thought outputs.
    generation_config = dict(max_new_tokens=64000, do_sample=True, temperature=0.6, top_p=0.95, repetition_penalty=1.05)
    response = model.chat(tokenizer, pixel_values, prompt, generation_config, num_patches_list=num_patches_list)

    print(f'User: {args.question}\nAssistant: {response}')

if __name__ == '__main__':
    main()
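
Assuming the script above is saved as inference.py (the filename is arbitrary), a typical invocation looks like:

python inference.py --image_paths example.png --question "Solve the problem shown in the image."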

vLLM inference

python -m vllm.entrypoints.openai.api_server \
    --model $MODEL_PATH \
    --max_model_len 32768 \
    --limit-mm-per-prompt "image=20" \
    --tensor-parallel-size $N_GPU \
    --dtype auto \
    --trust-remote-code
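
Once the server is running, it exposes an OpenAI-compatible API (port 8000 by default). A minimal client call might look like the following; the image URL is a placeholder, and the model name must match the served --model path.

from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint; no real API key is needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Skywork/Skywork-R1V3-38B",  # must match --model on the server
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/problem.png"}},
            {"type": "text", "text": "Solve the problem shown in the image step by step."},
        ],
    }],
    temperature=0.6,
    top_p=0.95,
    max_tokens=8192,
)
print(response.choices[0].message.content)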

5. Citation

If you use Skywork-R1V in your research, please cite:

@misc{shen2025skyworkr1v3technicalreport,
      title={Skywork-R1V3 Technical Report}, 
      author={Wei Shen and Jiangbo Pei and Yi Peng and Xuchen Song and Yang Liu and Jian Peng and Haofeng Sun and Yunzhuo Hao and Peiyu Wang and Jianhao Zhang and Yahui Zhou},
      year={2025},
      eprint={2507.06167},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.06167}, 
}

6. License

This project is released under the MIT License. It uses InternVL3-38B as the base model, which is also licensed under the MIT License.
