# lihaoxin2020/Qwen3-4B-Thinking-GSM8K-PRIME-global_step_420
This repository was uploaded programmatically.
## Usage

Load the model with `transformers` (or your library of choice) via `from_pretrained()`. Since this is a causal language model, use `AutoModelForCausalLM` rather than the bare `AutoModel`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("lihaoxin2020/Qwen3-4B-Thinking-GSM8K-PRIME-global_step_420")
tokenizer = AutoTokenizer.from_pretrained("lihaoxin2020/Qwen3-4B-Thinking-GSM8K-PRIME-global_step_420")
```
If you're using a custom library, ensure your loader expects the same files at the repository root: `config.json`, weight files such as `pytorch_model.bin` or `model.safetensors`, and any tokenizer assets.