Tags: Text Generation · Transformers · Safetensors · qwen2 · conversational · text-generation-inference
quyanh and nielsr (HF staff) committed on commit 94a8bd9 (verified) · 1 parent: 2fce0b1

Add library_name and improve model card description (#2)


- Add library_name and improve model card description (d07d7befb44fefbdda800311e7e87ab128d9c2c6)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+11 -20)
README.md CHANGED
@@ -1,28 +1,20 @@
 ---
+base_model:
+- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
+datasets:
+- knoveleng/open-rs
+- knoveleng/open-s1
+- knoveleng/open-deepscaler
+license: mit
 pipeline_tag: text-generation
 inference: true
-license: mit
-datasets:
-- knoveleng/open-rs
-- knoveleng/open-s1
-- knoveleng/open-deepscaler
-base_model:
-- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
+library_name: transformers
 ---
 
 # Model Summary
 
-This repository hosts model for the **Open RS** project, accompanying the paper *Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn’t*. The project explores enhancing reasoning capabilities in small large language models (LLMs) using reinforcement learning (RL) under resource-constrained conditions.
-
-We focus on a 1.5-billion-parameter model, `DeepSeek-R1-Distill-Qwen-1.5B`, trained on 4 NVIDIA A40 GPUs (48 GB VRAM each) within 24 hours. By adapting the Group Relative Policy Optimization (GRPO) algorithm and leveraging a curated, compact mathematical reasoning dataset, we conducted three experiments to assess performance and behavior. Key findings include:
-
-- Significant reasoning improvements, e.g., AMC23 accuracy rising from 63% to 80% and AIME24 reaching 46.7%, outperforming `o1-preview`.
-- Efficient training with just 7,000 samples at a cost of $42, compared to thousands of dollars for baseline models.
-- Challenges like optimization instability and length constraints with extended training.
-
-These results showcase RL-based fine-tuning as a cost-effective approach for small LLMs, making reasoning capabilities accessible in resource-limited settings. We open-source our code, models, and datasets to support further research.
+This model enhances the reasoning capabilities of the small 1.5B parameter `DeepSeek-R1-Distill-Qwen-1.5B` LLM using reinforcement learning (RL). Trained efficiently on 4 A40 GPUs in under 24 hours, it achieves significant gains in mathematical reasoning benchmarks (e.g., 80% accuracy on AMC23, 46.7% on AIME24, surpassing `o1-preview`). This cost-effective approach demonstrates the potential of RL for boosting reasoning in resource-constrained settings.
 
-For more details, please refer our [github](https://github.com/knoveleng/open-rs).
 
 ## Evaluation
 ### Performance Highlights
@@ -34,9 +26,7 @@ For more details, please refer our [github](https://github.com/knoveleng/open-rs
 ![Performance Metrics](assets/performances.png)
 
 ### Cost Efficiency
-Our approach uses 7,000 samples (42,000 total outputs) and costs ~$42 on 4x A40 GPUs in 24 hours, compared to:
-- 7B models: `Qwen2.5-7B-SimpleRL` ($1,633), `Eurus-2-7B-PRIME` ($1,088)
-- 1.5B models: `DeepScaleR-1.5B-Preview` ($3,629), `Still-3-1.5B-Preview` ($2,268)
+Our approach uses 7,000 samples (42,000 total outputs) and costs ~$42 on 4x A40 GPUs in 24 hours, compared to thousands of dollars for baseline models.
 
 ![7B Model Costs](assets/costs-7b.png)
 ![1.5B Model Costs](assets/costs-1.5b.png)
@@ -56,3 +46,4 @@ If this project aids your work, please cite it as:
 }
 ```
 
+For more details, including usage instructions and further evaluation results, please refer to our [GitHub repository](https://github.com/knoveleng/open-rs).
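The card text quoted above credits the gains to an adaptation of Group Relative Policy Optimization (GRPO), which scores each sampled completion relative to the other completions drawn for the same prompt rather than against a learned value baseline. A minimal sketch of that core normalization step only — not the project's actual implementation; the group size and reward values below are made up:

```python
def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each reward by the mean and
    standard deviation of its own sampling group (illustrative sketch)."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Hypothetical rewards for 6 completions sampled for one math prompt
# (e.g. 1.0 = correct final answer, 0.0 = incorrect).
advs = group_relative_advantages([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
```

Because the baseline comes from the group itself, no separate value network is needed, which is part of why this style of RL fits the small GPU budget the card describes.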
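The cost figures in the diff are internally consistent and can be sanity-checked with simple arithmetic; the per-GPU-hour rate below is inferred from the stated totals, not quoted anywhere in the card:

```python
# Figures stated in the model card diff above.
samples = 7_000
total_outputs = 42_000
completions_per_sample = total_outputs // samples  # sampled outputs per prompt

gpu_hours = 4 * 24    # 4 x A40 GPUs for 24 hours
total_cost = 42.0     # dollars, as stated
rate_per_gpu_hour = total_cost / gpu_hours  # implied rental rate (inferred)
```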