|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
base_model: Qwen/Qwen2.5-7B-Instruct |
|
tags: |
|
- llama-factory |
|
- full |
|
- generated_from_trainer |
|
model-index: |
|
- name: OpenThinker2-7B |
|
results: [] |
|
datasets: |
|
- open-thoughts/OpenThoughts2-1M |
|
--- |
|
|
|
<p align="center"> |
|
<img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%"> |
|
</p> |
|
|
|
# OpenThinker2-7B |
|
|
|
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the |
|
[OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset. |
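A minimal usage sketch with the `transformers` library (the prompt and generation settings below are illustrative only, not the settings used for evaluation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker2-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The model is Qwen2.5-based, so the tokenizer's chat template builds the prompt.
messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning traces can be long, so allow a generous generation budget.
output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```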
|
|
|
The [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B) model is the top 7B open-data reasoning model. It delivers performance comparable to state-of-the-art 7B models such as [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) across a suite of reasoning tasks.
|
This model improves upon our previous [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) model, which was trained on 114k examples from [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k). |
|
The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy). |
|
|
|
| Model | Open data | AIME24 | AIME25 | AMC23 | MATH500 | GPQA-D | LCBv2 |
| --------------------------------------------------------------------------------------------- | --------- | ------ | ------ | ----- | ------- | ------ | ----- |
| [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B)                        | ✅        | 50.0   | 33.3   | 89.5  | 88.4    | 49.3   | 55.6  |
| [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B)                          | ✅        | 31.3   | 23.3   | 74.5  | 83.2    | 42.9   | 38.0  |
| [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)  | ❌        | 57.3   | 33.3   | 92.0  | 89.6    | 47.3   | 48.4  |
| [OlympicCoder-7B](https://huggingface.co/open-r1/OlympicCoder-7B)                              | ✅        | 20.7   | 15.3   | 63.0  | 74.8    | 25.3   | 55.4  |
| [OpenR1-Qwen-7B](https://huggingface.co/open-r1/OpenR1-Qwen-7B)                                | ✅        | 48.7   | 34.7   | 88.5  | 87.8    | 21.2   | 9.5   |
|
|
|
## Data |
|
|
|
This model was trained on the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset. |
|
|
|
The [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset was constructed by augmenting [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k) with existing datasets like [OpenR1](https://huggingface.co/open-r1), as well as additional math and code reasoning data. |
|
We generated the additional math and code data by ablating over 26 different question-generation methodologies and sampling from the highest-performing ones.
|
|
|
See the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset page or our [blog post](https://www.open-thoughts.ai/blog/thinkagain) for additional information. |
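A small sketch for inspecting the dataset with the `datasets` library (streaming so the full 1M examples are not downloaded up front; the exact column schema is whatever the dataset card defines):

```python
from datasets import load_dataset

# Stream the training split of OpenThoughts2-1M.
ds = load_dataset("open-thoughts/OpenThoughts2-1M", split="train", streaming=True)

# Inspect the first example; see the dataset card for the column schema.
example = next(iter(ds))
print(example.keys())
```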
|
|
|
|
|
## Intended uses & limitations |
|
|
|
Apache 2.0 License |
|
|
|
|
|
## Training procedure |
|
|
|
We trained the model for 36 hours on 32 nodes with 8x A100 GPUs each (256 GPUs in total).
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
|
- learning_rate: 8e-05 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 256 |
|
- gradient_accumulation_steps: 2 |
|
- total_train_batch_size: 512 |
|
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_ratio: 0.1 |
|
- num_epochs: 5.0 |
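For orientation, the list above implies a per-device batch size of 1 (512 total = 256 devices × 1 per device × 2 accumulation steps). Below is a rough `transformers.TrainingArguments` equivalent; the actual run used LLaMA-Factory, and the output path and bf16 flag are assumptions:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters above
# (not the actual LLaMA-Factory config used for training).
args = TrainingArguments(
    output_dir="openthinker2-7b-sft",   # placeholder output path
    learning_rate=8e-5,
    seed=42,
    per_device_train_batch_size=1,      # inferred: 512 / (256 devices * 2 accumulation steps)
    gradient_accumulation_steps=2,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    bf16=True,                          # assumption: mixed-precision training on A100s
)
```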
|
|
|
### Framework versions |
|
|
|
- Transformers 4.46.1 |
|
- Pytorch 2.3.0 |
|
- Datasets 3.1.0 |
|
- Tokenizers 0.20.3 |
|
|
|
More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts). |
|
|
|
# Citation |
|
``` |
|
@misc{openthoughts, |
|
author = {Team, OpenThoughts}, |
|
month = jan, |
|
title = {{Open Thoughts}}, |
|
howpublished = {https://open-thoughts.ai}, |
|
year = {2025} |
|
} |
|
``` |
|
|
|
# Links |
|
- 📝 [OpenThoughts2 and OpenThinker2 Blog Post](https://www.open-thoughts.ai/blog/thinkagain)
|
- 💻 [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
|
- 🧠 [OpenThoughts2-1M dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M)
|
- 🤖 [OpenThinker2-7B model](https://huggingface.co/open-thoughts/OpenThinker2-7B) - this model.
|
- 🤖 [OpenThinker2-32B model](https://huggingface.co/open-thoughts/OpenThinker2-32B)
|
|