Triangle104 committed (verified)
Commit 0794dee · 1 Parent(s): d5e1303

Update README.md

Files changed (1): README.md (+8 -6)
README.md CHANGED
@@ -20,15 +20,17 @@ This model was converted to GGUF format from [`open-thoughts/OpenThinker-32B`](h
 Refer to the [original model card](https://huggingface.co/open-thoughts/OpenThinker-32B) for more details on the model.
 
 ---
-This model is a fine-tuned version of Qwen/Qwen2.5-32B-Instruct on the
-OpenThoughts-114k dataset.
+This model is a fine-tuned version of Qwen/Qwen2.5-32B-Instruct on the OpenThoughts-114k dataset.
 
+The dataset is derived by distilling DeepSeek-R1 using the data pipeline available on github. More info about the dataset can be found on the dataset card at OpenThoughts-114k dataset.
 
-The dataset is derived by distilling DeepSeek-R1 using the data pipeline available on github.
-More info about the dataset can be found on the dataset card at OpenThoughts-114k dataset.
+Intended uses & limitations
+-
+Apache 2.0 License
 
-
-The numbers reported in the table below are evaluated with our open-source tool Evalchemy.
+Training procedure
+-
+We finetune Qwen2.5-32B-Instruct on OpenThoughts-114k for 3 epochs with a 16k context length using LlamaFactory. Our full training configuration is provided in our repository. Training the 32B model on OpenThoughts-114k was done on AWS SageMaker with 8xH100 P5 nodes. On 4 nodes, this took around 90 hours. Meanwhile, for training on OpenThoughts-Unverified-173k, we used 96 nodes of 4xA100 (64 GB per GPU), training took 30 hours, spending 11,520 A100 hours on the Leonardo Supercomputer.
 
 ---
 ## Use with llama.cpp
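
For reference, here is a minimal sketch of loading the converted GGUF from Python with llama-cpp-python, as an alternative to the llama.cpp CLI flow that the `## Use with llama.cpp` section covers. The repo id `Triangle104/OpenThinker-32B-GGUF`, the quant filename pattern, and the context size are illustrative assumptions, not taken from this commit; substitute the actual repository and file you downloaded.

```python
# Hedged sketch: run the GGUF conversion via llama-cpp-python instead of the llama.cpp CLI.
# The repo id and filename glob below are hypothetical placeholders for this conversion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/OpenThinker-32B-GGUF",  # assumption: replace with the actual GGUF repo
    filename="*q4_k_m.gguf",                     # assumption: pick the quant file you want
    n_ctx=16384,                                 # matches the 16k context length used in training
    verbose=False,
)

# Chat-style generation against the loaded model.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain why sqrt(2) is irrational."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```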