Nohobby committed · Commit 445609c · verified · 1 Parent(s): 61c8ec0

Update README.md

Files changed (1)
  1. README.md +2 -78
README.md CHANGED
@@ -1,79 +1,3 @@
---
license: apache-2.0
datasets:
- simplescaling/s1K
language:
- en
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
base_model_relation: finetune
---

# Mistral-Small-3-Reasoner-s1
A simple [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) finetune on the LIMA-like [s1K reasoning dataset by Muennighoff et al.](https://huggingface.co/datasets/simplescaling/s1K) to give the original model basic reasoning capabilities within `<think>` tags, like DeepSeek R1. Surprisingly, the model can reason even outside math/STEM subjects.

## Usage notes
Prepend the assistant response with `<think>` to make the model engage in a chain of thought. This should happen automatically with math questions on an empty context, but it needs to be forced in longer conversations. When done thinking, the model will generate `</think>` and then the final response.

Make sure that the model's output length is long enough; be prepared to make it continue its response if it stops prematurely.

Low-depth instructions (perhaps at depth 0, just before the assistant's response) can be beneficial in steering how the model should think. An additional `[SYSTEM_PROMPT]` could be used there.

From tests, it seems beneficial to keep at least one chain of thought in the context in addition to the one being generated. More experimentation is required here.

## Prompting format
```
[INST]User question.[/INST]<think>
Chain of thought.
</think>

Model response.</s>
```
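
For example, the template above can be assembled by hand and `<think>` prefilled to force a chain of thought. The following is a minimal sketch assuming the standard `transformers` generation API; the repo id, question, and token budget are illustrative:

```python
# Minimal sketch: build the Mistral-style prompt manually and prefill
# "<think>" so the model opens with a chain of thought.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemonilia/Mistral-Small-3-Reasoner-s1"  # illustrative repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "How many positive divisors does 360 have?"
prompt = f"[INST]{question}[/INST]<think>\n"  # prefilled assistant turn

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Reasoning traces can run long, so leave generous output headroom.
outputs = model.generate(**inputs, max_new_tokens=4096)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

If generation stops before `</think>` appears, the same continuation trick applies: feed the partial output back in and generate again.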

## Observed quirks and issues
- Assistant responses may lack the final punctuation mark, probably as a result of how the source training data was formatted.
- Not a true issue, but system-prompt information that contradicts the chat history, or that is only meant to be temporarily valid, can cause coherency problems, since the model follows instructions very precisely.
- Without user control over chain-of-thought length, the model can ramble for several thousand tokens.
- Besides multi-turn capabilities, other non-reasoning capabilities of the original `Mistral-Small-24B-Instruct-2501` model might have degraded.
- Most default guardrails apparently still work, but they can be bypassed very easily with a suitable prompt, as with the original model.

# What's in this repository
- Checkpoints for epochs 1~5
- LoRA adapter for the final model
- Static GGUF quantizations
- HF FP16 weights

## Dataset
Almost the entirety of the [s1K dataset](https://huggingface.co/datasets/simplescaling/s1K) was used, with minimal modifications to make it work properly with Mistral-Small-3-Instruct. Four rows that didn't fit within the training sequence length of 8192 tokens were excluded, and the 16 shortest rows were held out as the test set instead. No samples were clipped, no system prompt was added, and all samples were single-turn.
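
A rough sketch of that filtering and split might look as follows; the column names and prompt assembly are assumptions, not the actual preprocessing script:

```python
# Hypothetical preprocessing sketch: drop rows over 8192 tokens and hold
# out the 16 shortest rows as a test set. Column names are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-24B-Instruct-2501")
ds = load_dataset("simplescaling/s1K", split="train")

def add_length(row):
    text = (f"[INST]{row['question']}[/INST]<think>\n"
            f"{row['thinking_trajectories'][0]}\n</think>\n\n{row['solution']}")
    return {"n_tokens": len(tokenizer(text).input_ids)}

ds = ds.map(add_length)
ds = ds.filter(lambda r: r["n_tokens"] <= 8192)  # removes the 4 overlong rows
ds = ds.sort("n_tokens")
test_set = ds.select(range(16))                  # the 16 shortest rows
train_set = ds.select(range(16, len(ds)))
```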

## Training hyperparameters
I tried to roughly follow the indications in [appendix C of the paper](https://arxiv.org/abs/2501.19393), with the notable exception of using 4-bit LoRA finetuning and tuning the learning rate accordingly. The loss was not computed on questions within `[INST]...[/INST]` tags (including the tags), just on reasoning traces and solutions. The training sequence length was close to the maximum I could use on one NVIDIA RTX 3090 24GB GPU. The total training time was about 18 hours.
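
A minimal sketch of that loss masking, assuming a Hugging Face-style `labels` convention where positions set to `-100` are ignored by the cross-entropy loss (the helper below is illustrative, not the actual training code):

```python
# Illustrative loss-masking sketch: ignore everything up to and including
# "[/INST]" so the loss covers only the reasoning trace and the solution.
def mask_prompt_labels(tokenizer, full_text):
    input_ids = tokenizer(full_text).input_ids
    labels = list(input_ids)
    # Approximate the prompt's token length by re-tokenizing the prefix.
    prompt_end = full_text.index("[/INST]") + len("[/INST]")
    prompt_tokens = len(tokenizer(full_text[:prompt_end]).input_ids)
    for i in range(min(prompt_tokens, len(labels))):
        labels[i] = -100  # -100 positions contribute nothing to the loss
    return input_ids, labels
```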

Overfitting (increasing eval loss) occurred after 1 epoch, but the training loss behavior was similar to that observed in the paper.

```python
lora_r = 32
lora_alpha = 64
lora_dropout = 0.05
load_in_4bit = True
max_seq_length = 8192
gradient_accumulation_steps = 16
eval_accumulation_steps = 16
learning_rate = 0.000400
weight_decay = 1e-4
num_train_epochs = 5
warmup_ratio = 0.05
lr_scheduler_type = "cosine"
optim = "adamw_8bit"
```
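
The variable names suggest an Unsloth/PEFT-style setup. As a hedged illustration, the values above would map onto `peft`/`trl` roughly like this; `target_modules` and `output_dir` are assumptions, and this is not the author's actual training script:

```python
# Hypothetical mapping of the hyperparameters above onto peft/trl.
from peft import LoraConfig
from trl import SFTConfig

peft_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

training_args = SFTConfig(
    output_dir="mistral-small-reasoner-s1",  # assumption
    max_seq_length=8192,
    gradient_accumulation_steps=16,
    eval_accumulation_steps=16,
    learning_rate=4e-4,
    weight_decay=1e-4,
    num_train_epochs=5,
    warmup_ratio=0.05,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",
)
```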

## Training Graphs
![train loss](https://files.catbox.moe/rt4rg7.png)
This appears to more or less correspond to the general trend and values seen in appendix C of the paper.

![eval loss](https://files.catbox.moe/5uot9p.png)

![grad norm](https://files.catbox.moe/gspthh.png)

![learning rate](https://files.catbox.moe/6z54dd.png)
Some hyperparameters, including the learning rate, were changed before the end of the first epoch to better match those of the paper.
 
Just a repo clone to use directly in mergekit.

OG: https://huggingface.co/lemonilia/Mistral-Small-3-Reasoner-s1
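
For reference, a mergekit config consuming such a clone might look like the sketch below; the merge method, partner model, and parameters are purely illustrative assumptions, not a recommendation:

```yaml
# Illustrative only: slerp-interpolate between the instruct base and the
# reasoner finetune; method and weights are placeholders.
merge_method: slerp
base_model: mistralai/Mistral-Small-24B-Instruct-2501
models:
  - model: mistralai/Mistral-Small-24B-Instruct-2501
  - model: lemonilia/Mistral-Small-3-Reasoner-s1
parameters:
  t: 0.5
dtype: bfloat16
```

A config like this would then be run with `mergekit-yaml config.yml ./merged-model`.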