Update README.md
README.md CHANGED
@@ -85,13 +85,14 @@ Note: vLLM for OLMo2 32B does not correctly handle attention when the number of
 
 ### Fine-tuning
 Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
-1. Fine-tune with the OLMo repository:
+1. Fine-tune with the OLMo-core repository:
 ```bash
-
+torchrun --nproc-per-node=8 ./src/scripts/official/OLMo2-0325-32B-train.py run01
 ```
-
+You can override most configuration options from the command-line. For example, to override the learning rate you could launch the script like this:
+
 ```bash
-
+torchrun --nproc-per-node=8 ./src/scripts/train/OLMo2-0325-32B-train.py run01 --train_module.optim.lr=6e-3
 ```
 For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo-core).
 
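The script paths in the recipe above are relative, so the `torchrun` command assumes it is run from the root of an OLMo-core checkout. A minimal setup sketch, assuming a standard editable install; the exact dependencies and extras are documented in the OLMo-core readme:

```bash
# Fetch the training code; the torchrun command below expects to run
# from the repository root, since the script path is relative.
git clone https://github.com/allenai/OLMo-core.git
cd OLMo-core

# Editable install of the package and its dependencies (assumed standard
# pyproject-based install; check the repo readme for any required extras).
pip install -e .

# Launch the official 32B fine-tuning script on one 8-GPU node,
# naming the run "run01".
torchrun --nproc-per-node=8 ./src/scripts/official/OLMo2-0325-32B-train.py run01
```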
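The learning-rate override in the diff illustrates a general dotted-path pattern: `--<config.path>=<value>` maps onto nested fields of the script's training config. A sketch of stacking several overrides in one launch; only `train_module.optim.lr` comes from the README, and the second option path is hypothetical, shown purely to illustrate the syntax and needing verification against the script's actual config:

```bash
# Override the learning rate (taken from the README) plus a second,
# hypothetical option path included only to show the dotted-path syntax.
torchrun --nproc-per-node=8 ./src/scripts/train/OLMo2-0325-32B-train.py run01 \
  --train_module.optim.lr=6e-3 \
  --data_loader.global_batch_size=1024  # hypothetical path; verify before use
```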