Imadken/mistral-7b-platypus-lamini-vxxiii-chat-real_augmented_costumer-dpo
Files changed:
- README.md (+1 -1)
- adapter_model.safetensors (+1 -1)
- training_args.bin (+1 -1)

README.md CHANGED
@@ -56,7 +56,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 100
-- training_steps:
+- training_steps: 800
 - mixed_precision_training: Native AMP

 ### Training results
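For reference, a minimal sketch of how the hyperparameters listed in this hunk could map onto a `transformers.TrainingArguments` object. This is not the repository's actual training script: the learning rate, batch size, and any DPO-specific settings are not visible in this diff, so those values are assumptions.

```python
# Minimal sketch (not the repository's training code): the hyperparameters
# shown in the README hunk mapped onto transformers.TrainingArguments.
# learning_rate and per_device_train_batch_size are NOT in this diff and are
# placeholders; only the commented values below come from the hunk itself.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7b-platypus-lamini-vxxiii-chat-real_augmented_costumer-dpo",
    learning_rate=5e-5,             # assumption: not shown in this hunk
    per_device_train_batch_size=2,  # assumption: not shown in this hunk
    optim="adamw_torch",            # optimizer: Adam with betas=(0.9,0.999), epsilon=1e-08
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",     # lr_scheduler_type: cosine
    warmup_steps=100,               # lr_scheduler_warmup_steps: 100
    max_steps=800,                  # training_steps: 800 (the value added in this commit)
    fp16=True,                      # mixed_precision_training: Native AMP
)
```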
adapter_model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:06373a9219f4fca45aa53630c954d04f2e0654aad58f6d6d04d9679a8db0648e
 size 113271504
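The safetensors file above is stored as a Git LFS pointer to the adapter weights. A minimal loading sketch, assuming the adapter was trained with PEFT on a Mistral-7B base; the exact base checkpoint is not stated in this diff, so the name below is a placeholder.

```python
# Minimal sketch, not from the repository: load the DPO-tuned LoRA adapter
# with PEFT. The base checkpoint below is an assumption; the diff does not
# say which Mistral-7B variant the adapter was trained on.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"  # assumption: actual base model may differ
adapter_id = "Imadken/mistral-7b-platypus-lamini-vxxiii-chat-real_augmented_costumer-dpo"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, adapter_id)  # fetches adapter_model.safetensors
model.eval()
```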
training_args.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:eeb1cc69fce4b8ed14bde5482bc287425c4117ef4dc3a509ec483f3f2ec10ddc
 size 4792
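training_args.bin is the serialized `TrainingArguments` object that `Trainer` saves alongside a checkpoint, also tracked here as an LFS pointer. A short inspection sketch, assuming the real file has been pulled from LFS and that `transformers` is importable so the pickled object can be reconstructed:

```python
# Minimal sketch: inspect the serialized TrainingArguments saved by Trainer.
# Requires the LFS-resolved file, not the pointer. weights_only=False is needed
# because this is a pickled Python object, so only load files you trust.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.lr_scheduler_type, args.warmup_steps, args.max_steps)
```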