andrewAmani committed on
Commit bd68b10 · verified · 1 Parent(s): 77966b9

Model save

Files changed (1)
README.md +3 -4
README.md CHANGED

@@ -1,9 +1,8 @@
 ---
-base_model: meta-llama/Meta-Llama-3-8B
+base_model: hivaze/ParaLex-Llama-3-8B-SFT
 datasets:
 - generator
 library_name: peft
-license: llama3
 tags:
 - generated_from_trainer
 model-index:
@@ -16,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # results_packing
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
+This model is a fine-tuned version of [hivaze/ParaLex-Llama-3-8B-SFT](https://huggingface.co/hivaze/ParaLex-Llama-3-8B-SFT) on the generator dataset.
 
 ## Model description
 
@@ -43,7 +42,7 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 17
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- num_epochs: 8
+- num_epochs: 32
 
 ### Training results
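The substantive edits in this commit touch the README's YAML front matter (`base_model`) and one hyperparameter (`num_epochs`). As a minimal sketch of how such a front-matter field can be read programmatically, assuming a simple `key: value` layout like the one in this diff (the `get_field` helper and inline sample are illustrative, not part of the repo):

```python
# Minimal sketch: read a simple "key: value" field from a model card's
# YAML front matter without a full YAML parser. Illustrative only.
SAMPLE_README = """---
base_model: hivaze/ParaLex-Llama-3-8B-SFT
datasets:
- generator
library_name: peft
tags:
- generated_from_trainer
---

# results_packing
"""

def get_field(readme: str, key: str) -> str:
    # The front matter sits between the first two '---' delimiters.
    _, block, _ = readme.split("---", 2)
    for line in block.splitlines():
        if line.startswith(key + ":"):
            return line.split(":", 1)[1].strip()
    raise KeyError(key)

print(get_field(SAMPLE_README, "base_model"))  # → hivaze/ParaLex-Llama-3-8B-SFT
```

A real model card can contain nested YAML (e.g. `model-index`), in which case a proper YAML parser is the safer choice; this simple line scan only covers flat `key: value` entries.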