sedrickkeh committed
Commit d6aceaf · verified · 1 Parent(s): c90c9cf

Model save

Files changed (2)
  1. README.md +11 -14
  2. generation_config.json +1 -1
README.md CHANGED

```diff
@@ -4,7 +4,6 @@ license: llama3.1
 base_model: meta-llama/Meta-Llama-3.1-8B
 tags:
 - llama-factory
-- full
 - generated_from_trainer
 model-index:
 - name: oh-dcft-v3.1-gpt-4o-mini
@@ -16,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # oh-dcft-v3.1-gpt-4o-mini
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini dataset.
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6413
+- Loss: 0.6408
 
 ## Model description
 
@@ -42,28 +41,26 @@ The following hyperparameters were used during training:
 - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
-- num_devices: 16
-- gradient_accumulation_steps: 4
+- num_devices: 8
+- gradient_accumulation_steps: 8
 - total_train_batch_size: 512
-- total_eval_batch_size: 128
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- total_eval_batch_size: 64
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: constant
-- lr_scheduler_warmup_ratio: 0.1
-- lr_scheduler_warmup_steps: 1738
 - num_epochs: 3.0
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.6489 | 0.9982 | 422 | 0.6508 |
-| 0.5988 | 1.9988 | 845 | 0.6403 |
-| 0.5728 | 2.9947 | 1266 | 0.6413 |
+| 0.648 | 0.9985 | 422 | 0.6504 |
+| 0.5984 | 1.9997 | 845 | 0.6400 |
+| 0.5714 | 2.9962 | 1266 | 0.6408 |
 
 
 ### Framework versions
 
-- Transformers 4.45.2
+- Transformers 4.46.1
 - Pytorch 2.3.0
-- Datasets 2.21.0
+- Datasets 3.1.0
 - Tokenizers 0.20.3
```
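Two things stand out in the hyperparameter changes above: the warmup settings (`lr_scheduler_warmup_ratio`, `lr_scheduler_warmup_steps`) are dropped while the scheduler stays `constant`, and the GPU count is halved from 16 to 8 while `gradient_accumulation_steps` doubles from 4 to 8, so the effective global batch size is unchanged. A minimal sketch of that arithmetic; the per-device train batch size of 8 is not shown in this diff and is derived as `total_train_batch_size / (num_devices * gradient_accumulation_steps)`:

```python
# Sketch of the effective-batch-size arithmetic behind this commit.
# Assumption: per-device train batch size is 8, derived as
# 512 / (num_devices * gradient_accumulation_steps) in both configs.

def effective_batch_size(num_devices: int, grad_accum: int, per_device: int) -> int:
    """Global batch size for multi-GPU training with gradient accumulation."""
    return num_devices * grad_accum * per_device

old_cfg = effective_batch_size(num_devices=16, grad_accum=4, per_device=8)
new_cfg = effective_batch_size(num_devices=8, grad_accum=8, per_device=8)
assert old_cfg == new_cfg == 512  # halve the GPUs, double the accumulation
```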
generation_config.json CHANGED

```diff
@@ -5,5 +5,5 @@
   "eos_token_id": 128001,
   "temperature": 0.6,
   "top_p": 0.9,
-  "transformers_version": "4.45.2"
+  "transformers_version": "4.46.1"
 }
```
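The bumped `transformers_version` stamp is the only change here; the decoding settings themselves are untouched. For reference, the visible fields map directly onto `transformers`' `GenerationConfig`. A minimal sketch using only the values shown in this hunk (fields outside it are omitted):

```python
# Minimal sketch: the visible generation_config.json fields expressed as a
# transformers GenerationConfig. Fields outside this hunk are omitted.
from transformers import GenerationConfig

gen_cfg = GenerationConfig(
    eos_token_id=128001,  # Llama 3.1 end-of-text token; generation stops here
    temperature=0.6,      # softens the next-token distribution before sampling
    top_p=0.9,            # nucleus sampling: keep the smallest token set with cumulative p >= 0.9
)

# Models loaded from the Hub pick this file up automatically; passing
# generation_config=gen_cfg to model.generate() applies the same settings.
```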