hrezaei committed d3fe64e (parent: da0a26f)

End of training
README.md CHANGED
@@ -3,25 +3,25 @@ library_name: transformers
 tags:
 - generated_from_trainer
 datasets:
-- generator
+- HuggingFaceFW/fineweb
 metrics:
 - accuracy
 model-index:
 - name: T5Laa2-Large-WeightedLoss
   results:
   - task:
-      name: Sequence-to-sequence Language Modeling
-      type: text2text-generation
+      name: Causal Language Modeling
+      type: text-generation
     dataset:
-      name: generator
-      type: generator
+      name: HuggingFaceFW/fineweb sample-350BT
+      type: HuggingFaceFW/fineweb
       config: default
       split: train
-      args: default
+      args: sample-350BT
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.037367906066536206
+      value: 0.03730665362035225
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,13 +29,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # T5Laa2-Large-WeightedLoss
 
-This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
+This model is a fine-tuned version of [](https://huggingface.co/) on the HuggingFaceFW/fineweb sample-350BT dataset.
 It achieves the following results on the evaluation set:
-- Perplexity: 184.6505
-- Loss: 5.2185
-- Accuracy: 0.0374
-- Lookahead Perplexity: 2090.0473
-- Lookahead Loss: 7.6449
+- Perplexity: 184.5759
+- Loss: 5.2181
+- Accuracy: 0.0373
+- Lookahead Perplexity: 2089.7438
+- Lookahead Loss: 7.6448
 
 ## Model description
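The updated card reports each loss alongside its perplexity; perplexity is just the exponential of the mean cross-entropy loss, and the figures above are consistent with that relation. A minimal sanity check, using the values copied from the card:

```python
import math

# Metrics as reported in the updated model card.
eval_loss = 5.2181
eval_perplexity = 184.5759
lookahead_loss = 7.6448
lookahead_perplexity = 2089.7438

# Perplexity = exp(mean cross-entropy loss); small slack covers
# the rounding of the card's four-decimal figures.
assert abs(math.exp(eval_loss) - eval_perplexity) < 0.1
assert abs(math.exp(lookahead_loss) - lookahead_perplexity) < 0.5
```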
all_results.json ADDED
@@ -0,0 +1,16 @@
+{
+    "eval_accuracy": 0.03730665362035225,
+    "eval_lookahead_loss": 7.644796759986877,
+    "eval_loss": 5.2180609703063965,
+    "eval_perplexity": 184.57593863292408,
+    "eval_runtime": 492.01,
+    "eval_samples": 10000,
+    "eval_samples_per_second": 20.325,
+    "eval_steps_per_second": 5.081,
+    "total_flos": 4.833448717656785e+18,
+    "train_loss": 2.1971652193460613,
+    "train_runtime": 166030.7788,
+    "train_samples": 2000000,
+    "train_samples_per_second": 12.631,
+    "train_steps_per_second": 3.158
+}
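The fields in all_results.json can be cross-checked against one another: eval throughput is samples divided by wall-clock runtime, and eval perplexity is the exponential of eval loss. A sketch over the JSON added in this commit (it assumes the Trainer's usual three-decimal rounding of the throughput figure):

```python
import json
import math

# all_results.json as added in this commit.
all_results = json.loads("""
{
    "eval_accuracy": 0.03730665362035225,
    "eval_lookahead_loss": 7.644796759986877,
    "eval_loss": 5.2180609703063965,
    "eval_perplexity": 184.57593863292408,
    "eval_runtime": 492.01,
    "eval_samples": 10000,
    "eval_samples_per_second": 20.325,
    "eval_steps_per_second": 5.081,
    "total_flos": 4.833448717656785e+18,
    "train_loss": 2.1971652193460613,
    "train_runtime": 166030.7788,
    "train_samples": 2000000,
    "train_samples_per_second": 12.631,
    "train_steps_per_second": 3.158
}
""")

# Throughput = samples / runtime (runtime itself is logged rounded).
throughput = all_results["eval_samples"] / all_results["eval_runtime"]
assert round(throughput, 3) == all_results["eval_samples_per_second"]

# Perplexity = exp(loss), here checked at full logged precision.
assert math.isclose(math.exp(all_results["eval_loss"]),
                    all_results["eval_perplexity"], rel_tol=1e-4)
```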
eval_results.json ADDED
@@ -0,0 +1,10 @@
+{
+    "eval_accuracy": 0.03730665362035225,
+    "eval_lookahead_loss": 7.644796759986877,
+    "eval_loss": 5.2180609703063965,
+    "eval_perplexity": 184.57593863292408,
+    "eval_runtime": 492.01,
+    "eval_samples": 10000,
+    "eval_samples_per_second": 20.325,
+    "eval_steps_per_second": 5.081
+}
runs/Oct09_06-03-54_gpu22.viking2.yor.alces.network/events.out.tfevents.1760155049.gpu22.viking2.yor.alces.network.3252474.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51a554ea2ead726f27c653d74b8060027aa9a86e1a82c5221945627fe08a408d
+size 596
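The tfevents file is tracked with Git LFS, so the repository holds only the three-line pointer shown above: key/value pairs per the linked spec (version URL, a content hash, and the payload size in bytes). A minimal parser sketch for that pointer format:

```python
# Git LFS pointer from this commit; the real file lives in LFS storage
# and is addressed by its SHA-256 content hash.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:51a554ea2ead726f27c653d74b8060027aa9a86e1a82c5221945627fe08a408d
size 596
"""

# Each line is "<key> <value>"; oid is "<algorithm>:<hex digest>".
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = fields["oid"].split(":", 1)

assert fields["version"] == "https://git-lfs.github.com/spec/v1"
assert algo == "sha256" and len(digest) == 64  # SHA-256 hex digest
assert int(fields["size"]) == 596              # payload size in bytes
```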
train_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "total_flos": 4.833448717656785e+18,
+    "train_loss": 2.1971652193460613,
+    "train_runtime": 166030.7788,
+    "train_samples": 2000000,
+    "train_samples_per_second": 12.631,
+    "train_steps_per_second": 3.158
+}
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff