floflodebilbao committed · Commit f7718b8 · verified · 1 parent: 8650000

End of training

README.md CHANGED
@@ -22,21 +22,21 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 3.7571
- - Rouge1: 0.3651
- - Rouge2: 0.142
- - Rougel: 0.2963
- - Rougelsum: 0.2972
- - Gen Len: 27.12
- - Bleu: 0.065
- - Precisions: 0.1385
- - Brevity Penalty: 0.7854
- - Length Ratio: 0.8055
- - Translation Length: 944.0
+ - Loss: 3.7652
+ - Rouge1: 0.3745
+ - Rouge2: 0.1523
+ - Rougel: 0.3056
+ - Rougelsum: 0.3055
+ - Gen Len: 25.92
+ - Bleu: 0.0641
+ - Precisions: 0.1432
+ - Brevity Penalty: 0.7677
+ - Length Ratio: 0.791
+ - Translation Length: 927.0
  - Reference Length: 1172.0
- - Precision: 0.8932
+ - Precision: 0.8956
  - Recall: 0.8832
- - F1: 0.8881
+ - F1: 0.8892
  - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)

  ## Model description
@@ -56,7 +56,7 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.002
+ - learning_rate: 0.001
  - train_batch_size: 1
  - eval_batch_size: 1
  - seed: 42
@@ -70,16 +70,16 @@ The following hyperparameters were used during training:

  | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
  |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
- | 8.123 | 1.0 | 7 | 7.3532 | 0.2921 | 0.0954 | 0.2408 | 0.2414 | 32.0 | 0.0493 | 0.0883 | 1.0 | 1.1058 | 1296.0 | 1172.0 | 0.8636 | 0.8648 | 0.8641 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 5.947 | 2.0 | 14 | 5.1102 | 0.335 | 0.1361 | 0.2865 | 0.2889 | 22.94 | 0.0612 | 0.1526 | 0.6419 | 0.6928 | 812.0 | 1172.0 | 0.8992 | 0.8779 | 0.8883 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 4.4536 | 3.0 | 21 | 4.2132 | 0.3429 | 0.1435 | 0.2928 | 0.2937 | 22.68 | 0.0531 | 0.1512 | 0.5815 | 0.6485 | 760.0 | 1172.0 | 0.902 | 0.8794 | 0.8904 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.8549 | 4.0 | 28 | 3.9066 | 0.3534 | 0.1446 | 0.2939 | 0.2952 | 24.72 | 0.0661 | 0.146 | 0.7252 | 0.7568 | 887.0 | 1172.0 | 0.8976 | 0.8819 | 0.8896 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.5956 | 5.0 | 35 | 3.8415 | 0.3878 | 0.1685 | 0.3168 | 0.3168 | 25.48 | 0.0649 | 0.1552 | 0.7176 | 0.7509 | 880.0 | 1172.0 | 0.9023 | 0.8863 | 0.8941 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.4355 | 6.0 | 42 | 3.7729 | 0.3825 | 0.1597 | 0.3131 | 0.3135 | 26.08 | 0.0581 | 0.1496 | 0.7317 | 0.7619 | 893.0 | 1172.0 | 0.8975 | 0.8856 | 0.8914 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.3337 | 7.0 | 49 | 3.7560 | 0.3597 | 0.1448 | 0.2961 | 0.2973 | 27.3 | 0.0629 | 0.1368 | 0.8069 | 0.8234 | 965.0 | 1172.0 | 0.8941 | 0.8827 | 0.8882 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.2635 | 8.0 | 56 | 3.7481 | 0.3491 | 0.1411 | 0.2928 | 0.2938 | 25.9 | 0.046 | 0.1253 | 0.7519 | 0.7782 | 912.0 | 1172.0 | 0.8932 | 0.8811 | 0.887 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.2095 | 9.0 | 63 | 3.7572 | 0.3672 | 0.1422 | 0.3015 | 0.3019 | 26.68 | 0.0582 | 0.1374 | 0.7771 | 0.7986 | 936.0 | 1172.0 | 0.8958 | 0.885 | 0.8903 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.2068 | 10.0 | 70 | 3.7571 | 0.3651 | 0.142 | 0.2963 | 0.2972 | 27.12 | 0.065 | 0.1385 | 0.7854 | 0.8055 | 944.0 | 1172.0 | 0.8932 | 0.8832 | 0.8881 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 8.3925 | 1.0 | 7 | 8.0434 | 0.2138 | 0.043 | 0.174 | 0.1733 | 32.0 | 0.0223 | 0.0541 | 1.0 | 1.1246 | 1318.0 | 1172.0 | 0.8553 | 0.8578 | 0.8564 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 6.8857 | 2.0 | 14 | 6.1890 | 0.3112 | 0.0956 | 0.2638 | 0.2638 | 30.08 | 0.0477 | 0.0915 | 1.0 | 1.0333 | 1211.0 | 1172.0 | 0.8775 | 0.8694 | 0.8733 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 5.1856 | 3.0 | 21 | 4.7196 | 0.3535 | 0.1502 | 0.2855 | 0.2871 | 23.4 | 0.0688 | 0.1552 | 0.7034 | 0.7398 | 867.0 | 1172.0 | 0.9014 | 0.8803 | 0.8906 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 4.2664 | 4.0 | 28 | 4.1945 | 0.354 | 0.1541 | 0.295 | 0.2952 | 23.14 | 0.0781 | 0.1725 | 0.643 | 0.6937 | 813.0 | 1172.0 | 0.904 | 0.8821 | 0.8928 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.898 | 5.0 | 35 | 3.9578 | 0.3777 | 0.1653 | 0.3107 | 0.3108 | 25.16 | 0.0912 | 0.1705 | 0.7434 | 0.7713 | 904.0 | 1172.0 | 0.9005 | 0.884 | 0.892 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.6798 | 6.0 | 42 | 3.8411 | 0.368 | 0.1556 | 0.2914 | 0.2907 | 23.92 | 0.0719 | 0.1621 | 0.6836 | 0.7244 | 849.0 | 1172.0 | 0.9039 | 0.8834 | 0.8934 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.5743 | 7.0 | 49 | 3.8041 | 0.3678 | 0.1445 | 0.2954 | 0.2956 | 27.28 | 0.0648 | 0.1358 | 0.809 | 0.8251 | 967.0 | 1172.0 | 0.8937 | 0.883 | 0.8883 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.5091 | 8.0 | 56 | 3.7772 | 0.371 | 0.1559 | 0.3051 | 0.3061 | 26.3 | 0.0755 | 0.1511 | 0.7709 | 0.7935 | 930.0 | 1172.0 | 0.896 | 0.8833 | 0.8895 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.4502 | 9.0 | 63 | 3.7611 | 0.3633 | 0.1495 | 0.3013 | 0.3013 | 25.8 | 0.0653 | 0.1423 | 0.7498 | 0.7765 | 910.0 | 1172.0 | 0.8952 | 0.881 | 0.8879 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.4484 | 10.0 | 70 | 3.7652 | 0.3745 | 0.1523 | 0.3056 | 0.3055 | 25.92 | 0.0641 | 0.1432 | 0.7677 | 0.791 | 927.0 | 1172.0 | 0.8956 | 0.8832 | 0.8892 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |


  ### Framework versions
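
For reference, the three metric families in the updated card (ROUGE, BLEU, and BERTScore; the Precision/Recall/F1 rows carry a BERTScore hashcode) can be computed with the Hugging Face `evaluate` library. A minimal sketch, with placeholder predictions and references rather than the card's actual evaluation set:

```python
# Minimal sketch: reproducing the card's metric families with `evaluate`.
# The prediction/reference pairs are placeholders, not the real eval set.
import evaluate

predictions = ["a placeholder generated summary"]
references = ["a placeholder reference summary"]

rouge = evaluate.load("rouge")          # rouge1, rouge2, rougeL, rougeLsum
bleu = evaluate.load("bleu")            # bleu, precisions, brevity_penalty,
                                        # length_ratio, translation/reference length
bertscore = evaluate.load("bertscore")  # precision, recall, f1, hashcode

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=references))
# lang="en" defaults to roberta-large, consistent with the card's hashcode
# roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
```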
adapter_config.json CHANGED
@@ -24,10 +24,10 @@
    "rank_pattern": {},
    "revision": null,
    "target_modules": [
+     "q_proj",
      "out_proj",
      "v_proj",
-     "k_proj",
-     "q_proj"
+     "k_proj"
    ],
    "task_type": "SEQ_2_SEQ_LM",
    "trainable_token_indices": null,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c1f8c4506324eec4c71a13ad43d29486641f13b1988308a52b0897aa8488b4b3
+ oid sha256:1a98bb65a77ef615ea2a97651a3468edd689bc44dca6163f79e71cc6d70e1d40
  size 2372496
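
The pointer above stores only metadata; the oid is the sha256 of the full file contents. A minimal sketch, using only the standard library, for checking a downloaded adapter_model.safetensors against its pointer:

```python
# Minimal sketch: verify a downloaded LFS object against the pointer's oid.
import hashlib

def lfs_oid(path: str) -> str:
    """sha256 hex digest of the file contents, as recorded in the LFS pointer."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "1a98bb65a77ef615ea2a97651a3468edd689bc44dca6163f79e71cc6d70e1d40"
assert lfs_oid("adapter_model.safetensors") == expected
```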
runs/Jul30_10-41-41_tardis/events.out.tfevents.1753864902.tardis.38958.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7c439e87baa27bf5492bcdf7b3ddf520a6f66913600d4138c8b1ce3df07d7a8
+ size 16549
runs/Jul30_10-53-49_tardis/events.out.tfevents.1753865630.tardis.39634.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f33287363b31c02abd94eb56fa3120b9511254fb2c73ebccc88f32679a8f2eb
+ size 5629
runs/Jul30_10-54-56_tardis/events.out.tfevents.1753865697.tardis.39818.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67e417f1b582a7f17ea1cf21180dcd97bf3ed52195d00cc4f7fc00b5f7afbfbe
+ size 20502
runs/Jul30_12-32-02_tardis/events.out.tfevents.1753871525.tardis.48268.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f1fc67b808b9f78a1ec4fe243d739b67a392258ca0faa62ae48eee97ba4c90a
+ size 19368
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:985a1055ce59fe10f0fec7e9527bd7332efcc0d24b5b7edab8dce1072d2fb8c3
+ oid sha256:4161cb18012faacaa83921394cd9c2e24b2f969f8c78e3c7e4a6a19a7bb938fd
  size 5905
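
To use the adapter weights committed here, the base LED checkpoint is loaded first and the PEFT adapter applied on top. A minimal sketch, assuming `transformers` and `peft`; the repo id is a placeholder, since this page does not show it:

```python
# Minimal sketch: load the base model, then apply this repo's LoRA adapter.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")

# "floflodebilbao/<repo-name>" is a hypothetical id; substitute the actual repo.
model = PeftModel.from_pretrained(base, "floflodebilbao/<repo-name>")
```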