eskayML committed
Commit 04bf457 · verified · 1 Parent(s): dd3e345

eskayML/bert_interview_new

Files changed (5):
  1. README.md +17 -17
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. tokenizer_config.json +2 -1
  5. training_args.bin +2 -2
README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.9106
- - Accuracy: 0.7697
+ - Loss: 2.6712
+ - Accuracy: 0.3534
 
  ## Model description
 
@@ -42,7 +42,7 @@ The following hyperparameters were used during training:
  - train_batch_size: 2
  - eval_batch_size: 2
  - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - num_epochs: 10
 
@@ -50,21 +50,21 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 2.5257 | 1.0 | 759 | 2.1260 | 0.4079 |
- | 1.9969 | 2.0 | 1518 | 1.7308 | 0.5066 |
- | 1.7415 | 3.0 | 2277 | 1.4204 | 0.6316 |
- | 1.5223 | 4.0 | 3036 | 1.2385 | 0.6743 |
- | 1.3484 | 5.0 | 3795 | 1.1238 | 0.7039 |
- | 1.2576 | 6.0 | 4554 | 1.0478 | 0.7336 |
- | 1.1427 | 7.0 | 5313 | 0.9814 | 0.7467 |
- | 1.0499 | 8.0 | 6072 | 0.9471 | 0.75 |
- | 1.0145 | 9.0 | 6831 | 0.9234 | 0.7632 |
- | 0.911 | 10.0 | 7590 | 0.9106 | 0.7697 |
+ | No log | 1.0 | 463 | 2.2196 | 0.3319 |
+ | 2.1608 | 2.0 | 926 | 2.1235 | 0.3534 |
+ | 1.816 | 3.0 | 1389 | 2.1393 | 0.3879 |
+ | 1.533 | 4.0 | 1852 | 2.1836 | 0.3578 |
+ | 1.2761 | 5.0 | 2315 | 2.2730 | 0.3664 |
+ | 1.122 | 6.0 | 2778 | 2.3939 | 0.3578 |
+ | 0.9403 | 7.0 | 3241 | 2.4908 | 0.3578 |
+ | 0.8317 | 8.0 | 3704 | 2.5671 | 0.3448 |
+ | 0.7571 | 9.0 | 4167 | 2.6484 | 0.3491 |
+ | 0.693 | 10.0 | 4630 | 2.6712 | 0.3534 |
 
 
  ### Framework versions
 
- - Transformers 4.44.2
- - Pytorch 2.4.0+cu121
- - Datasets 3.0.0
- - Tokenizers 0.19.1
+ - Transformers 4.47.1
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
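
The updated card reports the new evaluation loss and accuracy for the `eskayML/bert_interview_new` checkpoint. As a hedged, illustrative sketch (not part of this commit), loading the checkpoint for inference could look like the following; the sequence-classification head, the label mapping, and the sample input are assumptions, since the visible hunks do not spell out the task or dataset.

```python
# Illustrative sketch, not repo code: load the fine-tuned checkpoint for inference.
# Assumptions: a sequence-classification head (consistent with the accuracy metric)
# and a hypothetical interview-style input; labels/dataset are not shown in this commit.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "eskayML/bert_interview_new"  # repo shown in this commit
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Tell me about a time you resolved a conflict on your team."  # hypothetical example
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(predicted_id, predicted_id))
```

The new optimizer line (`adamw_torch`) appears to reflect the newer model-card template rather than a change of optimizer; the Trainer's default AdamW with the same betas and epsilon is described in both versions.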
config.json CHANGED
@@ -64,6 +64,6 @@
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "torch_dtype": "float32",
- "transformers_version": "4.44.2",
+ "transformers_version": "4.47.1",
  "vocab_size": 30522
  }
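
The only `config.json` change is the recorded `transformers_version`. A quick, hedged way to confirm what a downloaded copy reports (illustrative only, not repo code):

```python
# Sketch: read the bumped transformers_version field from the shipped config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("eskayML/bert_interview_new")
print(config.transformers_version)  # expected to print "4.47.1" after this commit
print(config.vocab_size)            # 30522, unchanged
```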
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:149a9d8e3b01a3b88776b9549a68c36e9a8e0d5d9045d8623e488bfa81770a85
+ oid sha256:6d9dbccaa14803162baafd5476241487de48fe774526ae379c9906da80c002d2
  size 267887936
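
The `model.safetensors` entry is a Git LFS pointer: only the sha256 digest changes, while the byte size stays the same. A minimal sketch (not part of this commit) of checking a downloaded weight file against the new pointer; the local file path is an assumption.

```python
# Sketch: verify a downloaded LFS object against the sha256/size recorded in the pointer.
# The local file name is an assumption; the expected digest and size come from the diff above.
import hashlib
from pathlib import Path

expected_sha256 = "6d9dbccaa14803162baafd5476241487de48fe774526ae379c9906da80c002d2"
expected_size = 267887936

path = Path("model.safetensors")  # assumed local path to the resolved (non-pointer) file
digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert path.stat().st_size == expected_size, "size mismatch"
assert digest.hexdigest() == expected_sha256, "sha256 mismatch"
print("model.safetensors matches the LFS pointer in this commit")
```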
tokenizer_config.json CHANGED
@@ -41,9 +41,10 @@
  "special": true
  }
  },
- "clean_up_tokenization_spaces": true,
+ "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_lower_case": true,
+ "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d11b28af5d635c99b0ece872a85d26404ddaef6bb47baa37d9a9e428a8ccbae8
- size 5240
+ oid sha256:4be4389c28dca71b9cdc03c0481c5cb743949f9a9768a3f5fa6af0031c050bdb
+ size 5304
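
`training_args.bin` stores the serialized `TrainingArguments`; the small size change is consistent with fields added between Transformers 4.44 and 4.47. As a rough sketch only, arguments consistent with the hyperparameters listed in the README above might look like this; the learning rate, output directory, and evaluation strategy are assumptions, since they are not visible in the diff.

```python
# Sketch of TrainingArguments matching the hyperparameters in the updated README.
# learning_rate and output_dir are NOT visible in the diff above; they are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_interview_new",   # placeholder output directory
    per_device_train_batch_size=2,     # train_batch_size: 2
    per_device_eval_batch_size=2,      # eval_batch_size: 2
    seed=42,                           # seed: 42
    optim="adamw_torch",               # optimizer: adamw_torch
    adam_beta1=0.9,                    # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
    lr_scheduler_type="linear",        # lr_scheduler_type: linear
    num_train_epochs=10,               # num_epochs: 10
    learning_rate=2e-5,                # placeholder: not shown in this diff
    eval_strategy="epoch",             # assumption, matching the per-epoch results table
)
```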