softhell committed on
Commit 4f727f6 · verified · 1 Parent(s): 889f4c1

codet5_local_e8_7_mix

Files changed (5)
  1. README.md +3 -3
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. tokenizer.json +10 -1
  5. training_args.bin +1 -1
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: Salesforce/codet5-small
+base_model: softhell/results
 tags:
 - generated_from_trainer
 datasets:
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # results
 
-This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on the code_search_net dataset.
+This model is a fine-tuned version of [softhell/results](https://huggingface.co/softhell/results) on the code_search_net dataset.
 
 ## Model description
 
@@ -44,7 +44,7 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 8
+- num_epochs: 7
 - mixed_precision_training: Native AMP
 
 ### Training results
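The hyperparameter list above (AdamW via OptimizerNames.ADAMW_TORCH, linear schedule, warmup ratio 0.1, 7 epochs, native AMP) maps roughly onto a `transformers.TrainingArguments` configuration like the sketch below; the output directory, learning rate, and batch size are placeholders, since they are not visible in this hunk.

```python
# Hypothetical sketch of the Trainer setup implied by the README hunk above.
# Values not shown in the diff (learning rate, batch size, output_dir) are
# placeholders, not taken from the actual run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",        # assumed; the repo is named "results"
    num_train_epochs=7,          # changed from 8 in this commit
    lr_scheduler_type="linear",  # as listed in the README
    warmup_ratio=0.1,            # as listed in the README
    optim="adamw_torch",         # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                   # "Native AMP"; needs a GPU at runtime
    learning_rate=5e-5,          # placeholder: not shown in this hunk
    per_device_train_batch_size=8,  # placeholder: not shown in this hunk
)
```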
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "Salesforce/codet5-small",
+  "_name_or_path": "softhell/results",
   "architectures": [
     "T5ForConditionalGeneration"
   ],
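The updated `_name_or_path` points back at this repo, and the architecture is unchanged, so the checkpoint loads with the standard T5 classes. A minimal loading sketch; only the repo id `softhell/results` comes from the diff, and the example input is hypothetical:

```python
# Minimal sketch of loading this checkpoint with transformers.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("softhell/results")
model = T5ForConditionalGeneration.from_pretrained("softhell/results")

# Hypothetical usage: a CodeT5-style generation call on a toy snippet.
inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```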
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f009e39ef7a9de24c0b7d7e2fcdf7cc93e78ddd7f6509ca7054546d39831ffe6
+oid sha256:8b5118a11a85873a3b392153ed7511f8a5295b83379c179ee45d2d8a3ce15658
 size 242017320
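The weight file is stored as a Git LFS pointer; the `oid sha256` line is the hash of the actual `model.safetensors` blob. A quick integrity check against the new pointer might look like this (the local file path is an assumption):

```python
# Sketch: verify a downloaded model.safetensors against the sha256 oid in
# the LFS pointer above. The local path is an assumption.
import hashlib

expected = "8b5118a11a85873a3b392153ed7511f8a5295b83379c179ee45d2d8a3ce15658"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("hash matches pointer:", h.hexdigest() == expected)
```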
tokenizer.json CHANGED
@@ -6,7 +6,16 @@
     "strategy": "LongestFirst",
     "stride": 0
   },
-  "padding": null,
+  "padding": {
+    "strategy": {
+      "Fixed": 512
+    },
+    "direction": "Right",
+    "pad_to_multiple_of": null,
+    "pad_id": 0,
+    "pad_type_id": 0,
+    "pad_token": "<pad>"
+  },
   "added_tokens": [
     {
       "id": 0,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2f9e738442e8c22206e2097775172f821f36f0ea13e67cfd510c4339cda7ae29
+oid sha256:739d001731451542e3c523319b2640fd01e5e20e8d47b10825e8c538ada59108
 size 5368