jackkuo committed (verified)
Commit af5699d · 1 Parent(s): d471d1b

fix TypeError: TrainingArguments.__init__() got an unexpected keyword argument 'evaluation_strategy'


Due to an update in `transformers`, `evaluation_strategy` has been renamed to `eval_strategy` in newer versions. The author `ctheodoris` made a corresponding modification to this file (see https://huggingface.co/ctheodoris/Geneformer/discussions/531#6822ed18c442885b1dcbf70a ), but that modification is still problematic. The reason is as follows: the default `def_training_args` dictionary is imported from `classifier_utils.py`, which already presets `evaluation_strategy` as a key, so the original modification in `classifier.py` here is incorrect:
```py
if eval_data is None:
    if transformers_version >= parse("4.46"):
        def_training_args["eval_strategy"] = "no"
    else:
        def_training_args["evaluation_strategy"] = "no"
    def_training_args["load_best_model_at_end"] = False
```
It will still retain the preset `evaluation_strategy` key, which then raises the `TypeError` when `TrainingArguments(**def_training_args)` is called. The new modification solves this by renaming the key in place with `def_training_args["eval_strategy"] = def_training_args.pop("evaluation_strategy")`:
```py
if eval_data is None:
    def_training_args["evaluation_strategy"] = "no"
    def_training_args["load_best_model_at_end"] = False
if transformers_version >= parse("4.46"):
    def_training_args["eval_strategy"] = def_training_args.pop("evaluation_strategy")
```
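The difference between the two fixes can be shown with a minimal sketch (a hypothetical defaults dict standing in for the real one from `classifier_utils.py`):

```python
# Simulate a defaults dict that presets the deprecated key,
# as classifier_utils.py does for def_training_args.
def_training_args = {"evaluation_strategy": "epoch", "num_train_epochs": 1}

# Original fix: only sets the new key, so the stale preset key survives
# and TrainingArguments(**broken) would still raise the TypeError.
broken = dict(def_training_args)
broken["eval_strategy"] = "no"
assert "evaluation_strategy" in broken  # stale key is still present

# New fix: pop() removes the old key and carries its value to the new one.
fixed = dict(def_training_args)
fixed["eval_strategy"] = fixed.pop("evaluation_strategy")
assert "evaluation_strategy" not in fixed
assert fixed["eval_strategy"] == "epoch"
```

Because `pop()` both deletes the old key and returns its value, the rename works regardless of what value `classifier_utils.py` preset for `evaluation_strategy`.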

Files changed (1)
  1. geneformer/classifier.py +6 -8
geneformer/classifier.py CHANGED
```diff
@@ -1063,11 +1063,10 @@ class Classifier:
         def_training_args["logging_steps"] = logging_steps
         def_training_args["output_dir"] = output_directory
         if eval_data is None:
-            if transformers_version >= parse("4.46"):
-                def_training_args["eval_strategy"] = "no"
-            else:
-                def_training_args["evaluation_strategy"] = "no"
+            def_training_args["evaluation_strategy"] = "no"
             def_training_args["load_best_model_at_end"] = False
+        if transformers_version >= parse("4.46"):
+            def_training_args["eval_strategy"] = def_training_args.pop("evaluation_strategy")
         def_training_args.update(
             {"save_strategy": "epoch", "save_total_limit": 1}
         )  # only save last model for each run
@@ -1237,11 +1236,10 @@ class Classifier:
         def_training_args["logging_steps"] = logging_steps
         def_training_args["output_dir"] = output_directory
         if eval_data is None:
-            if transformers_version >= parse("4.46"):
-                def_training_args["eval_strategy"] = "no"
-            else:
-                def_training_args["evaluation_strategy"] = "no"
+            def_training_args["evaluation_strategy"] = "no"
             def_training_args["load_best_model_at_end"] = False
+        if transformers_version >= parse("4.46"):
+            def_training_args["eval_strategy"] = def_training_args.pop("evaluation_strategy")
         training_args_init = TrainingArguments(**def_training_args)

         if self.freeze_layers is not None:
```