Commit 7d424c2 · 1 Parent(s): f49df17
pere committed

Saving weights and logs of step 10000

README.md CHANGED
@@ -1 +1,7 @@
  Just for performing some experiments. Do not use.
+
+ This needed to be restarted at 100k. I am getting memory errors at the end of the epoch. Not really sure why.
+
+ Step 2 therefore trains on train_2_4. Static learning rate for a while. The first 100k steps ended at 0.59. This is decent so early. No point in running more epochs here, though. Changing the corpus and continuing training.
+
+
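The note above mentions running with a static learning rate for a while. As a rough sketch of what that means in optax (the optimizer library the Flax examples build on), assuming the `--static_learning_rate` flag simply swaps the usual linear warmup/decay schedule for a constant one — an illustration mirroring the hyperparameters in run_step2.sh, not the script's actual implementation:

```python
import optax

# A minimal sketch, assuming "static" means a constant schedule:
# the same learning rate at every step, no warmup, no decay.
learning_rate = 4e-4  # --learning_rate="4e-4"
constant_schedule = optax.constant_schedule(learning_rate)

# AdamW with the betas/epsilon/weight decay from the run script.
optimizer = optax.adamw(
    learning_rate=constant_schedule,
    b1=0.9,             # --adam_beta1
    b2=0.98,            # --adam_beta2
    eps=1e-6,           # --adam_epsilon
    weight_decay=0.01,  # --weight_decay
)
```

With `--warmup_steps="0"`, a constant schedule is the natural choice for a restarted run: there is nothing left to warm up, and the author wants the rate held flat for a while.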
events.out.tfevents.1641426799.t1v-n-e1a08808-w-0.871181.0.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1fe9d08288d5eebd9d406223ced869b7f987018053a53aaf0ad8e5da3cf4c11
+ size 1470136
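The tfevents file is the TensorBoard log for this run, committed as a git-lfs pointer. Once the real payload is fetched (e.g. via git-lfs), one way to peek at it from Python is TensorBoard's event accumulator — a minimal sketch; the tag names logged by the training script are not visible in this diff:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point the accumulator at the committed event file (fetch the actual
# blob with git-lfs first; the three-line pointer above is not the payload).
ea = EventAccumulator(
    "events.out.tfevents.1641426799.t1v-n-e1a08808-w-0.871181.0.v2"
)
ea.Reload()  # parse all events from disk

# List which tags were logged; the exact names depend on run_mlm_flax.py.
print(ea.Tags())
```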
flax_model.msgpack CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5b892d1acff31af7d49006a03d609301575fed7b9753d8361d9404bea6459e84
+ oid sha256:6a85b53846a6926416b78941df5f65df77a3dfe44ecedecc4352158a10c7daec
  size 498796983
run_step2.sh ADDED
@@ -0,0 +1,29 @@
+ ./run_mlm_flax.py \
+ --output_dir="./" \
+ --model_type="roberta" \
+ --model_name_or_path="./" \
+ --config_name="./" \
+ --tokenizer_name="./" \
+ --train_file /mnt/disks/flaxdisk/corpus/train_2_4.json \
+ --validation_file /mnt/disks/flaxdisk/corpus/validation.json \
+ --cache_dir="/mnt/disks/flaxdisk/cache/" \
+ --max_seq_length="128" \
+ --weight_decay="0.01" \
+ --per_device_train_batch_size="192" \
+ --per_device_eval_batch_size="192" \
+ --learning_rate="4e-4" \
+ --warmup_steps="0" \
+ --overwrite_output_dir \
+ --num_train_epochs="1000" \
+ --adam_beta1="0.9" \
+ --adam_beta2="0.98" \
+ --adam_epsilon="1e-6" \
+ --logging_steps="10000" \
+ --save_steps="10000" \
+ --eval_steps="10000" \
+ --preprocessing_num_workers="64" \
+ --auth_token="True" \
+ --static_learning_rate="True" \
+ --dtype="bfloat16" \
+ --push_to_hub
+
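Since the script saves and pushes checkpoints every 10000 steps (hence the commit message "Saving weights and logs of step 10000"), the committed flax_model.msgpack can be loaded back with transformers. A minimal sketch, assuming a checkout of this repo with the LFS weights pulled; `FlaxRobertaForMaskedLM` matches the `--model_type="roberta"` flag above:

```python
from transformers import AutoTokenizer, FlaxRobertaForMaskedLM

# Load weights and tokenizer from the repo root, mirroring
# --model_name_or_path="./" and --tokenizer_name="./" in run_step2.sh.
model = FlaxRobertaForMaskedLM.from_pretrained("./")
tokenizer = AutoTokenizer.from_pretrained("./")

# Quick sanity check: run a masked-token forward pass.
text = f"Just an {tokenizer.mask_token} sentence."
inputs = tokenizer(text, return_tensors="np")
logits = model(**inputs).logits
print(logits.shape)  # (batch, sequence_length, vocab_size)
```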