mtasic85 committed on
Commit 54daf0f · 1 Parent(s): 9a4f571

pretrain core 4

Files changed (3)
  1. README.md +65 -0
  2. config-4.json +29 -0
  3. scripts/pretrain_core_model_4.yaml +150 -0
README.md CHANGED
@@ -340,4 +340,69 @@ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable
  ```

  ```
+ Seed set to 23
+ Time to instantiate model: 0.30 seconds.
+ Total parameters: 234,914,304
+ Validating ...
+ Measured TFLOPs: 6940.12
+ Epoch 1 | iter 512 step 1 | loss train: 2.698, val: 2.522 | iter time: 675.96 ms (step) remaining time: 9:49:31
+ Epoch 1 | iter 1024 step 2 | loss train: 2.627, val: 2.522 | iter time: 603.66 ms (step) remaining time: 9:19:41
+ Epoch 1 | iter 1536 step 3 | loss train: 2.653, val: 2.522 | iter time: 604.66 ms (step) remaining time: 9:06:15
+ Epoch 1 | iter 2048 step 4 | loss train: 2.608, val: 2.522 | iter time: 606.23 ms (step) remaining time: 8:57:08
+ Epoch 1 | iter 2560 step 5 | loss train: 2.604, val: 2.522 | iter time: 605.04 ms (step) remaining time: 8:49:43
+ Epoch 1 | iter 3072 step 6 | loss train: 2.578, val: 2.522 | iter time: 606.32 ms (step) remaining time: 8:43:08
+ Epoch 1 | iter 3584 step 7 | loss train: 2.692, val: 2.522 | iter time: 605.08 ms (step) remaining time: 8:37:01
+ Epoch 1 | iter 4096 step 8 | loss train: 2.570, val: 2.522 | iter time: 607.54 ms (step) remaining time: 8:31:20
+ Epoch 1 | iter 4608 step 9 | loss train: 2.646, val: 2.522 | iter time: 607.19 ms (step) remaining time: 8:25:47
+ Epoch 1 | iter 5120 step 10 | loss train: 2.565, val: 2.522 | iter time: 604.76 ms (step) remaining time: 8:20:23
+ # ...
+ Epoch 1 | iter 51712 step 101 | loss train: 2.562, val: 2.453 | iter time: 607.12 ms (step) remaining time: 0:48:29
+ Epoch 1 | iter 52224 step 102 | loss train: 2.637, val: 2.453 | iter time: 605.46 ms (step) remaining time: 0:43:31
+ Epoch 1 | iter 52736 step 103 | loss train: 2.629, val: 2.453 | iter time: 604.15 ms (step) remaining time: 0:38:34
+ Epoch 1 | iter 53248 step 104 | loss train: 2.629, val: 2.453 | iter time: 605.92 ms (step) remaining time: 0:33:36
+ Epoch 1 | iter 53760 step 105 | loss train: 2.606, val: 2.453 | iter time: 604.48 ms (step) remaining time: 0:28:38
+ Epoch 1 | iter 54272 step 106 | loss train: 2.581, val: 2.453 | iter time: 603.78 ms (step) remaining time: 0:23:41
+ Epoch 1 | iter 54784 step 107 | loss train: 2.580, val: 2.453 | iter time: 605.41 ms (step) remaining time: 0:18:43
+ Epoch 1 | iter 55296 step 108 | loss train: 2.602, val: 2.453 | iter time: 607.38 ms (step) remaining time: 0:13:46
+ Epoch 1 | iter 55808 step 109 | loss train: 2.633, val: 2.453 | iter time: 606.06 ms (step) remaining time: 0:08:49
+ Epoch 1 | iter 56320 step 110 | loss train: 2.631, val: 2.453 | iter time: 608.68 ms (step) remaining time: 0:03:51
+ Validating ...
+ iter 56320: val loss 2.4515, val time: 19303.40 ms
+ Saving checkpoint to '../out/pretrain-core-3/step-00000110/lit_model.pth'
+ Validating ...
+ Final evaluation | val loss: 2.451 | val ppl: 11.605
+ Saving checkpoint to '../out/pretrain-core-3/final/lit_model.pth'
+ ----------------------------------------
+ | Performance
+ | - Total tokens : 464,642,048
+ | - Training Time : 33018.19 s
+ | - Tok/sec : 362.46 tok/s
+ | ----------------------------------------
+ | Memory Usage
+ | - Memory Used : 22.33 GB
+ ----------------------------------------
+ ```
+
+ ```bash
+ cp ../config-3.json ../out/pretrain-core-3/final/config.json
+ ```
+
+ ```bash
+ mv wandb wandb-pretrain-core-3
+ ```
+
+ ```bash
+ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-3/final
+ ```
+
+ ```bash
+ litgpt convert_pretrained_checkpoint ../out/pretrain-core-3/final ../out/pretrain-core-3/checkpoint
+ ```
+
+ ```bash
+ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_4.yaml
+ ```
+
+ ```
+ # ...
  ```
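A quick sanity check on the log above: the reported validation perplexity is simply the exponential of the reported validation loss. A minimal sketch in Python:

```python
import math

# Final evaluation figures from the pretrain-core-3 log above.
val_loss = 2.4515
print(math.exp(val_loss))  # ~11.61, consistent with the reported "val ppl: 11.605"
```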
config-4.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "eos_token_id": 1,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 512,
+   "initializer_range": 0.02,
+   "intermediate_size": 2048,
+   "max_position_embeddings": 131072,
+   "mlp_bias": false,
+   "model_type": "llama",
+   "num_attention_heads": 8,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 310000.0,
+   "tie_word_embeddings": true,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.45.0.dev0",
+   "use_cache": true,
+   "vocab_size": 131072
+ }
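The parameter count implied by config-4.json can be reproduced from the sizes above; with tied embeddings it matches the `Total parameters: 234,914,304` line in the training log. A minimal sketch, assuming the standard Llama weight layout (separate gate/up/down MLP projections, RMSNorm weights only, no biases):

```python
# Rough parameter count for the Llama-style model described in config-4.json.
vocab_size = 131072
hidden_size = 512
num_layers = 32
num_heads = 8
num_kv_heads = 8
head_dim = 128
intermediate_size = 2048

embedding = vocab_size * hidden_size                # shared with lm_head (tie_word_embeddings)
attn = hidden_size * num_heads * head_dim           # q_proj
attn += 2 * hidden_size * num_kv_heads * head_dim   # k_proj, v_proj
attn += num_heads * head_dim * hidden_size          # o_proj
mlp = 3 * hidden_size * intermediate_size           # gate_proj, up_proj, down_proj
norms = 2 * hidden_size                             # input + post-attention RMSNorm
per_layer = attn + mlp + norms

total = embedding + num_layers * per_layer + hidden_size  # + final RMSNorm
print(f"{total:,}")  # 234,914,304
```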
scripts/pretrain_core_model_4.yaml ADDED
@@ -0,0 +1,150 @@
+ # The name of the model to pretrain. Choose from names in ``litgpt.config``. Mutually exclusive with
+ # ``model_config``. (type: Optional[str], default: null)
+ model_name: 'tangled-alpha-0.9-core'
+
+ # A ``litgpt.Config`` object to define the model architecture. Mutually exclusive with
+ # ``model_name``. (type: Optional[Config], default: null)
+ model_config:
+   name: 'tangled-alpha-0.9-core'
+   block_size: 131072
+   vocab_size: 131072
+   padded_vocab_size: 131072
+   n_layer: 32
+   n_head: 8
+   n_embd: 512
+   n_query_groups: 8
+   rotary_percentage: 1.0
+   parallel_residual: False
+   bias: False
+   norm_class_name: "RMSNorm"
+   mlp_class_name: "LLaMAMLP"
+   intermediate_size: 2048 # n_embd * 4
+   norm_eps: 1e-5
+   rope_base: 310000 # https://arxiv.org/pdf/2405.14591
+   head_size: 128 # 2 * (n_embd / n_head)
+
+ # Directory in which to save checkpoints and logs. If running in a Lightning Studio Job, look for it in
+ # /teamspace/jobs/<job-name>/share. (type: <class 'Path'>, default: out/pretrain)
+ out_dir: "../out/pretrain-core-4/"
+
+ # The precision to use for pretraining. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
+ # precision: bf16-mixed
+ precision: bf16-true
+
+ # Optional path to a checkpoint directory to initialize the model from.
+ # Useful for continued pretraining. Mutually exclusive with ``resume``. (type: Optional[Path], default: null)
+ initial_checkpoint_dir: "../out/pretrain-core-3/checkpoint"
+
+ # Path to a checkpoint directory to resume from in case training was interrupted, or ``True`` to resume
+ # from the latest checkpoint in ``out_dir``. An error will be raised if no checkpoint is found. Passing
+ # ``'auto'`` will resume from the latest checkpoint but not error if no checkpoint exists.
+ # (type: Union[bool, Literal["auto"], Path], default: False)
+ resume:
+
+ # Data-related arguments. If not provided, the default is ``litgpt.data.TinyLlama``.
+ data:
+   class_path: LitData
+
+   init_args:
+     data_path: "../core-data-4-8193-16385-16385-1000/"
+     num_workers: 32
+
+ # Training-related arguments. See ``litgpt.args.TrainArgs`` for details
+ train:
+   # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
+   save_interval: 10
+
+   # Number of iterations between logging calls (type: int, default: 1)
+   log_interval: 1
+
+   # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 512)
+   # global_batch_size: 512
+   global_batch_size: 256
+
+   # Number of samples per data-parallel rank (type: int, default: 4)
+   micro_batch_size: 1
+
+   # Number of iterations with learning rate warmup active (type: int, default: 2000)
+   lr_warmup_steps: 0
+
+   # Number of epochs to train on (type: Optional[int], default: null)
+   epochs:
+
+   # Total number of tokens to train on (type: Optional[int], default: 3000000000000)
+   max_tokens: 612897310
+
+   # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
+   max_steps:
+
+   # Limits the length of samples. Off by default (type: Optional[int], default: null)
+   max_seq_length: 16384
+
+   # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: False)
+   tie_embeddings: true
+
+   # (type: Optional[float], default: 1.0)
+   max_norm: 1.0
+
+   # (type: float, default: 4e-05)
+   min_lr: 5e-5
+
+ # Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
+ eval:
+   # Number of optimizer steps between evaluation calls (type: int, default: 1000)
+   interval: 10
+
+   # Number of tokens to generate (type: Optional[int], default: null)
+   max_new_tokens:
+
+   # Number of iterations (type: int, default: 100)
+   max_iters: 100
+
+   # Whether to evaluate on the validation set at the beginning of the training
+   initial_validation: true
+
+   # Whether to evaluate on the validation set at the end of the training
+   final_validation: true
+
+ # Optimizer-related arguments
+
+ # optimizer:
+ #   class_path: torch.optim.AdamW
+ #   # class_path: torchao.prototype.low_bit_optim.AdamW8bit
+ #   # class_path: torchao.prototype.low_bit_optim.AdamW4bit
+ #   # class_path: bitsandbytes.optim.AdamW8bit
+ #   # class_path: bitsandbytes.optim.PagedAdamW8bit
+ #   init_args:
+ #     # (type: float, default: 0.001)
+ #     lr: 3e-4
+ #     # (type: float, default: 0.01)
+ #     weight_decay: 0.01
+ #     # (type: tuple, default: (0.9,0.999))
+ #     betas:
+ #       - 0.9
+ #       - 0.999
+
+ optimizer:
+   class_path: sophia_opt.SophiaG
+   init_args:
+     lr: 1e-4
+     betas:
+       - 0.9
+       - 0.95
+     rho: 0.05
+     weight_decay: 0.1
+
+ # How many devices/GPUs to use. Uses all GPUs by default. (type: Union[int, str], default: auto)
+ devices: auto
+
+ # How many nodes to use. (type: int, default: 1)
+ num_nodes: 1
+
+ # Optional path to the tokenizer dir that was used for preprocessing the dataset. Only some data
+ # modules require this. (type: Optional[Path], default: null)
+ tokenizer_dir: "../tokenizer"
+
+ # The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: tensorboard)
+ logger_name: "wandb"
+
+ # The random seed to use for reproducibility. (type: int, default: 42)
+ seed: 23
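For orientation, the train block above implies roughly the following per-optimizer-step token budget on a single GPU. This is a back-of-the-envelope sketch; it assumes gradient accumulation works out to global_batch_size / (micro_batch_size * devices) and that every packed sample is max_seq_length tokens long:

```python
# Back-of-the-envelope step budget for pretrain_core_model_4.yaml (assumptions noted above).
global_batch_size = 256
micro_batch_size = 1
devices = 1
max_seq_length = 16384
max_tokens = 612_897_310

grad_accum = global_batch_size // (micro_batch_size * devices)  # 256 micro-batches per optimizer step
tokens_per_step = global_batch_size * max_seq_length            # 4,194,304 tokens per optimizer step
approx_steps = max_tokens / tokens_per_step                     # ~146 optimizer steps to reach max_tokens

print(grad_accum, tokens_per_step, round(approx_steps))
```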