---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-8B-Base
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9e863409-5502-4d0b-9027-9eff9972345a
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: Qwen/Qwen3-8B-Base
bf16: true
chat_template: llama3
datasets:
- data_files:
  - a4d38a814b208fbf_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_input: input
    field_instruction: instruct
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/9e863409-5502-4d0b-9027-9eff9972345a
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 3483
micro_batch_size: 4
mlflow_experiment_name: /tmp/a4d38a814b208fbf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 348
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 32391185-cb4f-4ffe-b8f6-62504519c53c
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 32391185-cb4f-4ffe-b8f6-62504519c53c
warmup_steps: 100
weight_decay: 0.01

```

</details><br>

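The `type` block in the config above maps dataset fields onto a simple prompt template. A minimal sketch of that rendering logic, assuming the field semantics shown in the config (the `render_prompt` helper is illustrative, not part of axolotl):

```python
# Illustrative reconstruction of the prompt template from the config's
# `format`, `no_input_format`, and `system_format` fields.
def render_prompt(instruction: str, input_text: str = "", system: str = "") -> str:
    if input_text:
        body = f"{instruction} {input_text}"  # format: '{instruction} {input}'
    else:
        body = instruction                    # no_input_format: '{instruction}'
    # system_format is '{system}'; system_prompt defaults to ''
    return f"{system}{body}" if system else body

print(render_prompt("Summarize:", "LoRA adds low-rank updates to frozen weights."))
```
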
# 9e863409-5502-4d0b-9027-9eff9972345a

This model is a LoRA adapter fine-tuned from [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) on the `a4d38a814b208fbf_train_data.json` dataset listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.4513

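Because this repository contains a PEFT LoRA adapter rather than full model weights, it is loaded on top of the base model. A minimal usage sketch (the repo id comes from `hub_model_id` in the config; dtype, device placement, and the example prompt are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained on top of.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B-Base",
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",           # assumption: automatic device placement
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B-Base")

# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "apriasmoro/9e863409-5502-4d0b-9027-9eff9972345a")
model.eval()

prompt = "Summarize: LoRA adds trainable low-rank matrices to frozen weights."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)  # eval_max_new_tokens: 256
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Calling `model.merge_and_unload()` folds the adapter into the base weights if a standalone model is preferred.
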
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent optimizer and scheduler setup follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32 (4 per device × 8 devices × 1 gradient-accumulation step)
- total_eval_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 3483

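A minimal sketch of that optimizer and schedule construction using the standard bitsandbytes and transformers APIs (the `nn.Linear` stand-in replaces the actual PEFT-wrapped model; the weight decay of 0.01 comes from the config):

```python
import torch.nn as nn
import bitsandbytes as bnb
from transformers import get_cosine_schedule_with_warmup

model = nn.Linear(8, 8)  # stand-in for the PEFT-wrapped model

# 8-bit AdamW with the hyperparameters listed above.
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=2e-4,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.01,
)

# Cosine decay preceded by 100 linear warmup steps, over 3483 total steps.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=3483,
)
```
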
### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log        | 0.0096  | 1    | 1.0573          |
| 0.0774        | 5.5865  | 581  | 0.2366          |
| 0.0054        | 11.1731 | 1162 | 0.3158          |
| 0.0016        | 16.7596 | 1743 | 0.3904          |
| 0.0002        | 22.3462 | 2324 | 0.4352          |
| 0.0001        | 27.9327 | 2905 | 0.4513          |

Validation loss bottoms out at step 581 (0.2366) and climbs steadily afterwards while training loss approaches zero, a typical overfitting pattern; intermediate checkpoints (saved every 348 steps per the config) may generalize better than the final one.

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1