chenshi910814 committed (verified)
Commit e41870c · 1 Parent(s): 85713d5

<< YOUR USER NAME HERE>>/llama381binstruct_summarize_short_updated

README.md CHANGED
@@ -1,83 +1,58 @@
  ---
  base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
- datasets:
- - generator
- library_name: peft
- license: llama3.1
  tags:
  - trl
  - sft
- - generated_from_trainer
- model-index:
- - name: llama381binstruct_summarize_short
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # llama381binstruct_summarize_short
-
- This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct) on the generator dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.8652

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 30
- - training_steps: 500

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 1.4098 | 2.5 | 25 | 1.0767 |
- | 0.3872 | 5.0 | 50 | 1.2700 |
- | 0.0902 | 7.5 | 75 | 1.6628 |
- | 0.0294 | 10.0 | 100 | 1.5043 |
- | 0.0147 | 12.5 | 125 | 1.6301 |
- | 0.007 | 15.0 | 150 | 1.6782 |
- | 0.005 | 17.5 | 175 | 1.7262 |
- | 0.0026 | 20.0 | 200 | 1.7412 |
- | 0.0013 | 22.5 | 225 | 1.7683 |
- | 0.0009 | 25.0 | 250 | 1.8120 |
- | 0.0008 | 27.5 | 275 | 1.8294 |
- | 0.0008 | 30.0 | 300 | 1.8397 |
- | 0.0007 | 32.5 | 325 | 1.8466 |
- | 0.0007 | 35.0 | 350 | 1.8521 |
- | 0.0006 | 37.5 | 375 | 1.8566 |
- | 0.0006 | 40.0 | 400 | 1.8599 |
- | 0.0006 | 42.5 | 425 | 1.8625 |
- | 0.0006 | 45.0 | 450 | 1.8641 |
- | 0.0006 | 47.5 | 475 | 1.8653 |
- | 0.0005 | 50.0 | 500 | 1.8652 |

- ### Framework versions

- - PEFT 0.13.2
- - Transformers 4.45.2
- - Pytorch 2.4.1+cu121
- - Datasets 3.0.1
- - Tokenizers 0.20.1

  ---
  base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
+ library_name: transformers
+ model_name: llama381binstruct_summarize_short
  tags:
+ - generated_from_trainer
  - trl
  - sft
+ licence: license
  ---

+ # Model Card for llama381binstruct_summarize_short

+ This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct).
+ It has been trained using [TRL](https://github.com/huggingface/trl).

+ ## Quick start

+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="chenshi910814/llama381binstruct_summarize_short", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```

+ ## Training procedure

+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenshi910814-northern-arizona-university/huggingface/runs/w09ae3dd)

+ This model was trained with SFT.

+ ### Framework versions

+ - TRL: 0.16.0
+ - Transformers: 4.50.3
+ - Pytorch: 2.6.0+cu124
+ - Datasets: 3.5.0
+ - Tokenizers: 0.21.1

+ ## Citations

+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title = {{TRL: Transformer Reinforcement Learning}},
+     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year = 2020,
+     journal = {GitHub repository},
+     publisher = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
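
The updated card records only that the model was trained with SFT via TRL; the concrete hyperparameters (learning rate 2e-4, per-device train batch size 1, eval batch size 8, seed 42, linear schedule with 30 warmup steps, 500 training steps) appear only in the removed card above. Below is a minimal sketch of what an equivalent TRL SFT setup could look like under those settings. It is illustrative only: the training dataset, output directory, and LoRA rank `r` are placeholders, since none of them are recorded in this commit.

```python
# Hedged sketch of the training run described by the removed card.
# "train_dataset", "output_dir", and the LoRA rank "r" are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("json", data_files="train.json", split="train")  # placeholder dataset

peft_config = LoraConfig(
    r=16,            # rank is not shown in the adapter_config.json diff; placeholder
    lora_alpha=32,   # from adapter_config.json
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="llama381binstruct_summarize_short",  # placeholder
    learning_rate=2e-4,                 # values below are from the removed card
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    lr_scheduler_type="linear",
    warmup_steps=30,
    max_steps=500,
    seed=42,
)

trainer = SFTTrainer(
    model="NousResearch/Meta-Llama-3.1-8B-Instruct",
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```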
adapter_config.json CHANGED
@@ -3,6 +3,9 @@
  "auto_mapping": null,
  "base_model_name_or_path": "NousResearch/Meta-Llama-3.1-8B-Instruct",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
@@ -11,6 +14,7 @@
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_dropout": 0.1,
  "megatron_config": null,
  "megatron_core": "megatron.core",
@@ -20,15 +24,16 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
-   "gate_proj",
-   "v_proj",
    "down_proj",
    "o_proj",
-   "q_proj",
    "k_proj",
    "up_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false
  }

  "auto_mapping": null,
  "base_model_name_or_path": "NousResearch/Meta-Llama-3.1-8B-Instruct",
  "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
+ "lora_bias": false,
  "lora_dropout": 0.1,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "down_proj",
+   "v_proj",
+   "gate_proj",
    "o_proj",
    "k_proj",
+   "q_proj",
    "up_proj"
  ],
  "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
  "use_dora": false,
  "use_rslora": false
  }
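
The adapter_config.json above describes a LoRA adapter on top of the base model, and adapter_model.safetensors (the next diff) holds its roughly 168 MB of weights. Besides the pipeline quick-start in the card, the adapter can also be loaded explicitly through PEFT. This is a hedged sketch, not part of the commit: the repo id is taken from the quick-start example, and the dtype, device placement, prompt, and generation settings are illustrative.

```python
# Sketch: load the LoRA adapter explicitly with PEFT and generate from it.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "chenshi910814/llama381binstruct_summarize_short"  # repo id from the quick-start example

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,  # illustrative; any supported dtype works
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize this meeting transcript in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Print only the newly generated continuation, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```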
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:60d95b10b6e140a9626a7058d5038528f2ff80148dc4569b881db56052046509
- size 40

  version https://git-lfs.github.com/spec/v1
+ oid sha256:5d7d362a843acdf447a18d8ffb7b832093a06f149a32d8f9e3c06fb000e06b7e
+ size 167832240
runs/Apr01_15-58-02_13301a165851/events.out.tfevents.1743523279.13301a165851.225.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf57f648613d6d64c316d3e435be870457b2103264d5342ec475e8267151355f
+ size 29690
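
The commit also adds a TensorBoard event file (an LFS pointer to about 30 KB of logged data). After pulling the LFS objects, the logged scalars can be read back with TensorBoard's event reader; this is a sketch, and the `train/loss` tag name is an assumption about what the trainer logged, so check `acc.Tags()` first.

```python
# Read scalar metrics out of the committed TensorBoard event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Apr01_15-58-02_13301a165851")  # directory containing the event file
acc.Reload()
print(acc.Tags()["scalars"])             # list the scalar tags that were actually logged
for event in acc.Scalars("train/loss"):  # assumed tag name; may differ
    print(event.step, event.value)
```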
tokenizer_config.json CHANGED
@@ -2053,11 +2053,12 @@
  "chat_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|eot_id|>",
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 131072,
  "pad_token": "<|eot_id|>",
- "tokenizer_class": "PreTrainedTokenizerFast"
  }

  "chat_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|eot_id|>",
+ "extra_special_tokens": {},
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 131072,
  "pad_token": "<|eot_id|>",
+ "tokenizer_class": "PreTrainedTokenizer"
  }
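
The chat_template above is the Llama 3.1 instruct format: each message is wrapped as `<|start_header_id|>role<|end_header_id|>\n\n...<|eot_id|>`, the first message is prefixed with the tokenizer's BOS token, and an assistant header is appended at the end to prompt the reply. A small sketch of rendering a conversation through it, assuming the repo id from the quick-start example (the printed prompt shape in the comment assumes the usual Llama 3.1 BOS token, which this diff excerpt does not show):

```python
# Render a conversation through the chat_template shown above to see the exact
# prompt string the model expects.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chenshi910814/llama381binstruct_summarize_short")

messages = [{"role": "user", "content": "Summarize this meeting transcript in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
# Expected shape (per the template, assuming <|begin_of_text|> as bos_token):
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n...<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n
```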
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:589437e567974d8a0a843e35793cbc6225174a09a1e8376ab2c9b5fe31e727a8
- size 5560

  version https://git-lfs.github.com/spec/v1
+ oid sha256:8def6783973a11fe35efc1a9396dd1cdcd164474dff9588cd4d8ac26aa633b32
+ size 5688