vincentmin committed
Commit 37a71b7 · 1 Parent(s): f0d55ff

Update README.md

Files changed (1):
1. README.md (+48 -7)
README.md CHANGED

---
base_model: meta-llama/Llama-2-13b-chat-hf
tags:
- generated_from_trainer
- trl
metrics:
- accuracy
model-index:
- name: llama-2-13b-reward-oasst1
  results: []
datasets:
- tasksource/oasst1_pairwise_rlhf_reward
library_name: peft
pipeline_tag: text-classification
---

  # llama-2-13b-reward-oasst1

This model is a fine-tuned version of [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) on the [tasksource/oasst1_pairwise_rlhf_reward](https://huggingface.co/datasets/tasksource/oasst1_pairwise_rlhf_reward) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4810
- Accuracy: 0.7869

  ## Model description

This is a reward model trained with QLoRA in 4-bit precision. The base model is [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf), for which you need to have accepted the license before you can use it. Once you have been granted access, you can load the reward model as follows:

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer

peft_model_id = "vincentmin/llama-2-13b-reward-oasst1"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model as a single-label sequence classifier in 4-bit precision.
model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    load_in_4bit=True,
    torch_dtype=torch.float16,
)
# Attach the trained reward-model adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_auth_token=True)
model.eval()

# The single logit returned by the classifier head is the reward.
with torch.no_grad():
    reward = model(**tokenizer("prompter: hello world. assistant: foo bar", return_tensors="pt")).logits
print(reward)
```

For best results, one should use the prompt format used during training:
```
prompt = "prompter: <prompt_1> assistant: <response_1> prompter: <prompt_2> ..."
```

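As a minimal sketch of scoring a short conversation in this format, assuming the `model` and `tokenizer` loaded above (the `format_conversation` helper and the example turns are illustrative and not part of this repository):

```python
# Hypothetical helper: flatten (role, text) turns into the training prompt format.
def format_conversation(turns):
    return " ".join(f"{role}: {text}" for role, text in turns)

turns = [
    ("prompter", "What is the capital of France?"),
    ("assistant", "The capital of France is Paris."),
]
inputs = tokenizer(format_conversation(turns), return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher means more preferred
print(reward)
```
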
  ## Intended uses & limitations

This model is intended to be used as a reward model that assigns a scalar score to prompter/assistant conversations, for example to rank candidate responses or to provide a reward signal for RLHF-style fine-tuning. Since the model was trained on oasst1 data, the reward will reflect any biases present in the oasst1 data.

  ## Training and evaluation data

The model was trained using QLoRA and the `trl` library's `RewardTrainer` on the [tasksource/oasst1_pairwise_rlhf_reward](https://huggingface.co/datasets/tasksource/oasst1_pairwise_rlhf_reward) dataset. Examples with more than 512 tokens were filtered out of both the training and evaluation data.

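The exact preprocessing script is not included in this card; the sketch below illustrates one way the length filtering could be done, assuming the dataset exposes `prompt`, `chosen`, and `rejected` columns and tokenizing into the column names `RewardTrainer` expects:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf", use_auth_token=True)
dataset = load_dataset("tasksource/oasst1_pairwise_rlhf_reward", split="train")

def preprocess(example):
    # Tokenize the preferred and the rejected continuation of the same prompt.
    chosen = tokenizer(example["prompt"] + example["chosen"])
    rejected = tokenizer(example["prompt"] + example["rejected"])
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

dataset = dataset.map(preprocess)
# Drop pairs where either side is longer than the 512-token limit mentioned above.
dataset = dataset.filter(
    lambda ex: len(ex["input_ids_chosen"]) <= 512 and len(ex["input_ids_rejected"]) <= 512
)
```
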
  ## Training procedure
  ### Training hyperparameters

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

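For reference, these settings correspond roughly to the `BitsAndBytesConfig` sketched below; this is an illustration of the listed values, not the exact object serialized with the adapter:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute, matching the settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```
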
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- max_seq_length: 512

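As a rough sketch of how these settings could be wired together with `trl`'s `RewardTrainer` (the actual training script is not part of this card, and the LoRA settings below are assumptions rather than reported values):

```python
from peft import LoraConfig
from transformers import TrainingArguments
from trl import RewardTrainer

# Assumed LoRA settings for illustration only; the card does not report them.
peft_config = LoraConfig(task_type="SEQ_CLS", r=16, lora_alpha=32, lora_dropout=0.05)

training_args = TrainingArguments(
    output_dir="llama-2-13b-reward-oasst1",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    num_train_epochs=1,
    lr_scheduler_type="linear",
)

trainer = RewardTrainer(
    model=model,              # the 4-bit base model loaded as a sequence classifier
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=dataset,    # the preprocessed pairwise dataset from the sketch above
    peft_config=peft_config,
    max_length=512,           # matches max_seq_length above
)
trainer.train()
```
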
### Training results

  ### Framework versions

- PEFT 0.5.0.dev0 (with https://github.com/huggingface/peft/pull/755)
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3