---
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-classification
---
# Skywork-Reward-V2

<div align="center">
  <img src="assets/skywork_logo.png" width="60%" alt="Skywork-Reward-V2"/>
</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/Skywork/Skywork-Reward-V2-Llama-3.1-8B/resolve/main/assets/Skywork_Reward_V2.pdf" target="_blank">
    <img alt="Paper" src="https://img.shields.io/badge/📖%20Paper-Skywork--Reward--V2-4D5EFF?style=flat-square&labelColor=202124"/>
  </a>
  <a href="https://huggingface.co/collections/Skywork/skywork-reward-v2-685cc86ce5d9c9e4be500c84" target="_blank">
    <img alt="Models" src="https://img.shields.io/badge/🤗_Hugging_Face-Skywork-4D5EFF?style=flat-square&labelColor=202124"/>
  </a>
</div>

## 🔥 Highlights

**Skywork-Reward-V2** is a series of reward models designed for versatility across a wide range of tasks, trained on a mixture of 26 million carefully curated preference pairs. While the Skywork-Reward-V2 series remains based on the Bradley-Terry model, we push the boundaries of training data scale and quality to achieve superior performance. Compared to the first generation of Skywork-Reward, the Skywork-Reward-V2 series offers the following major improvements:

- **Trained on a significantly larger and higher-quality preference data mixture**, consisting of **26 million preference pairs** curated via a large-scale human-LLM synergistic pipeline.
- **State-of-the-art performance on seven major reward model benchmarks**, including RewardBench v1, RewardBench v2, PPE Preference, PPE Correctness, RMB, RM-Bench, and JudgeBench.
- **Available in eight models across multiple sizes**, with the smallest 0.6B variant, *Skywork-Reward-V2-Qwen3-0.6B*, nearly matching the average performance of our previous best model, Skywork-Reward-Gemma-2-27B-v0.2. The largest 8B version, *Skywork-Reward-V2-Llama-3.1-8B*, surpasses all existing reward models across all benchmarks on average. Our top experimental model, *Skywork-Reward-V2-Llama-3.1-8B-40M*, **outperforms all existing reward models on every benchmark**.

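For reference, the Bradley-Terry setup means each model outputs a single scalar reward for a prompt-response pair, and training minimizes the standard pairwise objective shown below (a generic formulation, not a training detail specific to this release):

$$
\mathcal{L}_{\mathrm{BT}} = -\log \sigma\big(r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})\big)
$$

Here, σ is the sigmoid function and the two terms are the scalar rewards assigned to the chosen and rejected responses for the same prompt.
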
## 📊 Evaluation

In the following table, we categorize the models into two types: Bradley-Terry (BT) reward models and generative reward models. The Skywork-Reward-V2 series outperforms models in both categories at much smaller model sizes.

| Category | Model | RewardBench v1 | RewardBench v2 | PPE Preference | PPE Correctness | RMB | RM-Bench | JudgeBench | Avg. |
|:-----------------:|:---------------------------------------|:--------------:|:--------------:|:--------------:|:---------------:|:--------:|:--------:|:----------:|:--------:|
| **Bradley-Terry** | Llama-3-OffsetBias-RM-8B | 89.0 | 64.8 | 59.2 | 64.1 | 57.8 | 71.3 | 63.5 | 67.1 |
| | ArmoRM-Llama3-8B-v0.1 | 90.4 | 66.5 | 60.6 | 60.6 | 64.6 | 69.3 | 59.7 | 67.4 |
| | Internlm2-20b-reward | 90.2 | 56.3 | 61.0 | 63.0 | 62.9 | 68.3 | 64.3 | 66.6 |
| | Skywork-Reward-Llama-3.1-8B-v0.2 | 93.1 | 71.8 | 62.2 | 62.5 | 66.6 | 72.1 | 62.9 | 70.2 |
| | LDL-Reward-Gemma-2-27B-v0.1 | 95.0 | 72.5 | 62.4 | 63.9 | 67.9 | 71.1 | 64.2 | 71.0 |
| | Skywork-Reward-Gemma-2-27B-v0.2 | 94.3 | 75.3 | 63.6 | 61.9 | 69.4 | 70.0 | 66.5 | 71.6 |
| | INF-ORM-Llama3.1-70B | 95.1 | 76.5 | 64.2 | 64.4 | 70.5 | 73.8 | 70.2 | 73.5 |
| **Generative** | GPT-4o | 86.7 | 64.9 | 67.7 | - | 73.8 | - | 59.8 | - |
| | Claude-3.5-Sonnet | 84.2 | 64.7 | 67.3 | - | 70.6 | - | 64.8 | - |
| | DeepSeek-GRM-27B | 88.5 | - | 65.3 | 60.4 | 69.0 | - | - | - |
| | DeepSeek-GRM-27B (w/ MetaRM) | 90.4 | - | 67.2 | 63.2 | 70.3 | - | - | - |
| | RM-R1-Qwen-Instruct-32B | 92.9 | - | - | - | 73.0 | 79.1 | - | - |
| | RM-R1-DeepSeek-Distill-Qwen-32B | 90.9 | - | - | - | 69.8 | 83.9 | - | - |
| | EvalPlanner (Llama-3.1-70B) | 93.9 | - | - | - | - | 80.0 | 50.9 | - |
| | EvalPlanner (Llama-3.3-70B) | 93.8 | - | - | - | - | 82.1 | 56.6 | - |
| | J1-Llama-8B | 85.7 | - | 60.3 | 59.2 | - | 73.4 | 42.0 | - |
| | J1-Llama-8B (Maj@32) | - | - | 60.6 | 61.9 | - | - | - | - |
| | J1-Llama-70B | 93.3 | - | 66.3 | 72.9 | - | 82.7 | 60.0 | - |
| | J1-Llama-70B (Maj@32) | - | - | 67.0 | 73.7 | - | - | - | - |
| **Bradley-Terry** | **Skywork-Reward-V2-Qwen3-0.6B** | 85.2 | 61.3 | 65.3 | 68.3 | 74.5 | 74.4 | 67.6 | 70.9 |
| | **Skywork-Reward-V2-Qwen3-1.7B** | 90.3 | 68.3 | 67.6 | 70.5 | 78.1 | 78.7 | 72.9 | 75.2 |
| | **Skywork-Reward-V2-Qwen3-4B** | 93.4 | 75.5 | 69.5 | 74.7 | 80.6 | 81.6 | 69.3 | 77.8 |
| | **Skywork-Reward-V2-Qwen3-8B** | 93.7 | 78.2 | 70.6 | 75.1 | 81.2 | 82.6 | 73.4 | 79.3 |
| | **Skywork-Reward-V2-Llama-3.2-1B** | 89.9 | 64.3 | 66.6 | 67.4 | 76.7 | 76.4 | 65.0 | 72.3 |
| | **Skywork-Reward-V2-Llama-3.2-3B** | 93.0 | 74.7 | 69.1 | 72.1 | 80.5 | 81.1 | 69.2 | 77.1 |
| | **Skywork-Reward-V2-Llama-3.1-8B** | 96.4 | 84.1 | 77.3 | 83.4 | 86.4 | 92.8 | 80.0 | 85.8 |
| | **Skywork-Reward-V2-Llama-3.1-8B-40M** | **97.8** | **86.5** | **79.8** | **87.2** | **89.3** | **96.0** | **83.4** | **88.6** |

## 💡 Recommended Usage

We make the following recommendations for using the Skywork-Reward-V2 model series:

1. For most use cases, we recommend Skywork-Reward-V2-Llama-3.1-8B; consider the smaller variants for low-resource settings.
2. All models are trained on preference data with a maximum length of 16,384 tokens, so we recommend performing inference within this limit (see the sketch after this list).
3. Do not include system prompts when using chat templates.

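As a concrete illustration of recommendations 2 and 3, the snippet below is a minimal, hypothetical sketch: it builds a conversation without a system message and checks that the templated input fits within the 16,384-token limit before scoring. How you handle over-length inputs (skipping, truncating, or splitting) is up to your pipeline.

```python
# Minimal sketch for recommendations 2 and 3 (illustrative only).
from transformers import AutoTokenizer

MAX_LEN = 16384  # maximum length seen during training
tokenizer = AutoTokenizer.from_pretrained("Skywork/Skywork-Reward-V2-Llama-3.1-8B")

# Use only "user" and "assistant" turns; do not add a "system" message.
conv = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "2 + 2 = 4."},
]

formatted = tokenizer.apply_chat_template(conv, tokenize=False)
num_tokens = len(tokenizer(formatted).input_ids)
if num_tokens > MAX_LEN:
    # This sketch simply skips over-length conversations.
    print(f"Skipping conversation with {num_tokens} tokens (> {MAX_LEN}).")
else:
    print(f"OK: {num_tokens} tokens, within the training-time limit.")
```
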
Special note on Skywork-Reward-V2-Llama-3.1-8B-40M:

> [!NOTE]
> Although Skywork-Reward-V2-Llama-3.1-8B-40M outperforms the original Skywork-Reward-V2-Llama-3.1-8B, we consider it an experimental variant. This model is trained on the complete set of 40 million preference pairs, with about one third of the chosen-rejected pairs flipped. We recommend using this model solely for research or non-production purposes.

## 📦 Model Usage

### 📝 Simple Example in `transformers`

The example below shows how to run inference with Hugging Face Transformers to obtain reward scores for conversations. For better data parallelism and throughput, we recommend pairing it with [Accelerate](https://github.com/huggingface/accelerate) when multiple GPUs are available (see the sketch after the example).

```python
import torch

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load model and tokenizer
device = "cuda:0"
model_name = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"
rm = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map=device,
    attn_implementation="flash_attention_2",
    num_labels=1,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Jane has 12 apples. She gives 4 apples to her friend Mark, then buys 1 more apple, and finally splits all her apples equally among herself and her 2 siblings. How many apples does each person get?"
response1 = "1. Jane starts with 12 apples and gives 4 to Mark. 12 - 4 = 8. Jane now has 8 apples.\n2. Jane buys 1 more apple. 8 + 1 = 9. Jane now has 9 apples.\n3. Jane splits the 9 apples equally among herself and her 2 siblings (3 people in total). 9 ÷ 3 = 3 apples each. Each person gets 3 apples."
response2 = "1. Jane starts with 12 apples and gives 4 to Mark. 12 - 4 = 8. Jane now has 8 apples.\n2. Jane buys 1 more apple. 8 + 1 = 9. Jane now has 9 apples.\n3. Jane splits the 9 apples equally among her 2 siblings (2 people in total). 9 ÷ 2 = 4.5 apples each. Each person gets 4 apples."

conv1 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response1}]
conv2 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response2}]

# Format and tokenize the conversations
conv1_formatted = tokenizer.apply_chat_template(conv1, tokenize=False)
conv2_formatted = tokenizer.apply_chat_template(conv2, tokenize=False)
# Remove the potential duplicate BOS token (the tokenizer adds one again below)
if tokenizer.bos_token is not None and conv1_formatted.startswith(tokenizer.bos_token):
    conv1_formatted = conv1_formatted[len(tokenizer.bos_token):]
if tokenizer.bos_token is not None and conv2_formatted.startswith(tokenizer.bos_token):
    conv2_formatted = conv2_formatted[len(tokenizer.bos_token):]
conv1_tokenized = tokenizer(conv1_formatted, return_tensors="pt").to(device)
conv2_tokenized = tokenizer(conv2_formatted, return_tensors="pt").to(device)

# Get the reward scores
with torch.no_grad():
    score1 = rm(**conv1_tokenized).logits[0][0].item()
    score2 = rm(**conv2_tokenized).logits[0][0].item()
print(f"Score for response 1: {score1}")
print(f"Score for response 2: {score2}")

# Expected output:
# Score for response 1: 23.0
# Score for response 2: 3.59375
```

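For reference, below is a minimal, hypothetical sketch of how a data-parallel setup with Accelerate could look: each process loads its own model replica, scores a shard of the conversations, and the per-process results are gathered at the end. The script name, launch command, and example conversations are placeholders; adapt them to your setup.

```python
# Hypothetical data-parallel scoring sketch using Accelerate (not an official recipe).
# Save as score_shards.py (placeholder name) and launch with:
#   accelerate launch --num_processes <NUM_GPUS> score_shards.py
import torch
from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForSequenceClassification, AutoTokenizer

accelerator = Accelerator()
model_name = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"

# Each process loads its own full model replica on its assigned GPU.
rm = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    num_labels=1,
).to(accelerator.device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Replace with your full list of conversations (each a list of user/assistant turns).
convs = [
    [{"role": "user", "content": "What is 2 + 2?"}, {"role": "assistant", "content": "2 + 2 = 4."}],
    [{"role": "user", "content": "What is 2 + 2?"}, {"role": "assistant", "content": "2 + 2 = 5."}],
]

# Shard the conversations across processes and score each shard locally.
with accelerator.split_between_processes(convs) as shard:
    scores = []
    for conv in shard:
        text = tokenizer.apply_chat_template(conv, tokenize=False)
        if tokenizer.bos_token is not None and text.startswith(tokenizer.bos_token):
            text = text[len(tokenizer.bos_token):]
        inputs = tokenizer(text, return_tensors="pt").to(accelerator.device)
        with torch.no_grad():
            scores.append(rm(**inputs).logits[0][0].item())

# Gather the per-process score lists back into a single list.
all_scores = gather_object(scores)
if accelerator.is_main_process:
    print(f"Scored {len(all_scores)} conversations.")
```
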
### ⚡ Distributed Inference via SGLang

For optimal throughput when scoring a large number (e.g., millions) of conversations, we recommend the following distributed approach via SGLang.

Install the latest version of [SGLang](https://docs.sglang.ai/index.html):

```bash
pip install "sglang[all]>=0.4.7.post1"
```

Launch model servers (assuming `NUM_GPUS` GPUs are available):

```bash
NUM_GPUS=8
for (( i=0; i<NUM_GPUS; i++ )); do
  echo "Starting server on port $((8000+i)) with GPU: $i"
  CUDA_VISIBLE_DEVICES=$i python -m sglang.launch_server \
    --model-path Skywork/Skywork-Reward-V2-Llama-3.1-8B \
    --mem-fraction-static 0.9 \
    --tp 1 \
    --host 127.0.0.1 \
    --port $((8000+i)) \
    --context-length 16384 \
    --is-embedding \
    &
done
```

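Because the servers start in the background, you may want to wait until they are reachable before sending requests. The helper below is a minimal, framework-agnostic sketch that simply probes the TCP ports; it is not an SGLang-specific health check.

```python
# Minimal sketch: block until every launched server accepts TCP connections.
# Note: this only verifies that the ports are open; it is not a model-readiness check.
import socket
import time

def wait_for_ports(ports, host="127.0.0.1", timeout=600):
    deadline = time.time() + timeout
    pending = set(ports)
    while pending and time.time() < deadline:
        for port in sorted(pending):
            try:
                with socket.create_connection((host, port), timeout=1):
                    pending.discard(port)
            except OSError:
                pass
        time.sleep(2)
    if pending:
        raise TimeoutError(f"Servers on ports {sorted(pending)} did not become reachable.")

wait_for_ports([8000 + i for i in range(8)])
```
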
Once the servers are ready, we can query them for rewards; you should see reward values similar to those in the `transformers` example above. A sketch for spreading requests across all servers follows the example.

```python
import requests
from transformers import AutoTokenizer


model_name_or_path = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"
base_urls = [f"http://127.0.0.1:{8000 + i}/classify" for i in range(8)]
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)


def process_convs(convs, base_url, tokenizer, model_name_or_path):
    # Format the conversations and send them to one server's /classify endpoint.
    payload = {"model": model_name_or_path}
    convs_formatted = []
    for conv in convs:
        conv = tokenizer.apply_chat_template(conv, tokenize=False)
        if tokenizer.bos_token is not None and conv.startswith(tokenizer.bos_token):
            conv = conv[len(tokenizer.bos_token):]
        convs_formatted.append(conv)

    payload.update({"text": convs_formatted})
    rewards = []
    try:
        responses = requests.post(base_url, json=payload).json()
        for response in responses:
            # With --is-embedding, the score is returned under the "embedding" field.
            rewards.append(response["embedding"][0])
        assert len(rewards) == len(convs), f"Expected {len(convs)} rewards, got {len(rewards)}"
        return rewards
    except Exception as e:
        print(f"Error: {e}")
        return [None] * len(convs)


prompt = "Jane has 12 apples. She gives 4 apples to her friend Mark, then buys 1 more apple, and finally splits all her apples equally among herself and her 2 siblings. How many apples does each person get?"
response1 = "1. Jane starts with 12 apples and gives 4 to Mark. 12 - 4 = 8. Jane now has 8 apples.\n2. Jane buys 1 more apple. 8 + 1 = 9. Jane now has 9 apples.\n3. Jane splits the 9 apples equally among herself and her 2 siblings (3 people in total). 9 ÷ 3 = 3 apples each. Each person gets 3 apples."
response2 = "1. Jane starts with 12 apples and gives 4 to Mark. 12 - 4 = 8. Jane now has 8 apples.\n2. Jane buys 1 more apple. 8 + 1 = 9. Jane now has 9 apples.\n3. Jane splits the 9 apples equally among her 2 siblings (2 people in total). 9 ÷ 2 = 4.5 apples each. Each person gets 4 apples."

conv1 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response1}]
conv2 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response2}]

rewards = process_convs([conv1, conv2], base_urls[0], tokenizer, model_name_or_path)
print(f"Score for response 1: {rewards[0]}")
print(f"Score for response 2: {rewards[1]}")

# Expected output:
# Score for response 1: 23.125
# Score for response 2: 3.578125
```

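In practice you will want to spread requests across all launched servers rather than only the first one. The sketch below is one simple way to do that with a thread pool, reusing the `process_convs` helper, `base_urls`, and tokenizer from the example above; the chunk size is an arbitrary choice, not a tuned value.

```python
# Hypothetical sketch: shard conversations across all servers and query them concurrently.
from concurrent.futures import ThreadPoolExecutor

def score_all(convs, base_urls, tokenizer, model_name_or_path, chunk_size=64):
    # Split the conversations into chunks and assign chunks to servers round-robin.
    chunks = [convs[i : i + chunk_size] for i in range(0, len(convs), chunk_size)]
    rewards = []
    with ThreadPoolExecutor(max_workers=len(base_urls)) as pool:
        futures = [
            pool.submit(process_convs, chunk, base_urls[i % len(base_urls)], tokenizer, model_name_or_path)
            for i, chunk in enumerate(chunks)
        ]
        # Collect results in submission order so rewards line up with the input conversations.
        for future in futures:
            rewards.extend(future.result())
    return rewards

all_rewards = score_all([conv1, conv2] * 100, base_urls, tokenizer, model_name_or_path)
print(f"Scored {len(all_rewards)} conversations.")
```
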
## 📃 License

This model repository, including the model weights and code, is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). Reward models in the Skywork-Reward-V2 series derived from Qwen3 support commercial use and permit modifications and the creation of derivative works, provided that all conditions of the Apache 2.0 License are met and proper attribution is given. Please note that:

- Skywork-Reward-V2-Qwen3-0.6B, Skywork-Reward-V2-Qwen3-1.7B, Skywork-Reward-V2-Qwen3-4B, and Skywork-Reward-V2-Qwen3-8B are derived from the Qwen3 model series of corresponding sizes, which are originally licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
- Skywork-Reward-V2-Llama-3.1-8B and Skywork-Reward-V2-Llama-3.1-8B-40M are both derived from Llama-3.1-8B-Instruct and follow the [Llama 3.1 community license](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/blob/main/LICENSE).
- Skywork-Reward-V2-Llama-3.2-1B and Skywork-Reward-V2-Llama-3.2-3B are derived from Llama-3.2-1B-Instruct and Llama-3.2-3B-Instruct, respectively, and follow the [Llama 3.2 community license](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct/blob/main/LICENSE.txt).

## 📧 Contact

If you have any questions, please feel free to reach out to us at `yuhao.liuu at kunlun-inc dot com` and `liang.zeng at kunlun-inc dot com`.

## 📚 Citation

If you find our work useful, please cite it as follows. The arXiv link to our technical report will be available soon.

```bibtex
@article{liu2025skywork,
  title={Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy},
  author={Liu, Chris Yuhao and Zeng, Liang and Xiao, Yuzhen and He, Jujie and Liu, Jiacai and Wang, Chaojie and Yan, Rui and Shen, Wei and Zhang, Fuxiang and Xu, Jiacheng and Liu, Yang and Zhou, Yahui},
  journal={arXiv preprint},
  year={2025}
}
```