SiweiWu and nielsr (HF Staff) committed
Commit 0d261ee · verified · 1 Parent(s): 28da199

Add pipeline tag, library name, and improve model card (#1)


- Add pipeline tag, library name, and improve model card (9692008b31174c1853161dd0f94a7d4da222066c)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +87 -135
README.md CHANGED
@@ -1,199 +1,151 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details

- ### Model Description
-
- This repository contains the Qwen2.5-Instruct-7B-COIG-P model of the paper [COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values](https://huggingface.co/papers/2504.05535).
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
  - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

  - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-

  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics

- #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

  #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]

  ### Results

- [More Information Needed]

  #### Summary

-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

  ---
  library_name: transformers
+ pipeline_tag: text-generation
+ license: cc-by-nc-4.0
+ tags:
+ - qwen
+ - instruction-following
+ - chinese
  ---

+ # Model Card for Qwen2.5-Instruct-7B-COIG-P

+ This repository contains the Qwen2.5-Instruct-7B-COIG-P model, a 7B-parameter Large Language Model fine-tuned for instruction following using the COIG-P dataset, as described in the paper [COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values](https://huggingface.co/papers/2504.05535).

  ## Model Details

  - **Developed by:** [More Information Needed]
+ - **Funded by:** [More Information Needed]
+ - **Shared by:** [More Information Needed]
+ - **Model type:** Large Language Model (LLM)
+ - **Language(s) (NLP):** Chinese (zh)
+ - **License:** cc-by-nc-4.0
+ - **Finetuned from model:** Qwen2.5-7B-Instruct

+ ### Model Sources

  - **Repository:** [More Information Needed]
+ - **Paper:** [COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values](https://huggingface.co/papers/2504.05535)

  ## Uses

  ### Direct Use

+ This model is designed for text generation and is particularly well suited to Chinese-language tasks. It can be used for generating creative text, translating between languages, and answering questions.

+ ### Downstream Use

+ The model can be fine-tuned for a variety of downstream NLP tasks, including chatbots, code generation, summarization, and question answering. [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) can be used to fine-tune the model.
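
For illustration, the sketch below sets up parameter-efficient LoRA fine-tuning with the PEFT library. This is a minimal sketch, not the LLaMA-Factory workflow mentioned above; the rank, alpha, and target module names are assumptions chosen for a Qwen-style architecture.

```python
# Minimal LoRA sketch with PEFT (illustrative only; not the LLaMA-Factory setup).
# The target_modules below assume Qwen-style attention projection names.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "m-a-p/Qwen2.5-Instruct-7B-COIG-P",
    torch_dtype="auto",
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                 # rank of the low-rank update matrices
    lora_alpha=32,        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed module names
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
# From here, train with your preferred framework (e.g., transformers.Trainer or LLaMA-Factory).
```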
 
 
 
 

  ### Out-of-Scope Use

+ The model's performance may be limited on tasks that differ significantly from those it was trained on, or on tasks requiring understanding of languages other than Chinese.

  ## Bias, Risks, and Limitations

+ The model may exhibit biases present in its training data, particularly reflecting biases inherent in the Chinese language and culture. Users should be aware of potential biases and limitations and use the model responsibly and ethically, avoiding applications that could perpetuate or amplify harmful biases.

  ## How to Get Started with the Model

+ Use the following code to get started with the Qwen2.5-Instruct-7B-COIG-P model:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ device = "cuda"  # or "cpu" if you don't have a GPU
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "m-a-p/Qwen2.5-Instruct-7B-COIG-P",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("m-a-p/Qwen2.5-Instruct-7B-COIG-P")
+
+ prompt = "给我一个大型语言模型的简短介绍。"  # Give me a short introduction to large language models.
+ messages = [
+     {"role": "system", "content": "你是一个乐于助人的助手。"},  # You are a helpful assistant.
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
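
If you prefer sampled rather than greedy outputs, standard generation parameters can be passed to `model.generate`. The snippet continues from the example above, and the values shown are illustrative rather than recommended defaults for this model.

```python
# Continues from the example above (model and model_inputs already defined).
# Sampling settings are illustrative; adjust them for your use case.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,          # enable sampling instead of greedy decoding
    temperature=0.7,
    top_p=0.8,
    repetition_penalty=1.05,
)
```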

  ## Training Details

  ### Training Data

+ The model was trained on the COIG-P dataset ([https://huggingface.co/datasets/m-a-p/COIG-P](https://huggingface.co/datasets/m-a-p/COIG-P)). This dataset consists of 101k Chinese preference pairs across six domains: Chat, Code, Math, Logic, Novel, and Role.
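
For a quick look at the training data, COIG-P can be loaded with the `datasets` library. The split handling and the prompt/chosen/rejected field names below are assumptions; check the dataset card for the actual schema.

```python
# Illustrative only: split and column names are assumptions; see the COIG-P dataset card.
from datasets import load_dataset

dataset = load_dataset("m-a-p/COIG-P")
print(dataset)  # shows the available splits and columns

split = next(iter(dataset.values()))  # take the first available split
example = split[0]
for field in ("prompt", "chosen", "rejected"):  # assumed preference-pair fields
    if field in example:
        print(f"{field}: {str(example[field])[:100]}")
```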
 
 

  ### Training Procedure

+ The model was trained using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).

  #### Training Hyperparameters

+ - **Training regime:** [More Information Needed]

+ #### Speeds, Sizes, Times

+ - **Checkpoint size:** [More Information Needed]
+ - **Training time:** [More Information Needed]

  ## Evaluation

  ### Testing Data, Factors & Metrics

+ The model's performance is evaluated using the Chinese Reward Benchmark (CRBench) and AlignBench.

+ #### Testing Data

+ - Chinese Reward Benchmark (CRBench): [https://huggingface.co/datasets/m-a-p/COIG-P-CRM](https://huggingface.co/datasets/m-a-p/COIG-P-CRM)
+ - AlignBench: [https://github.com/THUDM/AlignBench](https://github.com/THUDM/AlignBench)

  #### Factors

+ [Add factors from paper, e.g., domain, task type]

  #### Metrics

+ [Add metrics from paper, e.g., accuracy, precision, recall]

  ### Results

+ [Add results from paper, including tables and figures if appropriate]

  #### Summary

+ [Summarize evaluation results from the paper]

+ ## Citation

  **BibTeX:**

+ ```bibtex
+ @misc{pteam2025coigphighqualitylargescalechinese,
+     title={COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values},
+     author={P Team and Siwei Wu and Jincheng Ren and Xinrun Du and Shuyue Guo and Xingwei Qu and Yiming Liang and Jie Liu and Yunwen Li and Tianyu Zheng and Boyu Feng and Huaqing Yuan and Zenith Wang and Jiaheng Liu and Wenhao Huang and Chenglin Cai and Haoran Que and Jian Yang and Yuelin Bai and Zekun Moore Wang and Zhouliang Yu and Qunshu Lin and Ding Pan and Yuchen Jiang and Tiannan Wang and Wangchunshu Zhou and Shenzhi Wang and Xingyuan Bu and Minghao Liu and Guoyin Wang and Ge Zhang and Chenghua Lin},
+     year={2025},
+     eprint={2504.05535},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL},
+     url={https://arxiv.org/abs/2504.05535},
+ }
+ ```
+
+ **APA:** [Add APA citation here based on BibTeX]