mahdin70 committed on
Commit ae20617 · verified · 1 parent: bef3c30

Update README.md

Files changed (1):
  1. README.md +261 -1
README.md CHANGED
@@ -12,4 +12,264 @@ metrics:
  - accuracy
pipeline_tag: text-classification
library_name: transformers
---

# UnixCoder-Primevul-BigVul Model Card

## Model Overview

`UnixCoder-Primevul-BigVul` is a multi-task model based on Microsoft's `unixcoder-base`, fine-tuned to detect vulnerabilities (`vul`) and classify Common Weakness Enumeration (CWE) types in code snippets. It was developed by [mahdin70](https://huggingface.co/mahdin70) and trained on a balanced dataset combining the BigVul and PrimeVul datasets. The model performs binary classification for vulnerability detection and multi-class classification for CWE identification.

- **Model Repository**: [mahdin70/UnixCoder-Primevul-BigVul](https://huggingface.co/mahdin70/UnixCoder-Primevul-BigVul)
- **Base Model**: [microsoft/unixcoder-base](https://huggingface.co/microsoft/unixcoder-base)
- **Tasks**: Vulnerability Detection (Binary), CWE Classification (Multi-class)
- **License**: MIT (assumed; adjust if different)
- **Date**: Trained and uploaded as of March 11, 2025

## Model Architecture

The model extends `unixcoder-base` with two task-specific heads:
- **Vulnerability Head**: A linear layer mapping the 768-dimensional hidden state to 2 classes (vulnerable or not).
- **CWE Head**: A linear layer mapping the 768-dimensional hidden state to 135 classes (134 CWE types plus 1 for "no CWE").

The architecture is implemented as a custom `MultiTaskUnixCoder` class in PyTorch, with the loss computed as the sum of the cross-entropy losses of the two tasks, as sketched below.
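
The actual class ships as custom code in the repository; the following is only a minimal sketch of the setup described above, assuming a first-token pooled encoder output and summed cross-entropy losses (constructor arguments and pooling choice are illustrative, not the repository's exact implementation):

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskUnixCoder(nn.Module):
    """Sketch: shared unixcoder-base encoder with a vulnerability head and a CWE head."""

    def __init__(self, base_model="microsoft/unixcoder-base", num_cwe_classes=135):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)
        hidden = self.encoder.config.hidden_size            # 768 for unixcoder-base
        self.vul_head = nn.Linear(hidden, 2)                # vulnerable / not vulnerable
        self.cwe_head = nn.Linear(hidden, num_cwe_classes)  # index 0 = "no CWE", 1..134 = CWE types

    def forward(self, input_ids, attention_mask, vul_labels=None, cwe_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # first-token ([CLS]-style) representation
        vul_logits = self.vul_head(pooled)
        cwe_logits = self.cwe_head(pooled)
        loss = None
        if vul_labels is not None and cwe_labels is not None:
            ce = nn.CrossEntropyLoss()
            # Total loss = sum of the two cross-entropy losses, as described above.
            loss = ce(vul_logits, vul_labels) + ce(cwe_logits, cwe_labels)
        return {"loss": loss, "vul_logits": vul_logits, "cwe_logits": cwe_logits}
```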

## Training Dataset

The model was trained on the `mahdin70/balanced_merged_bigvul_primevul` dataset (configuration: `10_per_commit`), which combines:
- **BigVul**: A dataset of real-world vulnerabilities mined from open-source projects.
- **PrimeVul**: A vulnerability detection dataset with more rigorously verified labels.

### Dataset Details
- **Splits**:
  - Train: 124,780 samples
  - Validation: 26,740 samples
  - Test: 26,738 samples
- **Features**:
  - `func`: Code snippet (text)
  - `vul`: Binary label (0 = non-vulnerable, 1 = vulnerable)
  - `CWE ID`: CWE identifier (e.g., CWE-89), or None for non-vulnerable samples
- **Preprocessing**:
  - CWE labels were encoded with a `LabelEncoder`; 134 unique CWE classes were identified across the dataset.
  - Non-vulnerable samples were assigned a CWE label of -1 (mapped to index 0 in the model).

The dataset is balanced to give fair representation to vulnerable and non-vulnerable samples, with a maximum of 10 samples per commit where applicable. A sketch of the label-encoding step follows.
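
The preprocessing script is not shipped with the model, so this is only a reconstruction of the encoding described in the bullets above (column names `func`, `vul`, and `CWE ID` are from the feature list; the `cwe_label` name is illustrative):

```python
from datasets import load_dataset
from sklearn.preprocessing import LabelEncoder

ds = load_dataset("mahdin70/balanced_merged_bigvul_primevul", "10_per_commit")

# Fit the encoder on the CWE IDs of vulnerable samples only (134 classes expected).
cwe_ids = sorted({c for c in ds["train"]["CWE ID"] if c is not None})
encoder = LabelEncoder().fit(cwe_ids)

def encode_cwe(example):
    # -1 for non-vulnerable samples; the model shifts labels by +1,
    # so -1 becomes index 0 ("no CWE") and encoder labels 0..133 become 1..134.
    if example["CWE ID"] is None:
        example["cwe_label"] = -1
    else:
        example["cwe_label"] = int(encoder.transform([example["CWE ID"]])[0])
    return example

ds = ds.map(encode_cwe)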

## Training Details

### Training Arguments
The model was trained with the Hugging Face `Trainer` API using the following arguments (a matching `TrainingArguments` sketch follows the list):
- **Output Directory**: `./unixcoder_multitask`
- **Evaluation Strategy**: Per epoch
- **Save Strategy**: Per epoch
- **Learning Rate**: 2e-5
- **Batch Size**: 8 (per device, for both train and eval)
- **Epochs**: 3
- **Weight Decay**: 0.01
- **Logging**: Every 10 steps, logged to `./logs`
- **W&B (Weights & Biases)**: Disabled

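As a hedged sketch, those settings map onto `TrainingArguments` roughly as follows (argument names per Transformers 4.47.0; not necessarily the exact training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./unixcoder_multitask",
    eval_strategy="epoch",           # evaluate once per epoch
    save_strategy="epoch",           # checkpoint once per epoch
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
    logging_dir="./logs",
    logging_steps=10,
    report_to="none",                # disable W&B and other loggers
)
```
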
### Training Environment
- **Hardware**: NVIDIA Tesla T4 GPU
- **Framework**: PyTorch 2.5.1+cu121, Transformers 4.47.0
- **Duration**: 6 hours, 34 minutes, 53 seconds (23,397 steps)

### Training Metrics
Validation metrics across epochs:

| Epoch | Training Loss | Validation Loss | Vul Accuracy | Vul Precision | Vul Recall | Vul F1 | CWE Accuracy |
|-------|---------------|-----------------|--------------|---------------|------------|--------|--------------|
| 1     | 0.3038        | 0.4997          | 0.9570       | 0.8082        | 0.5379     | 0.6459 | 0.1887       |
| 2     | 0.6092        | 0.4859          | 0.9587       | 0.8118        | 0.5641     | 0.6657 | 0.2964       |
| 3     | 0.4261        | 0.5090          | 0.9585       | 0.8114        | 0.5605     | 0.6630 | 0.3323       |

- **Final Training Loss**: 0.4430 (averaged over all steps)

## Evaluation

The model was evaluated on the test split (26,738 samples) with the following results:
- **Vulnerability Detection**:
  - Accuracy: 0.9571
  - Precision: 0.7947
  - Recall: 0.5437
  - F1 Score: 0.6457
- **CWE Classification** (on vulnerable samples):
  - Accuracy: 0.3288

The model excels at identifying non-vulnerable code (high accuracy) but shows moderate recall for vulnerabilities and lower CWE classification accuracy, indicating room for improvement in CWE prediction. A sketch of the metric computation is shown below.
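
A sketch of how these numbers are typically computed, assuming scikit-learn and that CWE accuracy is restricted to ground-truth-vulnerable samples (function and argument names are illustrative):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(vul_labels, vul_preds, cwe_labels, cwe_preds):
    """Sketch of the reported metrics; not the repository's exact evaluation code."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        vul_labels, vul_preds, average="binary"
    )
    metrics = {
        "vul_accuracy": accuracy_score(vul_labels, vul_preds),
        "vul_precision": precision,
        "vul_recall": recall,
        "vul_f1": f1,
    }
    # CWE accuracy is computed only on samples that are actually vulnerable.
    mask = np.asarray(vul_labels) == 1
    metrics["cwe_accuracy"] = accuracy_score(
        np.asarray(cwe_labels)[mask], np.asarray(cwe_preds)[mask]
    )
    return metrics
```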

## Usage

### Installation
Install the required libraries:
```bash
pip install transformers torch datasets huggingface_hub
```

### Sample Code Snippet
Below is an example of how to use the model for inference on a code snippet:

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
model = AutoModel.from_pretrained("mahdin70/UnixCoder-Primevul-BigVul", trust_remote_code=True)
model.eval()

# Example code snippet
code = """
void example(char *input) {
    char buffer[10];
    strcpy(buffer, input);
}
"""

# Tokenize input
inputs = tokenizer(code, return_tensors="pt", padding="max_length", truncation=True, max_length=512)

# Move to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}

# Get predictions
with torch.no_grad():
    outputs = model(**inputs)
    vul_logits = outputs["vul_logits"]
    cwe_logits = outputs["cwe_logits"]

# Vulnerability prediction
vul_pred = torch.argmax(vul_logits, dim=1).item()
print(f"Vulnerability: {'Vulnerable' if vul_pred == 1 else 'Not Vulnerable'}")

# CWE prediction (if vulnerable)
if vul_pred == 1:
    # Subtract 1: index 0 means "no CWE", so indices 1..134 map back to encoder labels 0..133
    cwe_pred = torch.argmax(cwe_logits, dim=1).item() - 1
    print(f"Predicted CWE: {cwe_pred if cwe_pred >= 0 else 'None'}")
```

### Output Example

```bash
Vulnerability: Vulnerable
Predicted CWE: 120  # encoder index; may correspond to CWE-120 (Classic Buffer Overflow), depending on the label encoding
```

## Notes

- The CWE prediction is an integer index (0 to 133). To map it to a specific CWE ID (e.g., CWE-120), you need the `LabelEncoder` used during training, which can be rebuilt from the dataset preprocessing step (see the sketch below).
- Pass `trust_remote_code=True` when loading, as the model uses custom code from the repository.

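A hedged sketch of that mapping, assuming the encoder is re-fit exactly as during preprocessing (sorted unique CWE IDs, which is the order `LabelEncoder` produces):

```python
from datasets import load_dataset
from sklearn.preprocessing import LabelEncoder

# Re-fit the encoder the same way as in preprocessing (sorted unique CWE IDs).
ds = load_dataset("mahdin70/balanced_merged_bigvul_primevul", "10_per_commit", split="train")
cwe_ids = sorted({c for c in ds["CWE ID"] if c is not None})
encoder = LabelEncoder().fit(cwe_ids)

cwe_pred = 120  # index from the inference snippet above
print(encoder.inverse_transform([cwe_pred])[0])  # e.g., a "CWE-..." string
```
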
## Limitations
- **CWE Accuracy**: The model struggles with precise CWE classification (32.88% accuracy), likely due to class imbalance or the difficulty of distinguishing similar CWE types.
- **Recall**: Moderate recall (54.37%) for vulnerability detection means some vulnerable samples are missed.
- **Generalization**: Trained only on BigVul and PrimeVul, so performance may vary on out-of-domain codebases.

## Future Improvements
- Increase training epochs or dataset size for better CWE accuracy.
- Experiment with class weighting to address CWE imbalance.
- Fine-tune on additional datasets for broader generalization.