---
license: mit
datasets:
- mahdin70/cwe_enriched_balanced_bigvul_primevul
metrics:
- accuracy
- precision
- recall
- f1
base_model:
- microsoft/codebert-base
library_name: transformers
---

# CodeBERT-VulnCWE - Fine-Tuned CodeBERT for Vulnerability and CWE Classification

## Model Overview
This model is a fine-tuned version of **microsoft/codebert-base**, trained on a curated, CWE-enriched dataset for vulnerability detection and CWE classification. It predicts whether a given code snippet is vulnerable and, if vulnerable, identifies the specific CWE ID associated with it.

## Dataset
The model was fine-tuned on the dataset [mahdin70/cwe_enriched_balanced_bigvul_primevul](https://huggingface.co/datasets/mahdin70/cwe_enriched_balanced_bigvul_primevul), which contains both vulnerable and non-vulnerable code samples enriched with CWE metadata.
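A minimal sketch for inspecting the dataset with the `datasets` library; the split names and column layout are not documented here, so check the printed structure before building preprocessing on top of it:

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub
ds = load_dataset("mahdin70/cwe_enriched_balanced_bigvul_primevul")

# Inspect the available splits and columns first
print(ds)
```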

### CWE IDs Covered:
1. **CWE-119**: Improper Restriction of Operations within the Bounds of a Memory Buffer
2. **CWE-20**: Improper Input Validation
3. **CWE-125**: Out-of-bounds Read
4. **CWE-399**: Resource Management Errors
5. **CWE-200**: Information Exposure
6. **CWE-787**: Out-of-bounds Write
7. **CWE-264**: Permissions, Privileges, and Access Controls
8. **CWE-416**: Use After Free
9. **CWE-476**: NULL Pointer Dereference
10. **CWE-190**: Integer Overflow or Wraparound
11. **CWE-189**: Numeric Errors
12. **CWE-362**: Concurrent Execution using Shared Resource with Improper Synchronization

---

## Model Training
The model was trained for **3 epochs** with the following configuration (see the sketch after this list for how it maps onto `TrainingArguments`):
- **Learning Rate**: 2e-5
- **Weight Decay**: 0.01
- **Batch Size**: 8
- **Optimizer**: AdamW
- **Scheduler**: Linear
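The exact training script is not included in this repository; the sketch below shows how the configuration above maps onto `transformers` `TrainingArguments`, assuming a standard Trainer setup (the output directory is a hypothetical placeholder):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above. AdamW and a linear LR schedule
# are the transformers defaults, so they need no explicit flags.
args = TrainingArguments(
    output_dir="codebert-vulncwe",  # hypothetical output path
    num_train_epochs=3,
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=8,
    eval_strategy="epoch",  # named evaluation_strategy on transformers < 4.41
)
```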

### Training Loss and Validation Metrics Per Epoch:
| Epoch | Training Loss | Validation Loss | Vul Accuracy | Vul Precision | Vul Recall | Vul F1 | CWE Accuracy |
|-------|---------------|-----------------|--------------|---------------|------------|--------|--------------|
| 1 | 1.4663 | 1.4988 | 0.7887 | 0.8526 | 0.5498 | 0.6685 | 0.2932 |
| 2 | 1.2107 | 1.3474 | 0.8038 | 0.8493 | 0.6002 | 0.7034 | 0.3688 |
| 3 | 1.1885 | 1.3096 | 0.8034 | 0.8020 | 0.6541 | 0.7205 | 0.3963 |

#### Training Summary:
- **Total Training Steps**: 2958
- **Training Loss**: 1.3862
- **Training Time**: 3058.7 seconds (~51 minutes)
- **Training Speed**: 15.47 samples per second
- **Steps Per Second**: 0.967
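The table above combines binary vulnerability metrics with multi-class CWE accuracy. Below is a hedged sketch of how such metrics can be computed with scikit-learn; the function name and the separate label arrays are assumptions, not the repository's actual evaluation code:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_task_metrics(vul_logits, vul_labels, cwe_logits, cwe_labels):
    """Binary vulnerability metrics plus multi-class CWE accuracy."""
    vul_preds = np.argmax(vul_logits, axis=1)
    cwe_preds = np.argmax(cwe_logits, axis=1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        vul_labels, vul_preds, average="binary"
    )
    return {
        "vul_accuracy": accuracy_score(vul_labels, vul_preds),
        "vul_precision": precision,
        "vul_recall": recall,
        "vul_f1": f1,
        "cwe_accuracy": accuracy_score(cwe_labels, cwe_preds),
    }
```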

## How to Use the Model
```python
from transformers import AutoModel, AutoTokenizer

# The model uses a custom multi-task architecture, so trust_remote_code is required
model = AutoModel.from_pretrained("mahdin70/CodeBERT-VulnCWE", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

# Example: an out-of-bounds write on a stack-allocated array
code_snippet = "int main() { int arr[10]; arr[11] = 5; return 0; }"
inputs = tokenizer(code_snippet, return_tensors="pt")
outputs = model(**inputs)

# The model returns separate logits for each task
vul_logits = outputs["vul_logits"]
cwe_logits = outputs["cwe_logits"]

vul_pred = vul_logits.argmax(dim=1).item()
cwe_pred = cwe_logits.argmax(dim=1).item()

print(f"Vulnerability: {'Vulnerable' if vul_pred == 1 else 'Non-vulnerable'}")
print(f"CWE ID: {cwe_pred if vul_pred == 1 else 'N/A'}")
```
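Note that `cwe_pred` is a class index rather than a CWE number. Below is a hypothetical mapping back to CWE IDs, assuming the label order follows the twelve CWEs listed above; verify it against the model's config before relying on it:

```python
# Hypothetical label order; the true mapping comes from the dataset's
# label encoding and should be checked against the model config.
CWE_LABELS = [
    "CWE-119", "CWE-20", "CWE-125", "CWE-399", "CWE-200", "CWE-787",
    "CWE-264", "CWE-416", "CWE-476", "CWE-190", "CWE-189", "CWE-362",
]

if vul_pred == 1:
    print(f"Predicted CWE: {CWE_LABELS[cwe_pred]}")
```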

## Limitations and Future Improvements
- The model achieves a CWE classification accuracy of 39.63% on the validation set, leaving significant room for improvement. Advanced architectures, better data balancing, or additional pretraining could enhance performance.
- The vulnerability detection F1-score (72.05% on validation) is moderate and could be improved with further tuning or a larger dataset.
- The model may struggle with edge cases or CWEs that are under-represented in the training data.
- Test-set evaluation metrics are pending; running the model on the test set will give a clearer picture of its generalization.

## Notes
- Ensure the `trust_remote_code=True` flag is used when loading the model, as it relies on custom code for the `MultiTaskCodeBERT` architecture.
- The model expects input code snippets tokenized with the CodeBERT tokenizer (`microsoft/codebert-base`).
- For best results, preprocess code snippets consistently with the training dataset (e.g., truncate to a maximum length of 512 tokens, as shown below).
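A minimal tokenization sketch matching that last note; the truncation settings are assumptions based on the 512-token limit mentioned above:

```python
# Tokenize with the same length limit as training; truncation guards
# against snippets longer than the 512-token window.
inputs = tokenizer(
    code_snippet,
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
outputs = model(**inputs)
```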