Update README.md
README.md
We compare two distinct approaches:
| Category     | Global (NL + CL) | NL            | CL            |
|:------------:|:----------------:|:-------------:|:-------------:|
| **Harmful**  | 0.83             | 0.93          | 0.72          |
| **Low**      | 0.64             | 0.76          | 0.53          |
| **Medium**   | 0.63             | 0.76          | 0.52          |
| **High**     | 0.79             | 0.81          | 0.76          |
| **Accuracy** | **0.73**         | **0.82**      | **0.63**      |
## Key Performance Metrics:
- **Unified Model (NL + CL)**:
  - Overall accuracy: ~73%
  - High reliability on harmful data (f1-score: 0.86)
- **Separate Models**:
  - **Natural Language (NL)**: ~82% accuracy
    - Excellent performance on harmful data (f1-score: 0.93)
  - **Code Language (CL)**: ~63% accuracy
    - Good detection of harmful data (f1-score: 0.72)
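The per-category f1-scores and overall accuracy reported above can be sketched in pure Python. The labels match the table; the toy predictions below are purely illustrative, not the models' actual outputs:

```python
# Sketch of the reported metrics: per-category f1-score and overall accuracy.
# The y_true / y_pred values are toy data for illustration only.

def f1_per_class(y_true, y_pred, labels):
    """Compute the f1-score separately for each label."""
    scores = {}
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[lab] = (2 * precision * recall / (precision + recall)
                       if precision + recall else 0.0)
    return scores

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

labels = ["Harmful", "Low", "Medium", "High"]
y_true = ["Harmful", "Low", "Medium", "High", "Harmful", "Low"]
y_pred = ["Harmful", "Low", "High", "High", "Harmful", "Medium"]

print(f1_per_class(y_true, y_pred, labels))
print(accuracy(y_true, y_pred))
```

The same numbers can be obtained with `sklearn.metrics.classification_report`; the pure-Python version is shown only to make the definitions explicit.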
## Training Dataset:
- Public dataset available: [TempestTeam/dataset-quality](https://huggingface.co/datasets/TempestTeam/dataset-quality)
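The dataset can be pulled with the Hugging Face `datasets` library; the available splits and columns are not described here, so check the dataset card before relying on any particular field:

```python
# Hypothetical sketch: load the public training dataset from the Hub.
# Requires `pip install datasets` and network access.
from datasets import load_dataset

dataset = load_dataset("TempestTeam/dataset-quality")
print(dataset)  # lists the available splits and their columns
```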