Update README.md

We follow the [MARBLE](https://github.com/a43992899/MARBLE) protocol under const…

## ROC-AUC and mAP

| Model | Turkish-makam | Hindustani | Carnatic | Lyra | FMA | MTAT | **Avg.** |
|------------------------|-----------------|----------------|--------------|------------|-----------|------------|------------|
| **MERT-v1-95M** | 83.2 / 53.3 | 82.4 / 52.9 | 74.9 / 39.7 | 85.7 / 56.5 | 90.7 / 48.1 | **89.6** / 35.9 | 66.1 |
| **CultureMERT-95M** | **89.6 / 60.6** | **88.2 / 63.5** | **79.2 / 43.1** | **86.9** / **56.7** | 90.7 / 48.1 | 89.4 / 35.9 | **69.3** |

## Micro-F1 and Macro-F1

| Model | Turkish-makam | Hindustani | Carnatic | Lyra | FMA | MTAT | **Avg.** |
|--------------------|----------------|------------|----------|------|------|------|----------|
| **MERT-v1-95M** | 73.0 / 38.9 | 71.1 / 33.2 | 80.1 / 30.0 | 72.4 / 42.6 | 57.0 / **36.9** | **35.7** / 21.2 | 49.3 |
| **CultureMERT-95M** | **77.4 / 45.8** | **77.8 / 50.4** | **82.7 / 32.5** | **73.1** / **43.1** | **58.3** / 36.6 | 35.6 / **22.9** | **52.9** |

**CultureMERT-95M** outperforms the original **MERT-v1-95M** by an average of **4.43%** in ROC-AUC across non-Western traditions, with consistent improvements of **5.4% in mAP**, **3.6% in Micro-F1**, and **6.8% in Macro-F1**, while exhibiting minimal forgetting on Western datasets.
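
The averaged gains quoted above can be reproduced from the table values themselves. Below is a minimal sanity-check sketch (illustrative Python; the variable names and the metric/dataset layout are assumptions for this example, not code from the repository) that averages the per-dataset deltas over the four non-Western traditions (Turkish-makam, Hindustani, Carnatic, Lyra); rounding the results recovers the reported 4.43 (ROC-AUC), 5.4 (mAP), 3.6 (Micro-F1), and 6.8 (Macro-F1):

```python
# Sanity-check the reported average gains on the four non-Western datasets.
# Each table cell reads "first metric / second metric" (e.g. ROC-AUC / mAP).
# Scores are listed in the order: Turkish-makam, Hindustani, Carnatic, Lyra.

mert_v1 = {
    "ROC-AUC":  [83.2, 82.4, 74.9, 85.7],
    "mAP":      [53.3, 52.9, 39.7, 56.5],
    "Micro-F1": [73.0, 71.1, 80.1, 72.4],
    "Macro-F1": [38.9, 33.2, 30.0, 42.6],
}
culture_mert = {
    "ROC-AUC":  [89.6, 88.2, 79.2, 86.9],
    "mAP":      [60.6, 63.5, 43.1, 56.7],
    "Micro-F1": [77.4, 77.8, 82.7, 73.1],
    "Macro-F1": [45.8, 50.4, 32.5, 43.1],
}

for metric, baseline in mert_v1.items():
    # Per-dataset improvement of CultureMERT-95M over MERT-v1-95M.
    deltas = [c - b for c, b in zip(culture_mert[metric], baseline)]
    avg_gain = sum(deltas) / len(deltas)
    print(f"{metric}: +{avg_gain:.2f}")

# Prints (up to floating-point rounding in the last digit):
#   ROC-AUC: +4.43, mAP: +5.38, Micro-F1: +3.60, Macro-F1: +6.78
# which round to the reported 4.43 / 5.4 / 3.6 / 6.8.
```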