TurBLiMP Evaluations
This dataset hosts the TurBLiMP evaluation results for the models in my Turkish Model Zoo.
More about the TurBLiMP benchmark:
TurBLiMP is the first Turkish benchmark of linguistic minimal pairs, designed to evaluate the linguistic abilities of monolingual and multilingual language models (LMs). This benchmark covers 16 core grammatical phenomena in Turkish, with 1,000 minimal pairs per phenomenon. Additionally, it incorporates experimental paradigms that examine model performance across different subordination strategies and word order variations.
I've modified the original evaluation script and extended it for the Turkish Model Zoo. My evaluation script can be found here.
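For readers unfamiliar with how BLiMP-style benchmarks are scored: each minimal pair counts as correct when the model assigns a higher score to the grammatical sentence than to the ungrammatical one. The snippet below is only an illustrative sketch of that idea, assuming pseudo-log-likelihood scoring with a masked LM and a made-up subject-agreement pair; it is not the actual TurBLiMP or Model Zoo evaluation script.

```python
# Illustrative sketch of minimal-pair scoring with a masked LM.
# Assumption: pseudo-log-likelihood (mask one token at a time and sum the
# log-probabilities of the true tokens); the example sentence pair below is
# hypothetical and not taken from TurBLiMP.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "dbmdz/bert-base-turkish-128k-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest of sentence), masking one token at a time."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # skip the [CLS] and [SEP] special tokens
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[input_ids[i]].item()
    return total

# A pair is counted as correct if the grammatical sentence scores higher.
good = "Ali kitabı okudu."     # hypothetical grammatical sentence
bad = "Ali kitabı okudular."   # hypothetical agreement violation
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))
```

Per-phenomenon accuracy is then simply the fraction of pairs for which this comparison comes out right.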
Results
After running the evaluation script, all results can be parsed with this notebook to print out a nice overview table (a rough sketch of this aggregation step follows the table):
Phenomenon | dbmdz/electra-small-turkish-cased-generator | dbmdz/electra-base-turkish-cased-generator | dbmdz/electra-base-turkish-mc4-cased-generator | dbmdz/electra-base-turkish-mc4-uncased-generator | dbmdz/bert-base-turkish-cased | dbmdz/bert-base-turkish-uncased | dbmdz/bert-base-turkish-128k-cased | dbmdz/bert-base-turkish-128k-uncased | dbmdz/distilbert-base-turkish-cased | dbmdz/convbert-base-turkish-cased | dbmdz/convbert-base-turkish-mc4-cased | dbmdz/convbert-base-turkish-mc4-uncased |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Anaphor Agreement | 74.1 | 94.3 | 94.3 | 92.8 | 96.7 | 97.3 | 97.3 | 97.7 | 96.9 | 58.1 | 44.3 | 44.6 |
Argument Str. Tran. | 86.6 | 99.6 | 99.4 | 98.7 | 99.7 | 99.6 | 99.8 | 99.1 | 97.5 | 51.9 | 58.1 | 51.3 |
Argument Str. Ditr. | 79.3 | 96.1 | 95.5 | 95.2 | 99.8 | 96.1 | 96.1 | 96.1 | 95.4 | 64.6 | 58.6 | 64.5 |
Binding | 70.7 | 96.2 | 91.4 | 89.6 | 99.9 | 98.5 | 97.7 | 99 | 93 | 89.1 | 49.4 | 78.4 |
Determiners | 91.8 | 99.3 | 98.2 | 99.1 | 99.9 | 100 | 99 | 99.3 | 82.9 | 0 | 0 | 0 |
Ellipsis | 10.6 | 49.7 | 46.3 | 49 | 87.4 | 73.6 | 96.6 | 87.5 | 13.6 | 54.7 | 57.8 | 67.9 |
Irregular Forms | 98.7 | 97.9 | 99 | 99.8 | 98.8 | 100 | 99.9 | 99.6 | 94.1 | 82.9 | 86.6 | 95.2 |
Island Effects | 39.1 | 35.3 | 41.8 | 44 | 49.4 | 39.8 | 60.9 | 51.2 | 47.4 | 96.7 | 99.4 | 100 |
Nominalization | 90 | 96.6 | 97 | 95.4 | 97.4 | 97 | 98.9 | 97.4 | 95.6 | 55.2 | 59.2 | 60.6 |
NPI Licensing | 90.9 | 96.1 | 95 | 98 | 98.2 | 97.6 | 97.2 | 95 | 92.1 | 82.1 | 95.6 | 71.9 |
Passives | 100 | 91.2 | 93.6 | 91.6 | 82.2 | 78.1 | 84.4 | 81.3 | 98.8 | 100 | 100 | 99 |
Quantifiers | 97.9 | 98 | 98 | 97.6 | 95.7 | 94.6 | 98 | 98.4 | 98.4 | 99 | 99 | 99 |
Relative Clauses | 79.9 | 90.7 | 92 | 91.6 | 97.7 | 97.5 | 97 | 98.5 | 92 | 53.4 | 53.7 | 56.9 |
Scrambling | 99.5 | 100 | 100 | 99.8 | 100 | 100 | 99.6 | 100 | 99.8 | 38.7 | 59.3 | 63.3 |
Subject Agreement | 82.8 | 99 | 97.2 | 96.1 | 98.3 | 99.2 | 99.1 | 98.8 | 97 | 47.7 | 43.9 | 56.4 |
Suspended Affixation | 97.5 | 99 | 99.1 | 98.8 | 100 | 100 | 100 | 100 | 100 | 25.4 | 12.8 | 23.2 |
Model Average | 80.6 | 89.9 | 89.9 | 89.8 | 93.8 | 91.8 | 95.1 | 93.7 | 87.2 | 62.5 | 61.1 | 64.5 |
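As a rough illustration of the parsing step mentioned above, the sketch below assumes each model's results are stored in a JSON file mapping phenomena to accuracies and prints them as a Markdown table with a per-model average row. The file layout is an assumption for illustration only; the actual notebook may read a different format.

```python
# Sketch: collect per-phenomenon accuracies into an overview table.
# Assumption (hypothetical): one JSON file per model in results/,
# each containing a {phenomenon: accuracy} mapping with all 16 phenomena.
import json
from pathlib import Path

results_dir = Path("results")            # hypothetical output directory
rows: dict[str, dict[str, float]] = {}   # phenomenon -> {model: accuracy}
models: list[str] = []

for result_file in sorted(results_dir.glob("*.json")):
    model_name = result_file.stem
    models.append(model_name)
    scores = json.loads(result_file.read_text())
    for phenomenon, accuracy in scores.items():
        rows.setdefault(phenomenon, {})[model_name] = accuracy

# Print a Markdown table like the one above.
print("Phenomenon | " + " | ".join(models) + " |")
print("---|" + "---|" * len(models))
for phenomenon, scores in rows.items():
    cells = " | ".join(f"{scores[m]:.1f}" for m in models)
    print(f"{phenomenon} | {cells} |")

# Per-model average across all phenomena ("Model Average" row).
averages = " | ".join(
    f"{sum(rows[p][m] for p in rows) / len(rows):.1f}" for m in models
)
print(f"Model Average | {averages} |")
```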
Summary
The TurBLiMP paper used dbmdz/bert-base-turkish-128k-uncased for evaluation, which yields strong performance. My evaluation here shows that dbmdz/bert-base-turkish-128k-cased performs even better on the TurBLiMP benchmark (95.1 vs. 93.7 model average).