---
pipeline_tag: translation
language:
  - multilingual
  - en
  - am
  - ar
  - so
  - sw
  - pt
  - af
  - fr
  - zu
  - mg
  - ha
  - sn
  - arz
  - ny
  - ig
  - xh
  - yo
  - st
  - rw
  - tn
  - ti
  - ts
  - om
  - run
  - nso
  - ee
  - ln
  - tw
  - pcm
  - gaa
  - loz
  - lg
  - guw
  - bem
  - efi
  - lue
  - lua
  - toi
  - ve
  - tum
  - tll
  - iso
  - kqn
  - zne
  - umb
  - mos
  - tiv
  - lu
  - ff
  - kwy
  - bci
  - rnd
  - luo
  - wal
  - ss
  - lun
  - wo
  - nyk
  - kj
  - ki
  - fon
  - bm
  - cjk
  - din
  - dyu
  - kab
  - kam
  - kbp
  - kr
  - kmb
  - kg
  - nus
  - sg
  - taq
  - tzm
  - nqo

license: apache-2.0
---

This is an improved version of the [AfriCOMET-QE-STL (single-task quality estimation)](https://github.com/masakhane-io/africomet) evaluation model: it receives a source sentence and a translation, and returns a score that reflects the quality of the translation relative to the source.
Unlike the original AfriCOMET-QE-STL, this QE model is built on an improved African-enhanced encoder, [afro-xlmr-large-76L](https://huggingface.co/Davlan/afro-xlmr-large-76L), which leads to better performance on quality estimation for African-language machine translation, as verified in the WMT 2024 Metrics Shared Task.

# Paper

[AfriMTE and AfriCOMET: Empowering COMET to Embrace Under-resourced African Languages](https://arxiv.org/abs/2311.09828) (Wang et al., arXiv 2023)

# License

Apache-2.0

# Usage (AfriCOMET)

Using this model requires the `unbabel-comet` package to be installed:

```bash
pip install --upgrade pip  # ensures that pip is current 
pip install unbabel-comet
```

Then you can use it through the COMET CLI:

```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt --model masakhane/africomet-qe-stl
```

Or using Python: 
```python
from comet import download_model, load_from_checkpoint

model_path = download_model("masakhane/africomet-qe-stl")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "Nadal sàkọọ́lẹ̀ ìforígbárí o ní àmì méje sóódo pẹ̀lú ilẹ̀ Canada.",
        "mt": "Nadal's head to head record against the Canadian is 7–2.",
    },
    {
        "src": "Laipe yi o padanu si Raoniki ni ere Sisi Brisbeni.",
        "mt": "He recently lost against Raonic in the Brisbane Open.",
    }
]
model_output = model.predict(data, batch_size=8, gpus=1)  # set gpus=0 to run on CPU
print(model_output)
```

`model.predict` returns a `Prediction` object: `model_output.scores` holds the per-segment scores and `model_output.system_score` their corpus-level average.

# Intended uses

Our model is intended to be used for **MT quality estimation**.

Given a source sentence and a translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.
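A common downstream use of these per-segment scores is to filter out low-quality translations. A minimal sketch, assuming a hypothetical list of scores already extracted from `model_output.scores` (the hypothesis strings and the 0.5 threshold are purely illustrative, not recommendations):

```python
# Hypothetical per-segment QE scores, as found in model_output.scores;
# higher means better translation quality on a 0-1 scale.
scores = [0.81, 0.34, 0.67, 0.12]
translations = ["hyp A", "hyp B", "hyp C", "hyp D"]

THRESHOLD = 0.5  # illustrative cutoff; tune it for your use case

# Keep only translations whose QE score clears the threshold.
kept = [t for t, s in zip(translations, scores) if s >= THRESHOLD]
print(kept)  # ['hyp A', 'hyp C']
```

The appropriate threshold depends on the language pair and on how costly false accepts are in your pipeline, so it should be calibrated on held-out data rather than fixed a priori.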