Update README.md
The translation process for the ARC_Challenge_Swahili dataset involved two main steps:

### Machine Translation:

1. The initial translation from English to Swahili was performed using the SeamlessM4TModel translation model.

   * The following parameters were used for the translation (a sketch of the assumed model and tokenizer setup appears after this list):

   ```python
   inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024).to(device)
   outputs = model.generate(**inputs, tgt_lang=dest_lang)
   translation = tokenizer.batch_decode(outputs, skip_special_tokens=True)
   ```

2. Human Verification and Annotation:

   * After the initial machine translation, the translations were passed through GPT-3.5 for verification. This step involved checking the quality of the translations and identifying any that were not up to standard (an illustrative sketch of such a check also follows this list).
   * Human translators reviewed and annotated the translations flagged by GPT-3.5 as problematic to ensure accuracy and naturalness in Swahili.

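The translation snippet above assumes that `tokenizer`, `model`, `device`, and `dest_lang` are already defined, and the card does not say how they were created. The following is a minimal sketch of one plausible setup with the Hugging Face `transformers` library; the checkpoint name, the text-only `SeamlessM4TForTextToText` class, and the Swahili language code `swh` are assumptions rather than documented choices.

```python
# Hypothetical setup for the translation snippet shown above.
# The checkpoint, model class, and language code are assumptions, not documented choices.
import torch
from transformers import AutoTokenizer, SeamlessM4TForTextToText

device = "cuda" if torch.cuda.is_available() else "cpu"

checkpoint = "facebook/hf-seamless-m4t-medium"  # any SeamlessM4T checkpoint with a text decoder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = SeamlessM4TForTextToText.from_pretrained(checkpoint).to(device)

dest_lang = "swh"  # SeamlessM4T language code for Swahili

# The three lines from the card can then run as-is on a batch of English questions.
text = ["Which of the following is part of a plant?"]
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024).to(device)
outputs = model.generate(**inputs, tgt_lang=dest_lang)
translation = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
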
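The verification script itself is not included in the card. Purely as an illustration, a GPT-3.5 quality check of the kind described in step 2 could be scripted against the OpenAI chat API roughly as follows; the prompt wording, the `gpt-3.5-turbo` model string, and the OK/FLAG convention are assumptions.

```python
# Illustrative only: one way the GPT-3.5 quality check could have been scripted.
# The prompt, model name, and OK/FLAG convention are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def needs_human_review(english: str, swahili: str) -> bool:
    """Return True if GPT-3.5 judges the Swahili translation as not up to standard."""
    prompt = (
        "You are reviewing an English-to-Swahili translation.\n"
        f"English: {english}\n"
        f"Swahili: {swahili}\n"
        "Reply with OK if the translation is accurate and natural, otherwise reply FLAG."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("FLAG")
```

Translations for which such a check returns True would then go to the human translators described above.
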
## Supported Tasks and Leaderboards

* multiple-choice: The dataset supports multiple-choice question-answering tasks.

## Languages

The dataset is in Swahili.

## Dataset Structure

### Data Instances

* An example of a data instance:

```json
{
  "id": "example-id",
  "language": "sw",
  "question": "Ni gani kati ya zifuatazo ni sehemu ya mmea?",
  "choices": [
    {"text": "Majani", "label": "A"},
    {"text": "Jiwe", "label": "B"},
    {"text": "Ubao", "label": "C"},
    {"text": "Nondo", "label": "D"}
  ],
  "answerKey": "A"
}
```
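
For the multiple-choice task listed under Supported Tasks, an instance like the one above might be rendered into a prompt and scored against `answerKey` along these lines; the prompt template (including the Swahili cue "Jibu:") is an arbitrary choice for illustration, not part of the dataset.

```python
# Sketch: turn one instance (structured as in the example above) into a
# multiple-choice prompt and check a prediction against answerKey.
# The prompt template is an arbitrary choice, not part of the dataset.
def format_prompt(instance: dict) -> str:
    options = "\n".join(f"{c['label']}. {c['text']}" for c in instance["choices"])
    return f"{instance['question']}\n{options}\nJibu:"  # "Jibu" means "Answer"


def is_correct(instance: dict, predicted_label: str) -> bool:
    return predicted_label == instance["answerKey"]
```
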
### Data Fields

* id: Unique identifier for each question.
* language: The language of the question, Swahili (sw).
* question: The science question in Swahili.
* choices: The multiple-choice options, each with a text and a label.
* answerKey: The label of the correct answer for each question.
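
If the dataset is published on the Hugging Face Hub, it could be loaded and the fields above inspected roughly as follows; the repository id is a placeholder and the `train` split name is an assumption.

```python
# Placeholder repo id and split name -- adjust to the dataset's actual Hub identifier.
from datasets import load_dataset

dataset = load_dataset("your-org/ARC_Challenge_Swahili")

example = dataset["train"][0]
print(example["question"])
for choice in example["choices"]:
    print(choice["label"], choice["text"])
print("Answer:", example["answerKey"])
```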