Dataset Card for MasakhaNER-X
Dataset Summary
MasakhaNER-X is an aggregation of the MasakhaNER 1.0 and MasakhaNER 2.0 datasets, covering 20 African languages. Unlike the original datasets, it is not in CoNLL format: the input is the original raw text, and the output is byte-level span annotations.
Example: {"example_id": "test-00015916", "language": "pcm", "text": "By Bashir Ibrahim Hassan", "spans": [{"start_byte": 3, "limit_byte": 24, "label": "PER"}], "target": "PER: Bashir Ibrahim Hassan"}
MasakhaNER-X is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for twenty African languages:
- Amharic
- Ghomala
- Bambara
- Ewe
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Chichewa
- Nigerian-Pidgin
- chiShona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
Train, validation, and test splits are available for all twenty languages.
For more details, see https://aclanthology.org/2022.emnlp-main.298
Supported Tasks and Leaderboards
named-entity-recognition: Performance on this task is measured with Span F1 (higher is better). A predicted named entity counts as correct only if it is an exact match of the corresponding entity in the data.
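As an illustration, exact-match Span F1 for a single example could be computed as in the sketch below (the official evaluation aggregates true positives, predictions, and gold spans over the whole test set, which this per-example sketch does not do):

```python
# Illustrative sketch of exact-match Span F1; not the official evaluation
# script. A predicted span counts as correct only if its
# (start_byte, limit_byte, label) triple exactly matches a gold span.
def span_f1(gold_spans, pred_spans):
    gold = {(s["start_byte"], s["limit_byte"], s["label"]) for s in gold_spans}
    pred = {(s["start_byte"], s["limit_byte"], s["label"]) for s in pred_spans}
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```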
Languages
There are twenty languages available:
- Amharic (am)
- Ghomala (bbj)
- Bambara (bm)
- Ewe (ee)
- Hausa (ha)
- Igbo (ig)
- Kinyarwanda (rw)
- Luganda (lg)
- Luo (luo)
- Mossi (mos)
- Chichewa (ny)
- Nigerian-Pidgin (pcm)
- chiShona (sn)
- Swahili (sw)
- Setswana (tn)
- Twi (tw)
- Wolof (wo)
- Xhosa (xh)
- Yoruba (yo)
- Zulu (zu)
Dataset Structure
Data Instances
The examples look like this for Nigerian-Pidgin:

```python
from datasets import load_dataset

# Specify the language code as the configuration name.
data = load_dataset('masakhaner-x', 'pcm')
```

A data point consists of the raw text together with byte-level span annotations and a serialized target string:

```python
{'id': '0',
 'text': "Most of de people who dey opposed to Prez Akufo-Addo en decision say within 3 weeks of lockdown, total number of cases for Ghana rise from around 100 catch 1024.",
 'spans': [{"start_byte": 42, "limit_byte": 52, "label": "PER"}, {"start_byte": 76, "limit_byte": 83, "label": "DATE"}, {"start_byte": 123, "limit_byte": 128, "label": "LOC"}],
 'target': "PER: Akufo-Addo $$ DATE: 3 weeks $$ LOC: Ghana"}
```
Data Fields
- id: id of the sample
- text: sentence containing the entities
- spans: byte-level details (start_byte, limit_byte, label) of each named entity in the sentence
- target: the named entities and their values; each entity is separated by '$$'
The NER tags correspond to this list: "PER", "ORG", "LOC", and "DATE".
Data Splits
For all languages, there are three splits: train, validation, and test.
The splits have the following sizes:
| Language | train | validation | test |
|---|---|---|---|
| Amharic | 1441 | 250 | 500 |
| Ghomala | 1441 | 483 | 966 |
| Bambara | 1441 | 638 | 1000 |
| Ewe | 1441 | 501 | 1000 |
| Hausa | 1441 | 1000 | 1000 |
| Igbo | 1441 | 319 | 638 |
| Kinyarwanda | 1441 | 1000 | 1000 |
| Luganda | 1441 | 906 | 1000 |
| Luo | 644 | 92 | 185 |
| Mossi | 1441 | 648 | 1000 |
| Chichewa | 1441 | 893 | 1000 |
| Nigerian-Pidgin | 1441 | 1000 | 1000 |
| chiShona | 1441 | 887 | 1000 |
| Swahili | 1441 | 1000 | 1000 |
| Setswana | 1441 | 499 | 996 |
| Twi | 1441 | 605 | 1000 |
| Wolof | 1441 | 923 | 1000 |
| Xhosa | 1441 | 817 | 1000 |
| Yoruba | 1441 | 1000 | 1000 |
| Zulu | 1441 | 836 | 1000 |
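As a quick sanity check, one can load a single language and compare the split sizes against the table (a sketch, assuming the repo id from the loading example in Data Instances):

```python
from datasets import load_dataset

# Compare the loaded split sizes for one language against the table above.
data = load_dataset('masakhaner-x', 'luo')
for split in ('train', 'validation', 'test'):
    print(split, len(data[split]))  # expected: 644, 92, 185
```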
Licensing Information
The data is released under a Creative Commons 4.0 Non-Commercial license.
Citation Information
The dataset can be cited as follows:

```bibtex
@inproceedings{adelani-etal-2022-masakhaner,
title = "{M}asakha{NER} 2.0: {A}frica-centric Transfer Learning for Named Entity Recognition",
author = "Adelani, David and
Neubig, Graham and
Ruder, Sebastian and
Rijhwani, Shruti and
Beukman, Michael and
Palen-Michel, Chester and
Lignos, Constantine and
Alabi, Jesujoba and
Muhammad, Shamsuddeen and
Nabende, Peter and
Dione, Cheikh M. Bamba and
Bukula, Andiswa and
Mabuya, Rooweither and
Dossou, Bonaventure F. P. and
Sibanda, Blessing and
Buzaaba, Happy and
Mukiibi, Jonathan and
Kalipe, Godson and
Mbaye, Derguene and
Taylor, Amelia and
Kabore, Fatoumata and
Emezue, Chris Chinenye and
Aremu, Anuoluwapo and
Ogayo, Perez and
Gitau, Catherine and
Munkoh-Buabeng, Edwin and
Memdjokam Koagne, Victoire and
Tapo, Allahsera Auguste and
Macucwa, Tebogo and
Marivate, Vukosi and
Elvis, Mboning Tchiaze and
Gwadabe, Tajuddeen and
Adewumi, Tosin and
Ahia, Orevaoghene and
Nakatumba-Nabende, Joyce and
Mokono, Neo Lerato and
Ezeani, Ignatius and
Chukwuneke, Chiamaka and
Oluwaseun Adeyemi, Mofetoluwa and
Hacheme, Gilles Quentin and
Abdulmumin, Idris and
Ogundepo, Odunayo and
Yousuf, Oreen and
Moteu, Tatiana and
Klakow, Dietrich",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.298",
pages = "4488--4508",
abstract = "African languages are spoken by over a billion people, but they are under-represented in NLP research and development. Multiple challenges exist, including the limited availability of annotated training and evaluation datasets as well as the lack of understanding of which settings, languages, and recently proposed methods like cross-lingual transfer will be effective. In this paper, we aim to move towards solutions for these challenges, focusing on the task of named entity recognition (NER). We present the creation of the largest to-date human-annotated NER dataset for 20 African languages. We study the behaviour of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, empirically demonstrating that the choice of source transfer language significantly affects performance. While much previous work defaults to using English as the source language, our results show that choosing the best transfer language improves zero-shot F1 scores by an average of 14{\%} over 20 languages as compared to using English.",
}
```