Dataset Card for the Old English Universal Dependencies Treebank

This is a 55,000-word UD treebank of Old English. The texts have been retrieved from Martín Arista, Javier (ed.), et al. 2023. ParCorOEv3 [www.nerthusproject.com]. The treebank is a revised version of the dataset of Domínguez Barragán, S. 2024. Universal Dependencies of Old English. PhD Dissertation, University of La Rioja.

Dataset Description

This treebank contains 68,689 lines that annotate around 50,000 Old English words.

The texts have been retrieved from Martín Arista, Javier (ed.), Sara Domínguez Barragán, Luisa Fidalgo Allo, Laura García Fernández, Yosra Hamdoun Bghiyel, Miguel Lacalle Palacios, Raquel Mateo Mendaza, Carmen Novo Urraca, Ana Elvira Ojanguren López, Esaúl Ruíz Narbona, Roberto Torre Alonso & Raquel Vea Escarza. 2023. ParCorOEv3. An open access annotated parallel corpus Old English-English. Nerthus Project, Universidad de La Rioja, www.nerthusproject.com. The selection of texts includes both vernacular prose (chronicles, homilies, religious and legal texts) and translations from Latin (the Gospel): Orosius (OROS), St. Mark’s Gospel (MARK), Ælfric’s Catholic Homilies I (ÆHOM1), The Anglo-Saxon Chronicle A (ASCA), and the Laws (LAWS). The treebank is a revised and expanded version of the manual annotation carried out for assessing the performance of a computational model based on a Natural Language Processing library, which was the main aim of Domínguez Barragán, S. 2024. Universal Dependencies of Old English. Automatic Parsing with a Computational Model of Language. PhD Dissertation, Department of Modern Languages, University of La Rioja. Supervised by J. Martín Arista and A. E. Ojanguren López.

  • Curated by: Martín Arista, Javier (ed.), Sara Domínguez Barragán, Luisa Fidalgo Allo, Laura García Fernández, Yosra Hamdoun Bghiyel, Miguel Lacalle Palacios, Raquel Mateo Mendaza, Carmen Novo Urraca, Ana Elvira Ojanguren López, Esaúl Ruiz Narbona, Roberto Torre Alonso, Raquel Vea Escarza.
  • Funded by: Grant PID2020-119200GB-100 funded by MICIU/AEI/10.13039/501100011033.
  • Language(s) (NLP): Old English
  • License: CC BY-SA 4.0

Dataset Sources

  • Repository: [More Information Needed]
  • Paper: Martín Arista, Javier (ed.), Sara Domínguez Barragán, Luisa Fidalgo Allo, Laura García Fernández, Yosra Hamdoun Bghiyel, Miguel Lacalle Palacios, Raquel Mateo Mendaza, Carmen Novo Urraca, Ana Elvira Ojanguren López, Esaúl Ruíz Narbona, Roberto Torre Alonso & Raquel Vea Escarza. 2023. ParCorOEv3. An open access annotated parallel corpus Old English-English. Nerthus Project, Universidad de La Rioja, www.nerthusproject.com

Direct Use

Token Classification following the Universal Dependencies principles.
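
As a hedged illustration (not shipped with the dataset), the sketch below reads a CoNLL-U file and yields (form, UPOS) pairs suitable for token classification. It assumes the treebank is distributed in CoNLL-U format and uses the third-party conllu package; the file name oe_train.conllu is hypothetical.

# Minimal sketch: UPOS token-classification pairs from a CoNLL-U file.
# Assumptions: the treebank is available as CoNLL-U; "oe_train.conllu" is a hypothetical path.
from conllu import parse_incr

def iter_examples(path):
    """Yield (forms, upos_tags) for each sentence in a CoNLL-U file."""
    with open(path, encoding="utf-8") as handle:
        for sentence in parse_incr(handle):
            forms = [token["form"] for token in sentence]
            # Older versions of the conllu package expose this column as "upostag".
            tags = [token["upos"] for token in sentence]
            yield forms, tags

for forms, tags in iter_examples("oe_train.conllu"):
    print(list(zip(forms, tags)))
    break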

Dataset Structure

The dataset was divided into a training set (25,000 words) and randomly selected test and dev sets comprising 15,000 words.

Dataset Creation

Source Data

The texts are drawn from ParCorOEv3 (Martín Arista et al. 2023, www.nerthusproject.com). The selection includes both vernacular prose (chronicles, homilies, religious and legal texts) and translations from Latin (the Gospel): Orosius (OROS), St. Mark’s Gospel (MARK), Ælfric’s Catholic Homilies I (ÆHOM1), The Anglo-Saxon Chronicle A (ASCA), and the Laws (LAWS). The treebank is a revised and expanded version of the manual annotation carried out for Domínguez Barragán (2024), where it was used to assess the performance of a computational model based on a Natural Language Processing library.

Annotations

Annotation process

Apart from addressing some local issues, the revised treebank does not consider multi-word tokens. Contractions with the negative word ne attached to pronouns, verbs and adverbs are dealt with by means of Polarity=Neg, marked in the FEATS column. The lemma list is based on Martín Arista, J. et al. 2011. Nerthus. A lexical database of Old English. The initial headword list 2007-2009. Working Papers in Early English Lexicology and Lexicography 1. Nerthus Project. Universidad de La Rioja. Comment lines include a translation into Present-Day English, as can be seen in the example below.

# global.columns = ID FORM LEMMA UPOSTAG XPOSTAG FEATS HEAD DEPREL DEPS MISC

# sent_id = LAWSAF.001.11(5).001.
# text = Gif borenran wifmen ðis gelimpe, weaxe sio bot be ðam were.
# text_en = If this outrage is done to a woman of higher birth, the compensation to be paid shall increase according to the wergeld.
1	Gif	gif (CONJ)	SCONJ	subordinating conjunction	Uninflected=Yes	5	mark	_	_
2	borenran	beran(ge)	ADJ	adjective	Tense=Past|Uninflected=Yes|VerbForm=Part|Case=Dat|Gender=Masc|Number=Sing	3	amod	_	_
3	wifmen	wīfmann	NOUN	common noun	Case=Dat|Gender=Masc|Number=Sing	5	orphan	_	_
4	ðis	ðes-ðēos-ðis (DEM)	DET	demonstrative	Case=Nom|Gender=Neut|Number=Sing|PronType=Dem	5	det	_	_
5	gelimpe	gelimp	NOUN	common noun	Case=Nom|Gender=Neut|Number=Sing	7	advcl	_	SpaceAfter=No
6	,	,	PUNCT	punctuation	Uninflected=Yes	5	punct	_	_
7	weaxe	weaxan(ge)	VERB	main verb	Mood=Sub|Number=Sing|Tense=Pres|VerbForm=Fin	0	root	_	_
8	sio	se-sēo-ðæt (DEM)	DET	demonstrative-article	Case=Nom|Gender=Fem|Number=Sing|PronType=DemArt	9	det	_	_
9	bot	bōt	NOUN	common noun	Case=Nom|Gender=Fem|Number=Sing	7	nsubj	_	_
10	be	be (PREP)	ADP	adposition	Uninflected=Yes	12	case	_	_
11	ðam	se-sēo-ðæt (DEM)	DET	demonstrative-article	Case=Dat|Gender=Masc|Number=Sing|PronType=DemArt	12	det	_	_
12	were	wer ‘the legal money-equivalent of a person’s life, a man’s legal value’	NOUN	common noun	Case=Dat|Gender=Masc|Number=Sing	7	obl	_	SpaceAfter=No
13	.	.	PUNCT	punctuation	Uninflected=Yes	7	punct	_	_
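
For reference, a minimal sketch of reading such an annotated sentence with the third-party conllu package, assuming the example above is saved to a file; the path laws_example.conllu is hypothetical. It prints the Present-Day English translation from the comment lines and the Case feature from the FEATS column.

from conllu import parse

# Assumption: the annotated sentence shown above is saved verbatim in this (hypothetical) file.
with open("laws_example.conllu", encoding="utf-8") as handle:
    sentence = parse(handle.read())[0]

# Comment lines of the form "# key = value" are exposed as sentence metadata.
print(sentence.metadata["text_en"])

for token in sentence:
    feats = token["feats"] or {}   # FEATS column parsed into a dict, e.g. {"Case": "Nom", ...}
    print(token["form"], token["lemma"], token["upos"], feats.get("Case", "_"))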

Future releases

The annotation process will be extended to cover all the texts in ParCorOEv3, which contains around 300,000 words, or approximately one tenth of the written records of Old English. In its present state, ParCorOEv3 comprises 303,342 records corresponding to the following prose texts: ÆHOM (37,135 records), APOL (6,579), ASCA (15,602), ASCE (25,431), BERU (20,376), BOET (23,972), CHAR (25,887), HERB (10,863), LACN (8,901), LAWS (25,016), LEEC (11,883), MARK (12,031), MART (24,355), OROS (27,874), QUAD (4,337), SOLI (15,535) and WILL (7,565).

Citation

Domínguez Barragán, S. 2024. Universal Dependencies of Old English. Automatic Parsing with a Computational Model of Language. PhD Dissertation, Department of Modern Languages, University of La Rioja.

Dataset Card Authors

Contributors: Martín Arista, Javier; Metola, Dario

Dataset Card Contact

Contact: [email protected]
