---
license: cc-by-nc-sa-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: source_normalized
    sequence: string
  - name: source_diacritics
    sequence: string
  - name: tag_oblubienica
    sequence: string
  - name: tag_biblehub
    sequence: string
  - name: target_pl
    sequence: string
  - name: target_en
    sequence: string
  - name: book
    dtype: int64
  - name: chapter
    dtype: int64
  - name: verse
    dtype: int64
  - name: book_name_pl
    dtype: string
  - name: book_name_en
    dtype: string
  splits:
  - name: train
    num_bytes: 8491619
    num_examples: 6352
  - name: validation
    num_bytes: 1062996
    num_examples: 794
  - name: test
    num_bytes: 1066036
    num_examples: 794
  download_size: 2799395
  dataset_size: 10620651
tags:
- interlinear-translation
task_categories:
- translation
language:
- en
- pl
pretty_name: Interlinear Translations of the Greek New Testament
---

# Dataset Card for Ancient Greek Interlinear Translations Dataset

This dataset provides word-level aligned interlinear translations of the New Testament from Ancient Greek into English and Polish, with morphological tags sourced from Oblubienica (https://biblia.oblubienica.pl) and BibleHub (https://biblehub.com/interlinear). See https://github.com/mrapacz/loreslm-interlinear-translation for more details.

## Dataset Details

### Dataset Description

The dataset contains interlinear translations in which each Greek word is paired with its corresponding Polish and English translations, along with morphological tags from two different annotation systems. We applied a set of heuristics to align the corpora at the word level. The alignment process achieved over 99% word-matching accuracy, with unmatched words excluded. We then trimmed all verses so that the least memory-efficient models tested in our research could encode them within the chosen limit of 512 tokens.

- **Curated by:** Maciej Rapacz
- **Language(s):** Ancient Greek, English, Polish
- **License:** CC BY-NC-SA 4.0
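For illustration, the 512-token constraint can be checked along these lines (a minimal sketch, assuming the mT5-base tokenizer and plain whitespace joining of a verse's words; the exact input formatting used in the experiments may differ):

```python
from transformers import AutoTokenizer

# Illustrative choice: mT5-base is one of the models evaluated in the
# accompanying paper; which tokenizer governed the trimming is an assumption.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")

def fits_budget(words: list[str], max_tokens: int = 512) -> bool:
    """Return True if the whitespace-joined verse encodes within max_tokens."""
    return len(tokenizer(" ".join(words))["input_ids"]) <= max_tokens
```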
### Dataset Sources

- **Repository:** https://huggingface.co/datasets/mrapacz/greek-interlinear-translations
- **Source Texts:**
  - English interlinear translation from BibleHub (NA27 critical edition) - https://biblehub.com/interlinear
  - Polish interlinear translation from Oblubienica (NA28 critical edition) - https://biblia.oblubienica.pl

## Dataset Structure

The dataset is divided into:

- Training: 6,352 verses (80%)
- Validation: 794 verses (10%)
- Test: 794 verses (10%)

Each entry contains:

- `source_diacritics`: Greek text with diacritics (BibleHub source)
- `source_normalized`: Normalized Greek text (lowercase, no diacritics)
- `tag_biblehub`: BibleHub morphological tags
- `tag_oblubienica`: Oblubienica morphological tags
- `target_pl`: Polish translation sourced from Oblubienica
- `target_en`: English translation sourced from BibleHub
- `book`: Book number
- `chapter`: Chapter number
- `verse`: Verse number
- `book_name_pl`: Book name in Polish
- `book_name_en`: Book name in English

A loading sketch illustrating these fields follows the citation below.

## Dataset Card Authors

Maciej Rapacz

## Citation

```bibtex
@inproceedings{rapacz-smywinski-pohl-2025-low,
    title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
    author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander",
    editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha",
    booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.loreslm-1.11/",
    pages = "145--165",
    abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text's syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
```
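## Usage

A minimal loading sketch (assuming the Hugging Face `datasets` library; split and field names as declared in the dataset configuration above):

```python
from datasets import load_dataset

# Load the train/validation/test splits from the Hub.
ds = load_dataset("mrapacz/greek-interlinear-translations")

# Each example holds word-aligned parallel lists, so zipping the
# columns reconstructs the interlinear rows of a verse.
ex = ds["train"][0]
for grc, en, pl, tag in zip(
    ex["source_normalized"], ex["target_en"], ex["target_pl"], ex["tag_biblehub"]
):
    print(f"{grc}\t{en}\t{pl}\t{tag}")

print(ex["book_name_en"], ex["chapter"], ex["verse"])
```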
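The relationship between `source_diacritics` and `source_normalized` can be approximated as below (a sketch of one plausible normalization; the exact preprocessing used to build the dataset may differ):

```python
import unicodedata

def normalize_greek(word: str) -> str:
    """Strip diacritics and lowercase - one plausible way to derive
    source_normalized from source_diacritics (an assumption, not the
    dataset's documented pipeline)."""
    decomposed = unicodedata.normalize("NFD", word)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return stripped.lower()

print(normalize_greek("Βίβλος"))  # -> βιβλος
```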