---
pretty_name: 'Multi-EuP v2: European Parliament Debates with MEP Metadata (24 languages)'
dataset_name: multi-eup-v2
configs:
  - config_name: default
    data_files: clean_all_with_did_qid.MEP.csv
license: cc-by-4.0
multilinguality: multilingual
task_categories:
  - text-classification
  - text-retrieval
  - text-generation
language:
  - bg
  - cs
  - da
  - de
  - el
  - en
  - es
  - et
  - fi
  - fr
  - ga
  - hr
  - hu
  - it
  - lt
  - lv
  - mt
  - nl
  - pl
  - pt
  - ro
  - sk
  - sl
  - sv
size_categories:
  - 10K<n<100K
homepage: ''
repository: ''
paper: https://aclanthology.org/2024.mrl-1.23/
tags:
  - multilingual
  - european-parliament
  - political-discourse
  - metadata
  - mep
---

# Multi-EuP-v2

This dataset card documents Multi-EuP-v2, a multilingual corpus of European Parliament debate speeches enriched with Member of European Parliament (MEP) metadata and multilingual debate titles/IDs. It supports research on political text analysis, speaker-attribute prediction, stance/vote prediction, multilingual NLP, and retrieval.
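
A minimal loading sketch is shown below; the hub repository ID used here is an assumption and should be replaced with the actual path of this dataset.

```python
# Quick-start sketch: two ways to load the corpus.
# NOTE: the hub ID "Jinruiy/MultiEup-v2" is an assumption; adjust it if the
# dataset lives under a different path.
from datasets import load_dataset
import pandas as pd

# Option A: via the Hugging Face Hub (the single CSV becomes one "train" split).
ds = load_dataset("Jinruiy/MultiEup-v2", split="train")
print(ds)

# Option B: directly from the raw CSV listed under `configs` above.
df = pd.read_csv("clean_all_with_did_qid.MEP.csv")
print(df[["did", "LANGUAGE", "NAME", "PARTY", "gender"]].head())
```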

## Dataset Details

### Dataset Description

Multi-EuP-v2 aggregates 50,337 debate speeches (each a unique did) in 24 languages. Each row contains the speech text (TEXT), speaker identity (NAME, MEPID), language (LANGUAGE), political group (PARTY), country and gender of the MEP, date, video timestamps, plus multilingual debate titles title_<LANG> and per-language debate/vote linkage IDs qid_<LANG>.

- Curated by: Jinrui Yang, Fan Jiang, Timothy Baldwin
- Funded by: Melbourne Research Scholarship; LIEF HPC-GPGPU Facility (LE170100200)
- Shared by: University of Melbourne
- Language(s) (NLP): bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv (24 total)
- License: cc-by-4.0

### Dataset Sources

- Paper: https://aclanthology.org/2024.mrl-1.23/

## Uses

### Direct Use

- Text classification: predict gender, PARTY, or country from TEXT.
- Stance/vote prediction: link qid_<LANG> to external roll-call vote labels.
- Multilingual representation learning: train/evaluate models across 24 EU languages.
- Information retrieval: index TEXT and use title_*/qid_* as multilingual query anchors.
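
To make the first use case concrete, here is a minimal classification sketch (gender prediction from English speeches) using scikit-learn; column names follow the schema under Dataset Structure, and the casing of language codes is an assumption.

```python
# Sketch: predict MEP gender from speech text on the English subset.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("clean_all_with_did_qid.MEP.csv")

# Keep English speeches with a known gender label; the exact language-code
# casing ("EN" vs "en") is an assumption -- inspect df["LANGUAGE"].unique() first.
en = df[df["LANGUAGE"].str.upper().eq("EN") & df["gender"].isin(["Female", "Male"])]

X_train, X_test, y_train, y_test = train_test_split(
    en["TEXT"], en["gender"], test_size=0.2, random_state=0, stratify=en["gender"]
)

model = make_pipeline(TfidfVectorizer(max_features=50_000), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```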

### Out-of-Scope Use

- Inferring private attributes beyond public MEP metadata.
- Automated profiling for sensitive decisions.
- Misrepresenting model outputs as factual statements.

## Dataset Structure

Each row corresponds to a single speech/document.

Core fields:

- did (string) – unique speech ID
- TEXT (string) – speech text
- DATE (string/date) – debate date
- LANGUAGE (string) – language code
- NAME (string) – MEP name
- MEPID (string) – MEP ID
- PARTY (string) – political group
- country (string) – MEP's country
- gender (string) – Female, Male, or Unknown
- Additional provenance fields: PRESIDENT, TEXTID, CODICT, VOD-START, VOD-END

Multilingual metadata:

- title_<LANG> (string) – debate title in that language
- qid_<LANG> (string) – debate/vote linkage ID in that language
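
These fields allow debate titles to serve as queries over the speech texts, as in the retrieval use case above. A minimal BM25 sketch, assuming the `rank_bm25` package, whitespace tokenisation, and a `title_EN` column following the `title_<LANG>` pattern:

```python
# Sketch: BM25 retrieval over speech texts, with an English debate title as query.
import pandas as pd
from rank_bm25 import BM25Okapi

df = pd.read_csv("clean_all_with_did_qid.MEP.csv")

corpus = df["TEXT"].fillna("").tolist()
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])  # naive whitespace tokens

query = df["title_EN"].dropna().iloc[0]           # column name assumed from title_<LANG>
scores = bm25.get_scores(query.lower().split())
top = scores.argsort()[::-1][:5]                  # indices of the 5 highest-scoring speeches
print(df.iloc[top][["did", "LANGUAGE", "NAME"]])
```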

Splits: Single CSV, no predefined splits.
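
Since no splits are shipped, a reproducible holdout can be created after loading, for example:

```python
# Create a 90/10 train/test holdout from the single CSV (no predefined splits).
from datasets import load_dataset

ds = load_dataset("csv", data_files="clean_all_with_did_qid.MEP.csv", split="train")
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```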

Basic stats:

- Rows: 50,337
- Languages: 24
- Top political groups: PPE 8,869; S-D 8,468; Renew 5,313; ECR 4,130; Verts/ALE 4,001; ID 3,286; The Left 2,951; NI 2,539; GUE/NGL 468
- Gender counts: Female 25,536; Male 23,461; Unknown 349
- Top countries: Germany 7,226; France 6,158; Poland 3,706; Spain 3,312; Italy 3,222; Netherlands 1,924; Greece 1,756; Romania 1,701; Czechia 1,661; Portugal 1,150; Belgium 1,134; Hungary 1,106

## Dataset Creation

### Curation Rationale

The dataset is intended to support multilingual political text research, enabling standardized tasks in gender/group prediction, stance/vote prediction, and information retrieval.

### Source Data

#### Data Collection and Processing

- Source: Official EP debates.
- Processing: metadata linking, language verification, deduplication, multilingual title extraction.
- Quality checks: consistency in language tags and IDs.

#### Who are the source data producers?

MEPs speaking in plenary debates; titles from official EP records.

### Annotations

#### Annotation process

Metadata compiled from public records; no manual stance labels.

#### Personal and Sensitive Information

The dataset contains the names and political opinions of public officials (MEPs).

## Bias, Risks, and Limitations

- Domain bias: formal political discourse only.
- Demographic inference tasks carry misuse risks (see Out-of-Scope Use).
- Language/script differences affect cross-lingual comparability.

### Recommendations

- Report per-language metrics, as in the sketch below.
- Avoid over-claiming causal interpretations.
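
A small helper like the following can report per-language scores; the `preds` frame and its `label`/`prediction` columns are hypothetical stand-ins for whatever a downstream model produces.

```python
# Sketch: per-language accuracy, given a hypothetical dataframe `preds` with one
# row per test example and columns "LANGUAGE", "label", and "prediction".
import pandas as pd

def per_language_accuracy(preds: pd.DataFrame) -> pd.Series:
    correct = preds["label"].eq(preds["prediction"])
    return correct.groupby(preds["LANGUAGE"]).mean().sort_values()
```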

## Citation

BibTeX:

@inproceedings{yang-etal-2024-language-bias,
  title     = {Language Bias in Multilingual Information Retrieval: The Nature of the Beast and Mitigation Methods},
  author    = {Yang, Jinrui and Jiang, Fan and Baldwin, Timothy},
  booktitle = {Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)},
  year      = {2024},
  pages     = {280--292},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2024.mrl-1.23/},
  doi       = {10.18653/v1/2024.mrl-1.23}
}

APA: Yang, J., Jiang, F., & Baldwin, T. (2024). Language bias in multilingual information retrieval: The nature of the beast and mitigation methods. In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024) (pp. 280–292). Association for Computational Linguistics.

## Contact

[email protected]