---
dataset_name: fineweb2-llm-annotated
pretty_name: JQL LLMs Multilingual Educational Quality Annotations
license: odc-by
source_license: Same as FineWeb2 (see upstream dataset)
size_categories:
- 10M<n<100M
language:
- bg
- cs
- hr
- mk
- pl
- sl
- sk
- sr
- uk
- da
- de
- is
- nl
- nn
- nb
- sv
- ca
- es
- fr
- ga
- gl
- it
- pt
- ro
- et
- fi
- hu
- lt
- lv
- el
- mt
- tr
- sq
- eu
- hy
- en
---

# JQL Educational Quality Annotations from LLMs
This dataset provides 17,186,606 documents with high-quality LLM annotations for evaluating the educational value of web documents, and serves as a benchmark for training and evaluating multilingual LLM annotators as described in the JQL paper.
## Dataset Summary
Multilingual document-level quality annotations scored on a 0–5 educational value scale by three state-of-the-art LLMs: Gemma-3-27B-it, Mistral-3.1-24B-it, and LLaMA-3.3-70B-it. Up to 500k documents per language from FineWeb2 are included. Annotations are aligned with human ratings and intended for quality estimation, distillation, and multilingual benchmark research.
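The released `aggregation` field records, per model, whether a document's score came from a majority vote or an average over repeated generations. The snippet below is only a minimal sketch of such a rule, not the procedure used in the JQL paper; the exact aggregation logic and the handling of invalid generations are assumptions.

```python
from collections import Counter

def aggregate_scores(scores: list[int]) -> tuple[float, str]:
    """Aggregate several 0-5 scores from one model into a single value.

    Assumption: if one score is produced by more than half of the
    generations we keep it ("majority"); otherwise we fall back to the
    mean ("average"). Invalid generations are assumed to be encoded as -1
    before this step and excluded upstream.
    """
    counts = Counter(scores)
    score, freq = counts.most_common(1)[0]
    if freq > len(scores) / 2:
        return float(score), "majority"
    return sum(scores) / len(scores), "average"

# Example: three generations from one annotator model
print(aggregate_scores([4, 4, 3]))  # -> (4.0, 'majority')
print(aggregate_scores([2, 3, 4]))  # -> (3.0, 'average')
```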
## Languages
In total, 35 European languages are covered. Input documents are in their native languages, but the models were prompted in English and responded in English.
## Dataset Structure
| Name | Description |
|---|---|
| id | Unique FineWeb2 identifier for the document |
| text | Full textual content extracted from the webpage |
| dump | Common Crawl dump identifier from which the data originates |
| url | Source URL of the document |
| date | Timestamp indicating when the document was crawled (ISO 8601 format) |
| file_path | Path to the WARC file in the Common Crawl S3 bucket |
| language | ISO 639-3 language code of the document (e.g., deu) |
| language_script | Script used in the document (e.g., Latn for Latin script) |
| language_score | Confidence score of the language identification (float between 0 and 1) |
| top_langs | JSON string mapping detected language-script pairs to their scores |
| minhash_cluster_size | Number of documents in the deduplication cluster |
| filter_reason | Reason why the document would have been filtered or deduplicated (e.g., duplicated_5_n_grams); NaN if it would not have been filtered |
| edu_score | Dictionary with per-model aggregated scores (modelname_score); -1 if an invalid score was generated |
| aggregation | Dictionary with per-model aggregation type (modelname_type), either majority or average |
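A minimal way to inspect these fields with the `datasets` library is sketched below. The repository id is a placeholder, the default `train` split and the exact keys inside edu_score / aggregation are assumptions, and streaming is used so the full 17M-document corpus is not downloaded.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual dataset path on the Hub.
ds = load_dataset("your-org/fineweb2-llm-annotated", split="train", streaming=True)

sample = next(iter(ds))
print(sample["id"], sample["language"], sample["language_script"])
print(sample["edu_score"])    # per-model scores, e.g. {"<modelname>_score": 4.0, ...}
print(sample["aggregation"])  # per-model aggregation type, "majority" or "average"
```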
## Data Splits
This dataset is not pre-split. Users can generate custom splits (see the sketch after this list) by:
- Language
- Model agreement
- Prediction validity
- Document length or other features
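As a sketch of such custom splits (reusing the placeholder repository id and assumed field layout from above), per-language subsets and a prediction-validity filter can be derived with ordinary `filter` calls:

```python
from datasets import load_dataset

ds = load_dataset("your-org/fineweb2-llm-annotated", split="train", streaming=True)

# Split by language (ISO 639-3 codes), e.g. keep only German documents.
german = ds.filter(lambda ex: ex["language"] == "deu")

# Keep only documents for which every model produced a valid (non -1) score.
valid = ds.filter(lambda ex: all(v != -1 for v in ex["edu_score"].values()))
```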
## Intended Use
- Training multilingual document quality models
- Benchmarking multilingual LLM performance
- Distillation and teacher-student LLM training
- Creating filters for noisy web-scale data (see the sketch below)
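Continuing the sketch above, one possible (assumed, not prescribed) filtering recipe is to average the valid per-model scores and keep documents above a threshold:

```python
def mean_edu_score(edu_score: dict) -> float:
    """Average the valid per-model scores of one document (assumed dict layout)."""
    valid = [v for v in edu_score.values() if v != -1]
    return sum(valid) / len(valid) if valid else -1.0

# Keep documents whose average educational score is at least 3 (example threshold).
high_quality = ds.filter(lambda ex: mean_edu_score(ex["edu_score"]) >= 3.0)
```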
## Limitations
- LLM-generated scores, not human-authored
- Some predictions may be invalid or inconsistent
- No domain control across documents
- Educational value is a subjective, task-specific metric
## Citation
```bibtex
@article{ali2025judging,
  title   = {Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models},
  author  = {Mehdi Ali and Manuel Brack and Max Lübbering and Elias Wendt and Abbas Goher Khan and Richard Rutmann and Alex Jude and Maurice Kraus and Alexander Arno Weber and Felix Stollenwerk and David Kaczér and Florian Mai and Lucie Flek and Rafet Sifa and Nicolas Flores-Herr and Joachim Köhler and Patrick Schramowski and Michael Fromm and Kristian Kersting},
  year    = {2025},
  journal = {arXiv preprint arXiv:2505.22232}
}
```
## Links
- Base Dataset: FineWeb2
- Related Work: FineWeb2 LLM Judging Section