# Ultra-FineWeb

## Introduction
Ultra-FineWeb is a large-scale, high-quality, and efficiently filtered dataset. We apply the proposed efficient verification-based high-quality filtering pipeline to the FineWeb and Chinese FineWeb datasets (source data from Chinese FineWeb-edu-v2, which includes IndustryCorpus2, MiChao, WuDao, SkyPile, WanJuan, ChineseWebText, TeleChat, and CCI3), resulting in the higher-quality Ultra-FineWeb-en dataset with approximately 1T tokens and the Ultra-FineWeb-zh dataset with approximately 120B tokens, collectively referred to as Ultra-FineWeb.
## What's New
- [2025.05.09] The Ultra-FineWeb technical report is available on arXiv.
- Datasets and models are coming soon...
## Highlights
Abstract: Data quality has become a key factor in enhancing model performance with the rapid development of large language models (LLMs). Model-driven data filtering has increasingly become a primary approach for acquiring high-quality data. However, it still faces two main challenges: (1) the lack of an efficient data verification strategy makes it difficult to provide timely feedback on data quality; and (2) the selection of seed data for training classifiers lacks clear criteria and relies heavily on human expertise, introducing a degree of subjectivity. To address the first challenge, we introduce an efficient verification strategy that enables rapid evaluation of the impact of data on LLM training with minimal computational cost. To tackle the second challenge, we build upon the assumption that high-quality seed data is beneficial for LLM training, and by integrating the proposed verification strategy, we optimize the selection of positive and negative samples and propose an efficient data filtering pipeline. This pipeline not only improves filtering efficiency, classifier quality, and robustness, but also significantly reduces experimental and inference costs. In addition, to efficiently filter high-quality data, we employ a lightweight classifier based on fastText, and successfully apply the filtering pipeline to two widely-used pre-training corpora, FineWeb and Chinese FineWeb datasets, resulting in the creation of the higher-quality Ultra-FineWeb dataset. Ultra-FineWeb contains approximately 1 trillion (T) English tokens and 120 billion (B) Chinese tokens. Empirical results demonstrate that the LLMs trained on Ultra-FineWeb exhibit significant performance improvements across multiple benchmark tasks, validating the effectiveness of our pipeline in enhancing both data quality and training efficiency.

- Efficient Verification Strategy: We propose a computationally efficient verification strategy that enables rapid evaluation of the impact of data on LLM training performance with minimal computational cost, significantly improving the efficiency of high-quality data filtering experiments.
- Large-Scale High-Quality Pre-training Datasets: We design and implement an efficient high-quality data filtering pipeline, applied to the FineWeb and Chinese FineWeb datasets, resulting in the creation of higher-quality datasets, which can facilitate high-quality LLM training.
- Lightweight Classifier: The Ultra-FineWeb classifier significantly reduces inference costs, achieving superior performance on extracted text from the same data source, thus validating the effectiveness of our proposed data filtering pipeline in enhancing data quality and training efficiency.
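The released classifier itself is not reproduced here, but the filtering step it performs can be sketched in a few lines. The following is a minimal illustration, assuming a fastText-style setup: seed samples are formatted as `__label__` training lines, and documents are kept when their high-quality score clears a threshold. The `score_fn` stub and the `0.6` threshold are hypothetical placeholders, not the actual Ultra-FineWeb classifier or its operating point.

```python
# Minimal sketch of fastText-style quality filtering (hypothetical names and
# threshold; the real Ultra-FineWeb classifier and settings may differ).

def to_fasttext_line(text: str, label: str) -> str:
    """Format a seed sample as a fastText supervised training line,
    e.g. '__label__hq some normalized text'."""
    return f"__label__{label} " + " ".join(text.split())

def filter_corpus(docs, score_fn, threshold=0.5):
    """Keep documents whose high-quality probability meets the threshold.

    `score_fn` stands in for the predict() call of a trained fastText model.
    """
    return [d for d in docs if score_fn(d) >= threshold]

if __name__ == "__main__":
    # Toy stand-in scorer for illustration only: longer documents score higher.
    toy_score = lambda d: min(len(d.split()) / 10, 1.0)
    docs = ["short", "a much longer and more informative document about LLM training data"]
    print(to_fasttext_line("High quality sample", "hq"))
    print(filter_corpus(docs, toy_score, threshold=0.6))
```

In the actual pipeline, `score_fn` would be a fastText model trained on positive and negative seed samples selected via the verification strategy described above.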
## Evaluation Results
We use the MiniCPM-1.2B model architecture with the MiniCPM3-4B tokenizer. Each experiment trains on 100B tokens, allowing comprehensive data-performance validation at modest computational cost. We employ the Lighteval library for model evaluation, adopting 11 benchmarks to evaluate the trained models; all evaluation metrics are computed in a zero-shot setting. The benchmarks include:
- English benchmarks: MMLU, ARC-C, ARC-E, CommonSenseQA, HellaSwag, OpenbookQA, PIQA, SIQA, and Winogrande.
- Chinese benchmarks: C-Eval and CMMLU.
Detailed evaluation results are reported below:
Individual Data Experiments. We perform isolated training runs using single datasets, facilitating direct comparisons between differently processed data from identical sources.
Mixed Data Experiments. We use a mix of 60% English data, 30% Chinese data, and 10% code data (StarCoder-v2).
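The 60/30/10 mixture above can be sketched as a simple token-budget allocation. This is an illustrative helper only, assuming a fixed per-source ratio split; the authors' actual sampling machinery is not described here.

```python
# Sketch of splitting a training token budget by the mixed-data ratios:
# 60% English, 30% Chinese, 10% code (StarCoder-v2). Illustration only.

def allocate_tokens(total_tokens: int, ratios: dict) -> dict:
    """Split a token budget across sources by ratio (ratios must sum to 1)."""
    assert abs(sum(ratios.values()) - 1.0) < 1e-9, "ratios must sum to 1"
    # round() avoids float truncation artifacts from int() on products like 0.6 * n
    return {name: round(total_tokens * r) for name, r in ratios.items()}

if __name__ == "__main__":
    budget = allocate_tokens(100, {"en": 0.6, "zh": 0.3, "code": 0.1})
    print(budget)  # {'en': 60, 'zh': 30, 'code': 10}
```

Scaling the same ratios to the 100B-token budget used in these experiments yields 60B English, 30B Chinese, and 10B code tokens.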
Loss and Performance Estimation Results. We use the performance estimation method proposed in Densing Law to further analyze and verify the effectiveness of Ultra-FineWeb.


## Acknowledgements
- The Ultra-FineWeb classifier is built on fastText.
- The Ultra-FineWeb-en dataset is built on FineWeb.
- The Ultra-FineWeb-zh dataset is constructed from IndustryCorpus2, MiChao, WuDao, SkyPile, WanJuan, ChineseWebText, TeleChat, and CCI3.
Thanks for their awesome work! Open-source contributions make Ultra-FineWeb possible!
## Citation
If you find our work useful, please consider citing:
```bibtex
@misc{wang2025ultrafineweb,
      title={{Ultra-FineWeb}: Efficient Data Filtering and Verification for High-Quality LLM Training Data},
      author={Yudong Wang and Zixuan Fu and Jie Cai and Peijun Tang and Hongya Lyu and Yewei Fang and Zhi Zheng and Jie Zhou and Guoyang Zeng and Chaojun Xiao and Xu Han and Zhiyuan Liu},
      year={2025},
      eprint={2505.05427},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
}
```
## License
This project is released under the MIT License. Please note that since Ultra-FineWeb is built from multiple datasets, users should check the LICENSE of each constituent dataset individually to ensure proper usage and compliance.