---
configs:
- config_name: default
  data_files:
  - split: en
    path: data/ultrafineweb_en/*
  - split: zh
    path: data/ultrafineweb_zh/*
features:
- name: content
  dtype: string
- name: score
  dtype: float
- name: source
  dtype: string
task_categories:
- text-generation
language:
- en
- zh
pretty_name: Ultra-FineWeb
size_categories:
- n>1T
---

# Ultra-FineWeb

- [📜 Technical Report]()
- [💻 Github Repo]()
- [🤗 Classifier Models]()

## 📚 Introduction

The Ultra-FineWeb dataset contains approximately 1 trillion English tokens and 120 billion Chinese tokens, released as the `en` and `zh` splits of this repository. Each record carries the fields declared in the YAML header above: `content` (the text), `score` (a quality score), and `source` (the originating corpus). A minimal loading sketch is provided at the end of this card.

## 📢 What's New

- **[2025.xx.xx]** The **Ultra-FineWeb** technical report is available on [arXiv](). 🔥🔥🔥
- Datasets and models are coming soon... 🔜🚀

## 💡 Highlights

## 📈 Evaluation Results

## ❤️ Acknowledgements

- The ***Ultra-FineWeb classifier*** is built on [fastText](https://fasttext.cc/).
- The ***Ultra-FineWeb-en dataset*** is built on [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
- The ***Ultra-FineWeb-zh dataset*** is constructed from [IndustryCorpus2](https://huggingface.co/datasets/BAAI/IndustryCorpus2), [MiChao](https://opendatalab.com/OpenDataLab/MiChao), [WuDao](https://data.baai.ac.cn/details/WuDaoCorporaText), [SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B), [WanJuan](https://opendatalab.com/OpenDataLab/WanJuanCC), [ChineseWebText](https://huggingface.co/datasets/CASIA-LM/ChineseWebText2.0), [TeleChat](https://huggingface.co/datasets/Tele-AI/TeleChat-PTD), and [CCI3](https://huggingface.co/datasets/BAAI/CCI3-Data).

Thanks for their awesome work! Open-source contributions make Ultra-FineWeb possible! 🙌

## 🌟 Citation

If you find our work useful, please consider citing:

```bibtex
Coming soon...
```

## 💳 License

This project is released under the [MIT License](./LICENSE). Please note that because ***Ultra-FineWeb*** is built from multiple source datasets, users should check the **license of each dataset individually** to ensure proper usage and compliance.
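
## 🛠️ Usage

A minimal loading sketch using the 🤗 `datasets` library, based on the splits and features declared in this card's YAML header. The repository id below is a placeholder (this card does not state the final Hub path), and streaming is used because the English split alone is on the order of a trillion tokens.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset.
REPO_ID = "your-org/Ultra-FineWeb"

# Stream the English split instead of downloading the full corpus.
en = load_dataset(REPO_ID, split="en", streaming=True)

# Inspect a few records; fields follow this card's YAML header:
# `content` (text), `score` (quality score), `source` (origin corpus).
for example in en.take(3):
    print(example["source"], example["score"])
    print(example["content"][:200])
```

The Chinese split is loaded the same way with `split="zh"`.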