---
license: odc-by
viewer: true
task_categories:
- text-generation
language:
- ro
tags:
- language-modeling
- causal-lm
- llm
pretty_name: FuLG
size_categories:
- 100B<n<1T
---
# ❄️ FuLG
The FuLG dataset is a Romanian language corpus of 150 billion tokens extracted from Common Crawl. It is the result of filtering and deduplication applied to 95 Common Crawl snapshots. The compressed dataset is 289 GB.
For more details, check the [arXiv preprint](https://arxiv.org/abs/2407.13657).
## How do I download this?
### Using 🤗 Datasets
```python
from datasets import load_dataset

# Full dataset
dataset = load_dataset("faur-ai/fulg")

# To load the data from a specific CC snapshot
dataset = load_dataset("faur-ai/fulg", data_dir="2018-05")
```
### Using Git
```bash
git clone https://huggingface.co/datasets/faur-ai/fulg
```
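The repository's data shards are stored via Git LFS, so make sure LFS is set up before cloning. If you only want part of the corpus, you can skip the automatic shard download and pull specific snapshot directories afterwards; the path pattern below is illustrative and should be adjusted to the actual repository layout:

```bash
git lfs install

# Optional: clone without fetching the data shards right away
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/faur-ai/fulg
cd fulg

# Fetch a single snapshot directory (illustrative path; check the repo layout)
git lfs pull --include "2018-05/*"
```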
## Data Fields
The data has the following fields (a short access example follows the list):

- `url`: URL of the source as a string
- `date_download`: date of crawl
- `digest`: hash of content
- `length`: length of content
- `nlines`: number of lines
- `source_domain`: domain of document
- `title`: title of document
- `raw_content`: text content as a string
- `cc_segment`: source Common Crawl segment
- `original_nlines`: original number of lines before processing
- `original_length`: original length before processing
- `language`: language (ro)
- `language_score`: language identification score
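For illustration, a small sketch that streams one document and reads a few of the fields listed above (again assuming the default train split):

```python
from datasets import load_dataset

# Peek at a single document's fields without downloading the corpus
doc = next(iter(load_dataset("faur-ai/fulg", split="train", streaming=True)))

print(doc["url"], doc["source_domain"])        # where the document came from
print(doc["language"], doc["language_score"])  # language tag and identification score
print(doc["raw_content"][:200])                # first 200 characters of the text
```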
## Licensing Information
We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## BibTeX
If you use our dataset, please cite us:
```bibtex
@misc{fulg150bromaniancorpus,
      title={FuLG: 150B Romanian Corpus for Language Model Pretraining},
      author={Vlad-Andrei Bădoiu and Mihai-Valentin Dumitru and Alexandru M. Gherghescu and Alexandru Agache and Costin Raiciu},
      year={2024},
      eprint={2407.13657},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.13657},
}
```