|
--- |
|
language: |
|
- pt |
|
license: cc-by-4.0 |
|
size_categories: |
|
- 10M<n<100M |
|
task_categories: |
|
- text-generation |
|
pretty_name: CrawlPT (deduplicated) |
|
dataset_info: |
|
- config_name: OSCAR-2301 |
|
features: |
|
- name: id |
|
dtype: int64 |
|
- name: text |
|
dtype: string |
|
- name: meta |
|
struct: |
|
- name: categories |
|
sequence: string |
|
- name: dedup |
|
struct: |
|
- name: exact_norm |
|
struct: |
|
- name: cluster_main_idx |
|
dtype: int64 |
|
- name: cluster_size |
|
dtype: int64 |
|
- name: exact_hash_idx |
|
dtype: int64 |
|
- name: is_duplicate |
|
dtype: bool |
|
- name: minhash |
|
struct: |
|
- name: cluster_main_idx |
|
dtype: int64 |
|
- name: cluster_size |
|
dtype: int64 |
|
- name: is_duplicate |
|
dtype: bool |
|
- name: minhash_idx |
|
dtype: int64 |
|
- name: harmful_pp |
|
dtype: float64 |
|
- name: identification |
|
struct: |
|
- name: label |
|
dtype: string |
|
- name: prob |
|
dtype: float64 |
|
- name: quality_warnings |
|
sequence: string |
|
- name: sentence_identifications |
|
list: |
|
- name: label |
|
dtype: string |
|
- name: prob |
|
dtype: float64 |
|
- name: tlsh |
|
dtype: string |
|
- name: warc_headers |
|
struct: |
|
- name: content-length |
|
dtype: int64 |
|
- name: content-type |
|
dtype: string |
|
- name: warc-block-digest |
|
dtype: string |
|
- name: warc-date |
|
dtype: string |
|
- name: warc-identified-content-language |
|
dtype: string |
|
- name: warc-record-id |
|
dtype: string |
|
- name: warc-refers-to |
|
dtype: string |
|
- name: warc-target-uri |
|
dtype: string |
|
- name: warc-type |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 77259995670.30853 |
|
num_examples: 10888966 |
|
download_size: 42589347661 |
|
dataset_size: 77259995670.30853 |
|
- config_name: all |
|
features: |
|
- name: id |
|
dtype: int64 |
|
- name: source |
|
dtype: string |
|
- name: orig_id |
|
dtype: int64 |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 133074727589 |
|
num_examples: 52462533 |
|
download_size: 81483949567 |
|
dataset_size: 133074727589 |
|
- config_name: brwac |
|
features: |
|
- name: id |
|
dtype: int64 |
|
- name: text |
|
dtype: string |
|
- name: meta |
|
struct: |
|
- name: dedup |
|
struct: |
|
- name: exact_norm |
|
struct: |
|
- name: cluster_main_idx |
|
dtype: int64 |
|
- name: cluster_size |
|
dtype: int64 |
|
- name: exact_hash_idx |
|
dtype: int64 |
|
- name: is_duplicate |
|
dtype: bool |
|
- name: minhash |
|
struct: |
|
- name: cluster_main_idx |
|
dtype: int64 |
|
- name: cluster_size |
|
dtype: int64 |
|
- name: is_duplicate |
|
dtype: bool |
|
- name: minhash_idx |
|
dtype: int64 |
|
- name: doc_id |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: uri |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 18218935459.169613 |
|
num_examples: 3513588 |
|
download_size: 11210909325 |
|
dataset_size: 18218935459.169613 |
|
- config_name: cc100 |
|
features: |
|
- name: id |
|
dtype: int64 |
|
- name: text |
|
dtype: string |
|
- name: meta |
|
struct: |
|
- name: dedup |
|
struct: |
|
- name: exact_norm |
|
struct: |
|
- name: cluster_main_idx |
|
dtype: int64 |
|
- name: cluster_size |
|
dtype: int64 |
|
- name: exact_hash_idx |
|
dtype: int64 |
|
- name: is_duplicate |
|
dtype: bool |
|
- name: minhash |
|
struct: |
|
- name: cluster_main_idx |
|
dtype: int64 |
|
- name: cluster_size |
|
dtype: int64 |
|
- name: is_duplicate |
|
dtype: bool |
|
- name: minhash_idx |
|
dtype: int64 |
|
splits: |
|
- name: train |
|
num_bytes: 53707749127.11777 |
|
num_examples: 38059979 |
|
download_size: 34844109320 |
|
dataset_size: 53707749127.11777 |
|
configs: |
|
- config_name: OSCAR-2301 |
|
data_files: |
|
- split: train |
|
path: OSCAR-2301/train-* |
|
- config_name: all |
|
data_files: |
|
- split: train |
|
path: all/train-* |
|
- config_name: brwac |
|
data_files: |
|
- split: train |
|
path: brwac/train-* |
|
- config_name: cc100 |
|
data_files: |
|
- split: train |
|
path: cc100/train-* |
|
--- |
|
|
|
# CrawlPT (deduplicated) |
|
|
|
CrawlPT is a general-domain Portuguese corpus extracted from various web pages.
|
|
|
This version is deduplicated using the MinHash algorithm and Locality-Sensitive Hashing (LSH), following the approach of Lee et al. (2022).
|
The raw version is also available [here](https://huggingface.co/datasets/eduagarcia/CrawlPT). |
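
Each corpus is exposed as a separate configuration (`brwac`, `cc100`, `OSCAR-2301`), and `all` concatenates the three. Below is a minimal loading sketch with the `datasets` library; the repository id is a placeholder and should be replaced with this dataset's actual path on the Hugging Face Hub:

```python
from datasets import load_dataset

# Configurations: "brwac", "cc100", "OSCAR-2301", or "all" (the three combined).
# NOTE: the repository id below is a placeholder; replace it with the actual
# Hub path of this deduplicated release.
dataset = load_dataset("eduagarcia/CrawlPT_dedup", "brwac", split="train", streaming=True)

print(next(iter(dataset))["text"][:200])
```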
|
|
|
## Dataset Details |
|
|
|
The dataset is composed of three corpora:

[brWaC](https://aclanthology.org/L18-1686/), [C100-PT](https://arxiv.org/abs/1911.02116), and [OSCAR-2301](http://arxiv.org/abs/2201.06642).
|
|
|
- **brWaC**: a web corpus for Brazilian Portuguese from 120,000 different websites. |
|
- **C100-PT**: the Portuguese subset of CC-100. CC-100 was created for training the multilingual Transformer XLM-R and contains two terabytes of cleaned data from 2018 snapshots of the [Common Crawl project](https://commoncrawl.org/about/) in 100 languages. We use the Portuguese subset, which contains 49.1 GiB of text.
|
- **OSCAR-2301-PT**: the Portuguese-language subset curated from OSCAR-2301.
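
The combined `all` configuration keeps `source` and `orig_id` fields on every record (see the schema above), so the mix of corpora can be inspected or split back apart. A short sketch, using the same placeholder repository id as in the loading example above; the exact `source` string values are an assumption here:

```python
from collections import Counter

from datasets import load_dataset

# Tally the source corpus of the first few thousand records in the "all" config.
all_ds = load_dataset("eduagarcia/CrawlPT_dedup", "all", split="train", streaming=True)

counts = Counter(record["source"] for record in all_ds.take(2000))
print(counts)  # e.g. Counter({"cc100": ..., "OSCAR-2301": ..., "brwac": ...})
```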
|
|
|
### Dataset Description |
|
|
|
- **Language(s) (NLP):** Brazilian Portuguese (pt-BR) |
|
- **License:** [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/deed.en) |
|
- **Repository:** https://github.com/eduagarcia/roberta-legal-portuguese |
|
- **Paper:** [More Information Needed] |
|
|
|
## Data Collection and Processing |
|
|
|
Raw corpora sizes in terms of billions of tokens and file size in GiB: |
|
|
|
| Corpus | Domain | Tokens (B) | Size (GiB) | |
|
|-----------------|:-------:|:----------:|:----------:| |
|
| brWaC | General | 2.7 | 16.3 | |
|
| CC100 (PT) | General | 8.4 | 49.1 | |
|
| OSCAR-2301 (PT) | General | 18.1 | 97.8 | |
|
|
|
|
|
CrawlPT is deduplicated using [MinHash algorithm](https://dl.acm.org/doi/abs/10.5555/647819.736184) and [Locality Sensitive Hashing](https://dspace.mit.edu/bitstream/handle/1721.1/134231/v008a014.pdf?sequence=2&isAllowed=y), following the approach of [Lee et al. (2022)](http://arxiv.org/abs/2107.06499). |
|
|
|
We used 5-grams and a signature of size 256, considering two documents to be duplicates if their Jaccard similarity exceeded 0.7.
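
As an illustration, these parameters map directly onto a MinHash-LSH index such as the one provided by the `datasketch` library. The snippet below is only a minimal sketch under stated assumptions (word-level 5-gram shingles, a toy `documents` list), not the exact pipeline used to build the corpus:

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 256   # signature size
THRESHOLD = 0.7  # approximate Jaccard similarity threshold
NGRAM = 5        # word-level 5-grams (an assumption of this sketch)

documents = [
    "o rato roeu a roupa do rei de roma",
    "o rato roeu a roupa do rei de roma hoje",
    "um texto completamente diferente sobre outro assunto qualquer",
]

def signature(text: str) -> MinHash:
    tokens = text.split()
    shingles = {" ".join(tokens[i:i + NGRAM]) for i in range(max(len(tokens) - NGRAM + 1, 1))}
    m = MinHash(num_perm=NUM_PERM)
    for s in shingles:
        m.update(s.encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
kept = []
for idx, doc in enumerate(documents):
    m = signature(doc)
    if lsh.query(m):  # a near-duplicate is already indexed; drop this document
        continue
    lsh.insert(str(idx), m)
    kept.append(idx)

print(kept)  # indices of documents retained after deduplication
```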
|
Duplicate rates found by the MinHash-LSH algorithm for the CrawlPT corpora:
|
|
|
| Corpus | Documents | Docs. after deduplication | Duplicates (%) |
|
|------------------------|:----------:|:-------------------------:|:--------------:| |
|
| brWaC | 3,530,796 | 3,513,588 | 0.49 | |
|
| OSCAR-2301 (PT Subset) | 18,031,400 | 10,888,966 | 39.61 | |
|
| CC100 (PT Subset) | 38,999,388 | 38,059,979 | 2.41 | |
|
| Total (CrawlPT) | 60,561,584 | 52,462,533 | 13.37 | |
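
The per-corpus configurations keep the deduplication bookkeeping in each record's `meta.dedup` field (exact-hash and MinHash cluster information, as listed in the schema above). A quick way to inspect it, again with the placeholder repository id:

```python
from datasets import load_dataset

# Peek at the MinHash deduplication metadata of the first brWaC record.
record = next(iter(load_dataset("eduagarcia/CrawlPT_dedup", "brwac", split="train", streaming=True)))
print(record["meta"]["dedup"]["minhash"])
# -> {"cluster_main_idx": ..., "cluster_size": ..., "is_duplicate": ..., "minhash_idx": ...}
```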
|
|
|
|
|
## Citation |
|
|
|
```bibtex |
|
@InProceedings{garcia2024_roberlexpt, |
|
author="Garcia, Eduardo A. S. |
|
and Silva, N{\'a}dia F. F. |
|
and Siqueira, Felipe |
|
and Gomes, Juliana R. S. |
|
and Albuquerque, Hidelberg O.
|
and Souza, Ellen |
|
and Lima, Eliomar |
|
and De Carvalho, André", |
|
title="RoBERTaLexPT: A Legal RoBERTa Model pretrained with deduplication for Portuguese", |
|
booktitle="Computational Processing of the Portuguese Language", |
|
year="2024", |
|
publisher="Association for Computational Linguistics" |
|
} |
|
``` |
|
|
|
## Acknowledgment |
|
|
|
This work has been supported by the AI Center of Excellence (Centro de Excelência em Inteligência Artificial – CEIA) of the Institute of Informatics at the Federal University of Goiás (INF-UFG). |