---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: text-only
data_files:
- split: train
path: text-only/train-*
- split: validation
path: text-only/validation-*
- split: test
path: text-only/test-*
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15368746858.779654
num_examples: 5673373
- name: validation
num_bytes: 404439922.64724064
num_examples: 149299
- name: test
num_bytes: 404442631.57310516
num_examples: 149300
download_size: 9703633440
dataset_size: 16177629413
- config_name: text-only
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 14834731398.280304
num_examples: 5673373
- name: validation
num_bytes: 390386911.46022856
num_examples: 149299
- name: test
num_bytes: 390389526.2594667
num_examples: 149300
download_size: 9374463601
dataset_size: 15615507835.999998
license: cc-by-sa-3.0
task_categories:
- text-generation
- fill-mask
- feature-extraction
language:
- en
tags:
- wiki
- wikipedia
- pretrain
size_categories:
- 1M<n<10M
source_datasets: graelo/wikipedia
---
# wikipedia - 20230901.en - deduped
> purpose: train with less data while maintaining (most of) the quality
This is really more of a "high-quality, diverse sample" than a strict _"remove literal duplicate documents"_ pass. Source dataset: [graelo/wikipedia](https://huggingface.co/datasets/graelo/wikipedia).
## configs
### `default`
command:
```sh
python -m text_dedup.minhash \
--path $ds_name \
--name $dataset_config \
--split $data_split \
--cache_dir "./cache" \
--output $out_dir \
--column $text_column \
--ngram 4 --threshold 0.6 \
--hash_func xxh3 --hash_bits 16 --num_perm 64 \
--batch_size 10000
```
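The flags request word 4-gram shingles, a Jaccard similarity threshold of 0.6, and 64-permutation MinHash signatures hashed with 16-bit xxh3. Below is a minimal sketch of that general technique, using the `datasketch` library purely as an illustration; the actual run used the `text_dedup` CLI above, whose internals and shingling may differ.
```python
from datasketch import MinHash, MinHashLSH

def signature(text: str, ngram: int = 4, num_perm: int = 64) -> MinHash:
    """MinHash over word 4-gram shingles (illustrative; text_dedup's shingling may differ)."""
    tokens = text.split()
    shingles = {" ".join(tokens[i : i + ngram]) for i in range(max(len(tokens) - ngram + 1, 1))}
    m = MinHash(num_perm=num_perm)
    for s in shingles:
        m.update(s.encode("utf-8"))
    return m

docs = {
    "a": "The quick brown fox jumps over the lazy dog near the quiet river bank at dawn.",
    "b": "The quick brown fox jumps over the lazy dog near the quiet river bank at dusk.",
    "c": "A completely unrelated sentence about growing tomatoes on a balcony in July.",
}

# LSH index keyed to the same threshold / permutation count as the command above
lsh = MinHashLSH(threshold=0.6, num_perm=64)
for key, text in docs.items():
    lsh.insert(key, signature(text))

# keys whose estimated Jaccard similarity to "a" clears the threshold ("a" and, very likely, "b")
print(lsh.query(signature(docs["a"])))
```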
dedup:
```sh
Fingerprinting... (num_proc=40): 100% 6705754/6705754 [06:57<00:00, 16063.27 examples/s]
Iterating MinHashes...: 100% 671/671 [04:13<00:00, 2.65it/s]
Clustering...: 100% 10/10 [00:21<00:00, 2.18s/it]
Finding clusters... (num_proc=40): 100% 6705754/6705754 [06:38<00:00, 16839.42 examples/s]
Filtering clusters... (num_proc=40): 100% 6705754/6705754 [02:25<00:00, 46058.39 examples/s]
Saving the dataset (39/39 shards): 100% 5971972/5971972 [03:47<00:00, 26266.10 examples/s]
[10/23/23 02:29:41] INFO Loading : 78.82s
```
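Per the log, 5,971,972 of the 6,705,754 fingerprinted documents are kept (matching the combined train/validation/test counts below), i.e. roughly 89%:
```python
kept, total = 5_971_972, 6_705_754  # rows saved vs. rows fingerprinted (from the log above)
print(f"{kept / total:.1%} of source rows retained")  # -> 89.1%
```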
result:
```python
DatasetDict({
train: Dataset({
features: ['id', 'url', 'title', 'text'],
num_rows: 5673373
})
validation: Dataset({
features: ['id', 'url', 'title', 'text'],
num_rows: 149299
})
test: Dataset({
features: ['id', 'url', 'title', 'text'],
num_rows: 149300
})
})
```
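To load this config, something like the following works (a minimal usage sketch):
```python
from datasets import load_dataset

# the default config keeps all original columns: id, url, title, text
dataset = load_dataset("BEE-spoke-data/wikipedia-deduped", "default")
print(dataset)                        # DatasetDict with train/validation/test splits
print(dataset["train"][0]["title"])
```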
### `text-only`
This config is identical to `default` but with all columns except `text` removed.
```python
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
config_name = "text-only"
dataset = load_dataset("BEE-spoke-data/wikipedia-deduped", config_name)
```
## token counts
### train
Token counts were computed with the `tiktoken` GPT-4 tokenizer on the `text` column of the `train` split:
| | num_tokens |
|:------|----------------:|
| count | 5.67337e+06 |
| mean | 612.413 |
| std | 739.331 |
| min | 3 |
| 25% | 163 |
| 50% | 359 |
| 75% | 761 |
| max | 34298 |
Total tokens (`train` split): 3,474,446,396
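A minimal sketch of how such figures can be reproduced; the streaming mode and the 1,000-document slice here are just to keep the check quick, so drop the slice to recompute the full totals:
```python
from itertools import islice

import tiktoken
from datasets import load_dataset

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base
ds = load_dataset("BEE-spoke-data/wikipedia-deduped", "text-only", split="train", streaming=True)

# per-document token counts over a small sample
counts = [len(enc.encode(row["text"])) for row in islice(ds, 1000)]
print(sum(counts), sum(counts) / len(counts))
```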