---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: input_ids
    sequence: int32
  - name: phonemes
    sequence: string
  splits:
  - name: train
    num_bytes: 1634785210
    num_examples: 1571960
  download_size: 473382889
  dataset_size: 1634785210
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# PL-BERT-MS dataset
Combines the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) `20231101.ms` subset with the [news dataset](https://huggingface.co/datasets/mesolitica/TTS/tree/main/texts).
The tokenizer is from [mesolitica/PL-BERT-MS](https://huggingface.co/mesolitica/PL-BERT-MS).
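
A minimal sketch of loading the data and inspecting one row, assuming the dataset is published under a Hugging Face repo id (the id below is a placeholder, not stated in this card) and that the linked tokenizer loads with `transformers.AutoTokenizer`:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder repo id: replace with this dataset's actual Hugging Face repo id.
dataset = load_dataset("your-namespace/pl-bert-ms-dataset", split="train")
tokenizer = AutoTokenizer.from_pretrained("mesolitica/PL-BERT-MS")

row = dataset[0]
print(row["id"], row["url"], row["title"])              # article metadata
print(row["input_ids"][:20])                            # int32 token ids
print(row["phonemes"][:20])                             # phoneme strings
print(tokenizer.decode(row["input_ids"][:20]))          # token ids decoded back to text
```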
## Source code
All source code at https://github.com/mesolitica/PL-BERT-MS