---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: input_ids
    sequence: int32
  - name: phonemes
    sequence: string
  splits:
  - name: train
    num_bytes: 1634785210
    num_examples: 1571960
  download_size: 473382889
  dataset_size: 1634785210
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# PL-BERT-MS dataset

Combines `wikimedia/wikipedia` (the `20231101.ms` subset) with a news dataset. A quick way to inspect the schema described above is shown below.
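A minimal sketch using the `datasets` library; the repository id below is an assumption, so substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# The repository id is an assumption; replace it with this dataset's actual path.
dataset = load_dataset("mesolitica/PL-BERT-MS-dataset", split="train")

# Each record carries the article metadata plus the aligned token ids
# and phoneme strings described in the schema above.
example = dataset[0]
print(example["title"])
print(example["input_ids"][:10])   # sequence of int32 token ids
print(example["phonemes"][:10])    # sequence of phoneme strings
```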
The tokenizer is from `mesolitica/PL-BERT-MS`.
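A minimal sketch of pairing the dataset with that tokenizer, assuming `mesolitica/PL-BERT-MS` exposes a tokenizer loadable through `transformers.AutoTokenizer` (if the project ships a custom tokenizer, follow the loading code in the GitHub repository instead); the dataset repository id is also an assumption:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Both repository ids are assumptions; substitute the actual Hub paths.
tokenizer = AutoTokenizer.from_pretrained("mesolitica/PL-BERT-MS")
dataset = load_dataset("mesolitica/PL-BERT-MS-dataset", split="train")

# `input_ids` were produced with this tokenizer, so decoding an example's ids
# should recover (a normalised form of) the article text.
print(tokenizer.decode(dataset[0]["input_ids"]))
```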
## Source code

All source code is available at https://github.com/mesolitica/PL-BERT-MS.