---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1683061783
    num_examples: 259270
  download_size: 856655992
  dataset_size: 1683061783
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
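A minimal loading sketch using the standard `datasets` workflow; the repository ID below is a placeholder, since the card does not state the Hub path.

```python
from datasets import load_dataset

# Load the train split; replace the repo ID with the actual path on the Hub (assumption).
dataset = load_dataset("your-username/books-dataset", split="train")

print(dataset)                    # Dataset({features: ['text'], num_rows: 259270})
print(dataset[0]["text"][:500])   # preview the beginning of the first book
```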
This dataset contains books collected from online sources.

- Total examples: 259,270
- Total tokens: 380,220,892
- Average length: 1,466.5 tokens
- Maximum length: 81,761 tokens
- 95th percentile: 3,601.0 tokens
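For reference, a sketch of how statistics like these could be reproduced; the tokenizer and repository ID are assumptions, since the card does not name either.

```python
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer

# The tokenizer behind the reported counts is not specified on this card;
# GPT-2's tokenizer is used here purely as an illustration (assumption).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
dataset = load_dataset("your-username/books-dataset", split="train")

def count_tokens(batch):
    # Tokenize each text and record its length in tokens.
    return {"num_tokens": [len(ids) for ids in tokenizer(batch["text"])["input_ids"]]}

dataset = dataset.map(count_tokens, batched=True)
lengths = np.array(dataset["num_tokens"])

print(f"Total examples:  {len(lengths):,}")
print(f"Total tokens:    {lengths.sum():,}")
print(f"Average length:  {lengths.mean():.1f} tokens")
print(f"Maximum length:  {lengths.max():,} tokens")
print(f"95th percentile: {np.percentile(lengths, 95):.1f} tokens")
```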