---
license:
- cc-by-sa-3.0
- gfdl
language: en
size_categories:
- 100K<n<1M
task_categories:
- token-classification
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: raw
    dtype: string
  - name: text
    dtype: string
  - name: words
    sequence: string
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 52609904366.168655
    num_examples: 960000
  - name: validation
    num_bytes: 100561640.1165828
    num_examples: 1835
  - name: test
    num_bytes: 54801983.71475902
    num_examples: 1000
  download_size: 20400944781
  dataset_size: 52765267990
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
Overview
Chonkiepedia is a dataset of Chonkified (pre-chunked) English Wikipedia articles for fine-tuning models. It contains about 1 million articles.
Methodology
- We take the English Wikipedia and keep only articles that are at least 5,000 characters long (~1,000 words).
- We remove all references and "see also" sections.
- We normalize the text to remove any weird spacing and newlines.
- We run Chonkie's RecursiveChunker with specific parameters to return a list of good-quality chunks (on average).
- We join the chunks with the 🦛 emoji as a delimiter for efficient storage (see the sketch below).
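A minimal sketch of this pipeline is shown below, assuming Chonkie's RecursiveChunker API. The chunk size, the section-stripping regex, and the length threshold handling are illustrative stand-ins, since the exact parameters used to build Chonkiepedia are not listed here.

```python
# Illustrative sketch of the Chonkiepedia preprocessing steps above.
# The chunk_size value, cleaning regexes, and length threshold are
# assumptions; only the overall flow follows the methodology list.
import re

from chonkie import RecursiveChunker

DELIMITER = "🦛"  # emoji used to join chunks into a single string


def clean(raw: str) -> str:
    # Drop everything from a "References" or "See also" heading onward
    # (assumed heuristic for removing those sections).
    text = re.split(r"\n(?:References|See also)\n", raw, maxsplit=1)[0]
    # Normalize whitespace: collapse runs of blank lines and spaces.
    text = re.sub(r"\n{3,}", "\n\n", text)
    text = re.sub(r"[ \t]{2,}", " ", text)
    return text.strip()


def chonkify(raw: str) -> str | None:
    if len(raw) < 5000:  # keep only articles of roughly 1,000+ words
        return None
    chunker = RecursiveChunker(chunk_size=512)  # parameters are illustrative
    chunks = chunker.chunk(clean(raw))
    return DELIMITER.join(chunk.text for chunk in chunks)
```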
Usage
You can load the dataset from the Hugging Face Hub with the datasets library:
from datasets import load_dataset
dataset = load_dataset("chonkie/chonkiepedia", split="train")
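Each article's text field stores its chunks joined by the delimiter emoji, so you can recover the individual chunks by splitting on it; the delimiter below is assumed from the methodology above.

```python
# Recover the individual chunks of one article by splitting on the
# delimiter emoji (assumed from the methodology section).
DELIMITER = "🦛"

example = dataset[0]
chunks = example["text"].split(DELIMITER)
print(example["title"], "->", len(chunks), "chunks")
```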
License
This dataset is licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC BY-SA 3.0) and the GNU Free Documentation License (GFDL), just like the original Wikipedia content.
Citation
If you use this dataset, please cite it as follows:
@article{chonkiepedia2025,
  title={Chonkiepedia: A dataset of Chonkified Wikipedia for fine-tuning models},
  author={Chonkie, Inc.},
  year={2025}
}