---
license:
- cc-by-sa-3.0
- gfdl
language: en
size_categories:
- 100K<n<1M
---

![chonkiepedia](./assets/chonkiepedia.png)

# 📚🦛 Chonkiepedia: A dataset of Chonkified Wikipedia for fine-tuning models

## Overview

Chonkiepedia is a Chonkified (pre-chunked) version of English Wikipedia for fine-tuning models. It contains about 1 million articles, each split into chunks with Chonkie's `RecursiveChunker`.

## Methodology

1. We take English Wikipedia and keep only articles of at least 5,000 characters (~1,000 words).
2. We remove all references and `See also` sections.
3. We normalize the text to remove irregular spacing and stray newlines.
4. We run Chonkie's `RecursiveChunker` with fixed parameters to produce chunks of good quality on average (a sketch of this step appears at the end of this card).
5. We join the chunks with the `🦛` emoji as a delimiter for efficient storage, so each record can be split back into its chunks (see the decoding example at the end of this card).

## Usage

You can download the dataset from the Hugging Face Hub:

```python
from datasets import load_dataset

dataset = load_dataset("chonkie/chonkiepedia", split="train")
```

## License

This dataset is licensed under the [Creative Commons Attribution-ShareAlike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/) and the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.en.html), the same licenses as the original Wikipedia.

## Citation

If you use this dataset, please cite it as follows:

```bibtex
@misc{chonkiepedia2025,
  title={Chonkiepedia: A dataset of Chonkified Wikipedia for fine-tuning models},
  author={{Chonkie, Inc.}},
  year={2025}
}
```
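For reference, the chunking step (4) looks roughly like the sketch below. This is a minimal illustration, not the exact build script: the parameters used to produce the dataset are not documented on this card, so `chunk_size=512` is an assumed value.

```python
from chonkie import RecursiveChunker

# Assumption: chunk_size=512 is illustrative; the exact parameters used
# to build Chonkiepedia are not documented on this card.
chunker = RecursiveChunker(chunk_size=512)

# A cleaned article (after the filtering and normalization steps above).
article_text = "Hippos are large, mostly herbivorous mammals. " * 100

# Chunk the article, then join the chunk texts with the hippo delimiter
# for storage, as in step 5 of the methodology.
chunks = chunker.chunk(article_text)
stored = "🦛".join(chunk.text for chunk in chunks)
```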
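To recover the individual chunks from a record, split on the delimiter. A minimal sketch, assuming the joined chunks are stored in a `text` column (check `dataset.column_names` if the field is named differently):

```python
from datasets import load_dataset

dataset = load_dataset("chonkie/chonkiepedia", split="train")

# Assumption: the joined chunks live in a "text" column.
record = dataset[0]
chunks = record["text"].split("🦛")

print(f"First article has {len(chunks)} chunks")
print(chunks[0])
```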