ContextLab Ruth Plumly Thompson Corpus
Dataset Description
This dataset contains works of Ruth Plumly Thompson (1891-1976), preprocessed for computational stylometry research. The texts were sourced from Project Gutenberg and cleaned for use in the paper "A Stylometric Application of Large Language Models" (Stropkay et al., 2025).
The corpus includes 13 of Ruth Plumly Thompson's Oz books (she wrote books 15-33 of the series, continuing L. Frank Baum's work). All text has been converted to lowercase and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.
Quick Stats
- Books: 13
- Total characters: 2,932,685
- Total words: 520,058 (approximate)
- Average book length: 225,591 characters
- Format: Plain text (.txt files)
- Language: English (lowercase)
Dataset Structure
Books Included
Each .txt file contains the complete text of one book:
| File | Title |
|---|---|
| 53765.txt | Kabumpo in Oz |
| 55806.txt | Ozoplaning with the Wizard of Oz |
| 55851.txt | The Wishing Horse of Oz |
| 56073.txt | Captain Salt in Oz |
| 56079.txt | Handy Mandy in Oz |
| 56085.txt | The Silver Princess in Oz |
| 58765.txt | The Cowardly Lion of Oz |
| 61681.txt | Grampa in Oz |
| 65849.txt | The Lost King of Oz |
| 70152.txt | The Hungry Tiger of Oz |
| 71273.txt | The Gnome King of Oz |
| 73170.txt | The Giant Horse of Oz |
| 75720.txt | Jack Pumpkinhead of Oz |
Data Fields
- text: Complete book text (lowercase, cleaned)
- filename: Project Gutenberg ID
Data Format
All files are plain UTF-8 text:
- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text
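These properties can be spot-checked directly. A minimal sketch (the paragraph-break check assumes blank-line separators, which is an assumption about the files rather than a documented guarantee):

```python
from datasets import load_dataset

corpus = load_dataset("contextlab/thompson-corpus")

# Spot-check the stated format guarantees on every book
for book in corpus['train']:
    text = book['text']
    assert text == text.lower(), "expected lowercase-only text"
    assert "\n\n" in text, "expected blank-line paragraph breaks (assumption)"
```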
Usage
Load with datasets library
```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/thompson-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```
Load specific file
```python
from datasets import load_dataset

# Load a single book by filename (Gutenberg ID)
dataset = load_dataset(
    "contextlab/thompson-corpus",
    data_files="53765.txt"  # Kabumpo in Oz
)
text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```
Download files directly
```python
from huggingface_hub import hf_hub_download

# Download one book
file_path = hf_hub_download(
    repo_id="contextlab/thompson-corpus",
    filename="53765.txt",  # Kabumpo in Oz
    repo_type="dataset"
)

with open(file_path, 'r') as f:
    text = f.read()
```
Use for training language models
```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/thompson-corpus")

# Tokenize (GPT-2 has no pad token, so reuse the end-of-text token for padding)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# The collator pads each batch and sets labels = input_ids for causal LM training
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)
trainer.train()
```
Analyze text statistics
```python
import numpy as np
from datasets import load_dataset

corpus = load_dataset("contextlab/thompson-corpus")

# Per-book character counts
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```
Dataset Creation
Source Data
All texts sourced from Project Gutenberg, a library of over 70,000 free eBooks in the public domain.
Project Gutenberg Links:
- Books are identified by their Gutenberg ID numbers (the filenames)
- Example: 53765.txt corresponds to https://www.gutenberg.org/ebooks/53765
- All works are in the public domain
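Because filenames are Gutenberg IDs, the source page for any book can be rebuilt from its filename. A small sketch (`gutenberg_url` is a hypothetical helper for illustration, not part of the dataset):

```python
# Hypothetical helper: map a corpus filename to its Project Gutenberg page
def gutenberg_url(filename: str) -> str:
    book_id = filename.removesuffix(".txt")
    return f"https://www.gutenberg.org/ebooks/{book_id}"

print(gutenberg_url("53765.txt"))  # https://www.gutenberg.org/ebooks/53765
```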
Preprocessing Pipeline
The raw Project Gutenberg texts underwent the following preprocessing:
- Header/footer removal: Project Gutenberg license text and metadata removed
- Lowercase conversion: All text converted to lowercase for stylometry
- Chapter heading removal: Chapter titles and numbering removed
- Non-narrative text removal: Tables of contents, dedications, etc. removed
- Encoding normalization: Converted to UTF-8
- Structure preservation: Paragraph breaks and punctuation maintained
Why lowercase? Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.
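As a rough illustration of these steps, here is a simplified sketch; the Gutenberg marker patterns and chapter-heading regex are assumptions (marker wording varies across books), and the actual pipeline is in the repository linked below:

```python
import re

def preprocess(raw: str) -> str:
    # Strip the Gutenberg header/footer using the standard marker lines
    # (these patterns are assumptions; exact wording varies by book)
    start = re.search(r"\*\*\* START OF TH(E|IS) PROJECT GUTENBERG EBOOK .*\*\*\*", raw)
    end = re.search(r"\*\*\* END OF TH(E|IS) PROJECT GUTENBERG EBOOK .*\*\*\*", raw)
    body = raw[start.end():end.start()] if start and end else raw

    # Remove chapter headings (e.g., "CHAPTER 12" or "Chapter XII" on its own line)
    body = re.sub(r"^\s*chapter\s+[\divxlc]+.*$", "", body,
                  flags=re.IGNORECASE | re.MULTILINE)

    # Lowercase, preserving punctuation and paragraph breaks
    return body.lower().strip()
```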
Preprocessing code: Available at https://github.com/ContextLab/llm-stylometry
Considerations for Using This Dataset
Known Limitations
- Historical language: Reflects early-to-mid 20th-century American vocabulary, grammar, and cultural context
- Lowercase only: All text converted to lowercase (not suitable for case-sensitive analysis)
- Incomplete corpus: May not include all of Ruth Plumly Thompson's writings (only public domain works on Gutenberg)
- Cleaning artifacts: Some formatting irregularities may remain from Gutenberg source
- Public domain only: Limited to works published before copyright restrictions
Intended Use Cases
- Stylometry research: Authorship attribution, style analysis (see the sketch after this list)
- Language modeling: Training author-specific models
- Literary analysis: Computational study of Ruth Plumly Thompson's writing
- Historical NLP: Early-to-mid 20th-century American language patterns
- Educational: Teaching computational text analysis
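As one concrete example of the stylometry use case, the sketch below computes a simple function-word frequency profile per book. The word list and per-1,000-token normalization are illustrative choices, not the method used in the paper:

```python
from collections import Counter
from datasets import load_dataset

# Small illustrative set of function words; stylometric studies typically use many more
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it", "was", "but"]

corpus = load_dataset("contextlab/thompson-corpus")

for book in corpus['train']:
    words = book['text'].split()
    counts = Counter(words)
    # Relative frequency of each function word, per 1,000 tokens
    profile = {w: 1000 * counts[w] / len(words) for w in FUNCTION_WORDS}
    print(book['filename'], {w: round(f, 2) for w, f in profile.items()})
```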
Out-of-Scope Uses
- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)
Citation
If you use this dataset in your research, please cite:
```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:2510.21958},
  year={2025}
}
```
Additional Information
Dataset Curator
ContextLab, Dartmouth College
Licensing
MIT License - Free to use with attribution
Contact
- Paper & Code: https://github.com/ContextLab/llm-stylometry
- Issues: https://github.com/ContextLab/llm-stylometry/issues
- Contact: Jeremy R. Manning ([email protected])
Related Resources
Datasets for all 8 authors in the study are available under the contextlab organization on Hugging Face.