# ContextLab Herman Melville Corpus

## Dataset Description

This dataset contains works of Herman Melville (1819-1891), preprocessed for computational stylometry research. The texts were sourced from Project Gutenberg and cleaned for use in the paper "A Stylometric Application of Large Language Models" (Stropkay et al., 2025).

The corpus comprises 10 books by Herman Melville, including Moby-Dick, Bartleby the Scrivener, and Typee. All text has been converted to lowercase and stripped of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.
### Quick Stats

- Books: 10
- Total characters: 5,257,881
- Total words: 912,881 (approximate)
- Average book length: 525,788 characters
- Format: Plain text (.txt files)
- Language: English (lowercase)
## Dataset Structure

### Books Included

Each .txt file contains the complete text of one book:

| File | Title |
|---|---|
| 10712.txt | White Jacket; Or, The World on a Man-of-War |
| 11231.txt | Bartleby, the Scrivener: A Story of Wall-Street |
| 13720.txt | Mardi, and a voyage thither, Vol. 1 (of 2) |
| 13721.txt | Mardi, and a voyage thither, Vol. 2 (of 2) |
| 15.txt | Moby-Dick; or, The Whale |
| 15422.txt | Israel Potter: His Fifty Years of Exile |
| 21816.txt | The Confidence-Man: His Masquerade |
| 2694.txt | I and My Chimney |
| 28656.txt | Typee |
| 4045.txt | Omoo: Adventures in the South Seas |
### Data Fields

- filename: Project Gutenberg ID (e.g., 15.txt)
- title: Book title
- text: Complete book text (lowercase, cleaned)
### Data Format

All files are plain UTF-8 text:

- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text
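
A quick way to sanity-check these guarantees on a loaded record, using the same train split and text field that the usage examples below rely on:

```python
from datasets import load_dataset

corpus = load_dataset("contextlab/melville-corpus")
book = corpus['train'][0]

# Lowercase only: lowercasing the text should be a no-op
assert book['text'] == book['text'].lower()

# Paragraph breaks maintained: blank lines separate paragraphs
print("Has paragraph breaks:", "\n\n" in book['text'])

# Punctuation preserved: inspect the opening of the book
print(book['text'][:120])
```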
## Usage

### Load with the datasets library

```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/melville-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```
### Load a specific file

```python
from datasets import load_dataset

# Load a single book by filename (Gutenberg ID); 15.txt is Moby-Dick
dataset = load_dataset(
    "contextlab/melville-corpus",
    data_files="15.txt",
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```
### Download files directly

```python
from huggingface_hub import hf_hub_download

# Download one book; 15.txt is Moby-Dick
file_path = hf_hub_download(
    repo_id="contextlab/melville-corpus",
    filename="15.txt",
    repo_type="dataset",
)

with open(file_path, 'r', encoding='utf-8') as f:
    text = f.read()
```
### Use for training language models

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/melville-corpus")

# Tokenize; GPT-2 has no pad token, so reuse the end-of-text token
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(
    tokenize_function,
    batched=True,
    remove_columns=corpus['train'].column_names,
)

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Causal-LM collator pads batches and copies input_ids into labels
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=tokenized['train'],
)
trainer.train()
```
### Analyze text statistics

```python
from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/melville-corpus")

# Calculate per-book character counts
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```
## Dataset Creation

### Source Data

All texts were sourced from Project Gutenberg, a library of over 70,000 free eBooks in the public domain.

Project Gutenberg links:

- Books are identified by their Gutenberg ID numbers (filenames)
- Example: 15.txt corresponds to https://www.gutenberg.org/ebooks/15 (see the snippet below)
- All works are in the public domain
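
Because each filename is simply a Gutenberg ID plus a .txt extension, the source page for any book can be derived directly. A minimal sketch (the helper name is ours, not part of the dataset):

```python
# Map a corpus filename to its Project Gutenberg page.
# Assumes filenames follow the "<gutenberg_id>.txt" pattern used in this corpus.
def gutenberg_url(filename: str) -> str:
    book_id = filename.removesuffix(".txt")  # Python 3.9+
    return f"https://www.gutenberg.org/ebooks/{book_id}"

print(gutenberg_url("15.txt"))  # -> https://www.gutenberg.org/ebooks/15 (Moby-Dick)
```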
### Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing:
- Header/footer removal: Project Gutenberg license text and metadata removed
- Lowercase conversion: All text converted to lowercase for stylometry
- Chapter heading removal: Chapter titles and numbering removed
- Non-narrative text removal: Tables of contents, dedications, etc. removed
- Encoding normalization: Converted to UTF-8
- Structure preservation: Paragraph breaks and punctuation maintained
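
The exact cleaning code is linked below; purely as an illustration, a minimal sketch of the header/footer, chapter-heading, and lowercase steps might look like the following. The Gutenberg marker patterns and the chapter regex are assumptions, not the paper's actual implementation:

```python
import re

def clean_gutenberg_text(raw: str) -> str:
    """Illustrative cleanup sketch; the real pipeline may differ."""
    # Header/footer removal: keep only the text between the standard
    # '*** START OF ... ***' and '*** END OF ... ***' markers
    start = re.search(r"\*\*\* ?START OF.*?\*\*\*", raw)
    end = re.search(r"\*\*\* ?END OF.*?\*\*\*", raw)
    body = raw[start.end():end.start()] if start and end else raw

    # Chapter heading removal: drop lines like 'CHAPTER XII.' or 'CHAPTER 12'
    body = re.sub(r"(?im)^\s*chapter\s+[ivxlcdm\d]+\.?\s*$", "", body)

    # Lowercase conversion; punctuation and paragraph breaks are untouched
    return body.lower().strip()
```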
Why lowercase? Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.
Preprocessing code: Available at https://github.com/ContextLab/llm-stylometry
## Considerations for Using This Dataset

### Known Limitations

- Historical language: Reflects 19th-century American vocabulary, grammar, and cultural context
- Lowercase only: All text converted to lowercase (not suitable for case-sensitive analysis)
- Incomplete corpus: May not include all of Herman Melville's writings (only public domain works on Gutenberg)
- Cleaning artifacts: Some formatting irregularities may remain from the Gutenberg sources
- Public domain only: Limited to works published before copyright restrictions
### Intended Use Cases

- Stylometry research: Authorship attribution, style analysis
- Language modeling: Training author-specific models
- Literary analysis: Computational study of Herman Melville's writing
- Historical NLP: 19th-century American language patterns
- Educational: Teaching computational text analysis
### Out-of-Scope Uses

- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)
## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:2510.21958},
  year={2025}
}
```
## Additional Information

### Dataset Curator

ContextLab, Dartmouth College
### Licensing

MIT License - Free to use with attribution
### Contact

- Paper & Code: https://github.com/ContextLab/llm-stylometry
- Issues: https://github.com/ContextLab/llm-stylometry/issues
- Contact: Jeremy R. Manning ([email protected])
## Related Resources

Explore datasets for all 8 authors in the study.