
CHRONOBERG: Capturing Language Evolution and Temporal Awareness in Foundation Models

🤗 Dataset | 🐙 GitHub | 📖 arXiv

We introduce CHRONOBERG, a temporally structured corpus of English book texts spanning 250 years, curated from Project Gutenberg and enriched with a variety of temporal annotations. We also introduce historically calibrated affective Valence-Arousal-Dominance (VAD) lexicons to support temporally grounded interpretation. With these lexicons, we demonstrate the need for modern LLM-based tools to better situate their detection of discriminatory language and their contextualization of sentiment across time periods. We further show that language models trained sequentially on CHRONOBERG struggle to encode diachronic shifts in meaning, emphasizing the need for temporally aware training and evaluation pipelines and positioning CHRONOBERG as a scalable resource for the study of linguistic change and temporal generalization.

Disclaimer: This repository and dataset include language and displayed samples that some readers may find offensive.

Dataset

Dataset Catalog:

Load Dataset

from datasets import load_dataset

Chronoberg_raw = load_dataset("spaul25/Chronoberg", data_files="dataset/Chronoberg_raw.jsonl") ## Raw
Chronoberg_preprocessed = load_dataset("spaul25/Chronoberg", data_files="dataset/Chronoberg_preprocessed.jsonl") ## Pre-processed
Chronoberg_annotated = load_dataset("spaul25/Chronoberg", data_files="dataset/Chronoberg_annotated.jsonl") ## Annotated
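
The loaded objects are standard DatasetDict instances; data loaded via data_files lands in a "train" split by default, so a record can be inspected directly to see the released JSONL schema:

# Peek at one annotated record.
print(Chronoberg_annotated["train"][0])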

Pretrained Checkpoints: To construct VAD lexicons on your own, we have also made available Word2Vec models pretrained on the entire dataset and on 50-year time-interval slices of the dataset; a loading sketch follows the table below.

Model Type   1750-99         1800-49         1850-99         1900-49         1950-99
word2vec     word2vec_1750   word2vec_1800   word2vec_1850   word2vec_1900   word2vec_1950
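
A minimal sketch of loading one interval-specific checkpoint with gensim and huggingface_hub, assuming the checkpoints are stored in gensim's native Word2Vec format; the file name below (word2vec_1850.model) is an assumption, so check the repository's file listing for the exact paths:

from gensim.models import Word2Vec
from huggingface_hub import hf_hub_download

# Download one interval-specific checkpoint (file name is hypothetical).
path = hf_hub_download(
    repo_id="spaul25/Chronoberg",
    filename="word2vec_1850.model",
    repo_type="dataset",
)

# Load the model and inspect nearest neighbours in 1850-99 usage;
# "awful" is a classic example of a word whose valence has shifted.
model = Word2Vec.load(path)
print(model.wv.most_similar("awful", topn=5))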

Recommended Dataset Splits

We have also made available the training and test sets needed to reproduce the LLM experiments in our paper; a loading sketch is shown below. More ways to produce train and test splits can be found in our GitHub repository.
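
A minimal sketch of loading such splits with the datasets library; the file paths below are hypothetical placeholders, so consult the dataset catalog for the actual split files:

from datasets import load_dataset

# Paths are assumptions about where the released splits live.
splits = load_dataset(
    "spaul25/Chronoberg",
    data_files={"train": "dataset/train.jsonl", "test": "dataset/test.jsonl"},
)
print(splits["train"][0])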

Main Results

Here are a few of the main results from our paper.

A comparison of all continual learning strategies used to train an LLM sequentially on CHRONOBERG can be found below:

Method          Perplexity   Forward Gen.   Best Case         Worst Case
Sequential FT   34% ↑        33% ↑          4.58 (1750-99)    6.64 (1950-2000)
EWC             12% ↑        29% ↑          4.65 (1800-49)    6.77 (1950-2000)
LoRA            15% ↑        27% ↑          4.48 (1850-99)    6.19 (1950-2000)

Lexical Analysis

We have used our lexicons to analyze words whose valence has shifted from positive to negative or from negative to positive. A few instances of such words are shown below, and a sketch of the comparison follows the figure.

(Figure: example words whose valence shifted across time intervals.)
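
A minimal sketch of this kind of shift analysis, assuming the historical VAD lexicons are exported as CSV files with "word" and "valence" columns; the file names and column names here are assumptions, so see the GitHub repository for the released lexicon format:

import pandas as pd

# Hypothetical lexicon exports for two 50-year intervals.
early = pd.read_csv("vad_lexicon_1800.csv").set_index("word")
late = pd.read_csv("vad_lexicon_1950.csv").set_index("word")

# Valence difference for words present in both intervals.
common = early.index.intersection(late.index)
shift = (late.loc[common, "valence"] - early.loc[common, "valence"]).sort_values()

print("Largest positive-to-negative shifts:", shift.head(5).to_dict())
print("Largest negative-to-positive shifts:", shift.tail(5).to_dict())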

How to cite us

@misc{hegde2025chronobergcapturinglanguageevolution,
     title={CHRONOBERG: Capturing Language Evolution and Temporal Awareness in Foundation Models}, 
     author={Niharika Hegde and Subarnaduti Paul and Lars Joel-Frey and Manuel Brack and Kristian Kersting and Martin Mundt and Patrick Schramowski},
     year={2025},
     eprint={2509.22360},
     archivePrefix={arXiv},
     primaryClass={cs.CL},
     url={https://arxiv.org/abs/2509.22360}, 
}