---
license: odc-by
task_categories:
  - text-classification
  - question-answering
  - zero-shot-classification
  - text-generation
  - text2text-generation
  - sentence-similarity
  - summarization
  - feature-extraction
language:
  - en
pretty_name: Tempora
size_categories:
  - 1K<n<10K
tags:
  - medical
  - climate
  - art
  - music
  - legal
  - chemistry
  - biology
  - finance
configs:
  - config_name: tempora-0325
    data_files:
      - split: train
        path: tempora-0325/train-*
  - config_name: tempora-0325-raw
    data_files:
      - split: train
        path: tempora-0325-raw/train-*
  - config_name: tempora-0325B
    data_files:
      - split: train
        path: tempora-0325B/train-*
dataset_info:
  - config_name: tempora-0325B
    features:
      - name: id
        dtype: string
      - name: source
        dtype: string
      - name: extracted_content
        dtype: string
    splits:
      - name: train
        num_bytes: 1505882.7022682622
        num_examples: 250
    download_size: 644917
    dataset_size: 1505882.7022682622
  - config_name: tempora-0325
    features:
      - name: id
        dtype: string
      - name: source
        dtype: string
      - name: extracted_content
        dtype: string
    splits:
      - name: train
        num_bytes: 25873346.79533116
        num_examples: 5599
    download_size: 15080523
    dataset_size: 25873346.79533116
  - config_name: tempora-0325-raw
    features:
      - name: id
        dtype: string
      - name: source
        dtype: string
      - name: raw
        dtype: string
      - name: extracted_content
        dtype: string
      - name: extracted_content_stage_2
        dtype: string
    splits:
      - name: train
        num_bytes: 4193359908
        num_examples: 7368
    download_size: 1108481319
    dataset_size: 4193359908
---

# Tempora


A contemporary dataset of 7,368 real-world documents published after March 1, 2025, curated for testing the temporal grounding of Large Language Models.

## Table of Contents

  1. Usage
  2. Dataset Overview
  3. Why a Contemporary Dataset?
  4. Scope & Diversity
  5. Evaluating Parametric vs. Contextual Knowledge
  6. Methodological Longevity
  7. Dataset Structure
  8. Licensing
  9. Citation
  10. Acknowledgments

## Usage

Below are examples of how to load Tempora-0325 with the Hugging Face `datasets` library. Adjust the `config_name` as needed.

### Loading with `datasets`

```python
from datasets import load_dataset

# Load the balanced subset
ds_balanced = load_dataset("sumuks/tempora", name="tempora-0325B", split="train")

# Load the main unbalanced corpus
ds_full = load_dataset("sumuks/tempora", name="tempora-0325", split="train")

# Load the raw version
ds_raw = load_dataset("sumuks/tempora", name="tempora-0325-raw", split="train")
```
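
The raw configuration is roughly 4 GB, so if you only need a single pass over it, the `datasets` streaming mode avoids materializing the full download; a minimal sketch:

```python
from datasets import load_dataset

# Stream the raw configuration instead of downloading ~4 GB up front.
ds_raw_stream = load_dataset(
    "sumuks/tempora",
    name="tempora-0325-raw",
    split="train",
    streaming=True,
)

# Peek at the first record; streaming datasets are iterated, not indexed.
first = next(iter(ds_raw_stream))
print(first["id"], first["source"])
```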

### Dataset Example

A sample entry from `tempora-0325` might look like:

```python
{
  'id': 'QChCKP-ecAD',
  'source': 'https://www.theguardian.com/sport/2025/mar/09/france-captain-antoine-dupont-rugby-union-injury',
  'extracted_content': "# Antoine Dupont faces long spell out with ruptured cruciate knee ligaments\nAntoine Dupont, France’s talismanic captain and the player ..."
}
```

## Dataset Overview

Recent advances in large language models (LLMs) have highlighted a critical gap in testing temporal and factual grounding: models are often pretrained on massive (and sometimes outdated) corpora, making it difficult to discern whether they rely on newly provided textual evidence or memorize stale facts. Tempora-0325 addresses this challenge by presenting a set of 7,368 documents published after March 1, 2025, ensuring that the vast majority of pretrained models have not seen this data during training.

*Figure: Distribution of character lengths within Tempora-0325*
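
You can reproduce a summary of this length distribution directly from the `extracted_content` field; a minimal sketch:

```python
import statistics

from datasets import load_dataset

ds = load_dataset("sumuks/tempora", name="tempora-0325", split="train")

# Character length of each processed document.
lengths = [len(text) for text in ds["extracted_content"]]

print(f"min={min(lengths)}  median={statistics.median(lengths):.0f}  max={max(lengths)}")
```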


## Why a Contemporary Dataset?

When LLMs are prompted with documents containing up-to-date facts, regulations, or events, it becomes crucial to separate genuine, context-grounded outputs from those derived purely from parametric memory. Tempora-0325 focuses on this objective:

- **Temporal testing**: provides data published exclusively after March 1, 2025.
- **Unseen textual evidence**: ensures that most existing models’ pretraining does not include these documents.
- **Detection of stale knowledge**: encourages models to rely on newly provided information, or risk inconsistencies that reveal outdated parametric knowledge.

## Scope & Diversity

We collected 7,368 publicly available documents from:

- Government and corporate announcements
- Legal and medical reports
- Sports updates, news articles, and blogs
- Miscellaneous informational sites

Each source was verified to have been published after March 1, 2025, with manual checks to confirm the authenticity of time-sensitive information. Two key subsets are made available (a quick check of the domain distribution is sketched after this list):

1. **Unbalanced Full Corpus (`tempora-0325`)**: mirrors the real-world domain distribution.
2. **Balanced Subset (`tempora-0325B`)**: offers uniform coverage across eight categories (government, corporate, legal, medical, sports, news, blogs, miscellaneous) for controlled experimentation.
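
Since each record’s `source` field holds the document URL (as in the sample entry above), tallying hostnames gives a rough view of the corpus’s domain skew; a minimal sketch:

```python
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

ds = load_dataset("sumuks/tempora", name="tempora-0325", split="train")

# Count documents per source hostname to see the real-world domain distribution.
domains = Counter(urlparse(url).netloc for url in ds["source"])

for domain, count in domains.most_common(10):
    print(f"{count:5d}  {domain}")
```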

## Evaluating Parametric vs. Contextual Knowledge

A central motivation behind Tempora-0325 is enabling deeper analysis of how, or even whether, an LLM updates its internal knowledge when presented with truly novel or conflicting data. By isolating content never encountered in typical pretraining corpora, the dataset can be used to (see the sketch after this list):

- **Test retrieval-augmented generation**: determine whether a model is using new evidence from a document or relying on outdated internal parameters.
- **Assess summarization and question-generation tasks**: check whether newly introduced information is processed accurately or overshadowed by memorized facts.
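
One simple protocol this enables is a closed-book vs. open-book comparison: pose the same question with and without the Tempora document in context and check whether the answers diverge. A minimal, model-agnostic sketch; `ask` is a hypothetical stand-in for whatever LLM client you use:

```python
from datasets import load_dataset

ds = load_dataset("sumuks/tempora", name="tempora-0325", split="train")
doc = ds[0]
question = "..."  # a question answerable only from this post-March-2025 document

# Closed-book: the model can rely only on parametric memory.
closed_book = f"Answer the question.\n\nQuestion: {question}"

# Open-book: the model is handed the contemporary document as evidence.
open_book = (
    "Answer the question using only the document below.\n\n"
    f"Document:\n{doc['extracted_content']}\n\nQuestion: {question}"
)

# `ask` is a hypothetical LLM call; diverging answers suggest the model is
# grounding on the provided context rather than on stale parametric knowledge.
# closed_answer, open_answer = ask(closed_book), ask(open_book)
```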

## Methodological Longevity

While Tempora-0325 is a snapshot of post-March-2025 knowledge, the data-collection methodology is open-sourced so that future variants (e.g., Tempora-0727) can be built over time. This systematic refresh keeps the dataset novel for each new generation of LLMs, preserving its effectiveness for detecting when models override new information with stale, parametric knowledge.


## Dataset Structure

### Available Configurations

This repository offers three configurations, each corresponding to a different subset or processing stage:

- **`tempora-0325B`**: a balanced subset of 250 training documents, with equal coverage of the eight domains for controlled experiments.
- **`tempora-0325`**: the full, unbalanced corpus of 5,599 training documents.
- **`tempora-0325-raw`**: the raw version of all 7,368 documents, with minimal processing for advanced or custom use cases.

### Data Fields

Depending on the configuration, you will see some or all of the following fields (a sketch comparing the extraction stages follows this list):

- `id` (string): a unique identifier for each document.
- `source` (string): the source URL or category (e.g., legal, medical, sports), if available.
- `raw` (string): unprocessed text content (in `tempora-0325-raw` only).
- `extracted_content` (string): the main processed text of each document.
- `extracted_content_stage_2` (string): a second-stage content extraction (in `tempora-0325-raw` only).
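
For the raw configuration, these fields let you compare the processing stages side by side; a small sketch, streaming to avoid the ~4 GB download (the `or ""` guards are an assumption in case a stage is empty for some rows):

```python
from datasets import load_dataset

ds_raw = load_dataset(
    "sumuks/tempora", name="tempora-0325-raw", split="train", streaming=True
)

# Compare the size of each extraction stage for the first five documents.
for i, doc in enumerate(ds_raw):
    print(
        doc["id"],
        len(doc["raw"] or ""),
        len(doc["extracted_content"] or ""),
        len(doc["extracted_content_stage_2"] or ""),
    )
    if i == 4:
        break
```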

### Splits and Statistics

| Config             | # Documents | Split | Size (approx.) |
| ------------------ | ----------- | ----- | -------------- |
| `tempora-0325`     | 5,599       | train | ~25.9 MB       |
| `tempora-0325B`    | 250         | train | ~1.5 MB        |
| `tempora-0325-raw` | 7,368       | train | ~4.19 GB       |
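
These figures can be checked without downloading the data, since `load_dataset_builder` reads only the hub metadata; a minimal sketch:

```python
from datasets import load_dataset_builder

for config in ["tempora-0325", "tempora-0325B", "tempora-0325-raw"]:
    info = load_dataset_builder("sumuks/tempora", config).info
    train = info.splits["train"]
    print(f"{config:18s} {train.num_examples:6,d} examples  ~{info.dataset_size / 1e6:,.1f} MB")
```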

## Licensing

This dataset is released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/).
Use of this dataset is also subject to the terms and conditions laid out by each respective source from which documents were collected.


## Citation

If you use Tempora-0325 in your research or application, please cite:

```bibtex
@misc{shashidhar2025yourbencheasycustomevaluation,
      title={YourBench: Easy Custom Evaluation Sets for Everyone},
      author={Sumuk Shashidhar and Clémentine Fourrier and Alina Lozovskaya and Thomas Wolf and Gokhan Tur and Dilek Hakkani-Tür},
      year={2025},
      eprint={2504.01833},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.01833},
}
```

## Acknowledgments

Special thanks to all domain experts and contributors who helped verify publication dates and authenticity. By regularly refreshing Tempora with new data, we hope to advance the understanding of how modern language models adapt to truly novel, time-sensitive content.


*(Last updated: March 17, 2025)*