---
language:
- en
datasets:
- kl3m-derived
license: cc-by-4.0
tags:
- kl3m
- kl3m-derived
- legal
- sbd
- sentence-boundary-detection
- paragraph-boundary-detection
- legal-nlp
- benchmark
- evaluation
---
# ALEA Legal Benchmark: Sentence and Paragraph Boundaries

> **Note**: This dataset is derived from the ALEA Institute's KL3M Data Project. It builds upon those copyright-clean training resources by adding boundary annotations for sentence and paragraph detection.
## Description
This dataset provides a comprehensive benchmark for sentence and paragraph boundary detection in legal documents. It was developed to address the unique challenges legal text poses for standard natural language processing tools. Legal documents contain specialized patterns that confound general-purpose sentence boundary detectors, including legal citations (e.g., United States v. Carroll Towing Co., 159 F.2d 169), specialized abbreviations (e.g., "Corp.", "Inc.", "U.S.C."), legal sources, numbered lists, and complex sentence structures.
This dataset is particularly valuable for improving retrieval-augmented generation (RAG) systems in legal applications, where precise sentence boundaries are crucial for preserving logical connections between concepts and preventing reasoning failures. Because a single misplaced boundary can fragment a citation or split a holding from its reasoning, even small gains in boundary detection precision can yield outsized reductions in context fragmentation errors downstream.
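To illustrate why general-purpose tools struggle with legal text, here is a minimal sketch (the regex and example sentence below are our own illustration, not part of the benchmark): a naive period-based splitter fragments a legal citation like the one mentioned above.

```python
import re

# Two real sentences, one containing a case citation with "v." and a reporter cite
text = ("The court applied United States v. Carroll Towing Co., "
        "159 F.2d 169. The standard remains influential.")

# Naive splitter: break after a period followed by whitespace and a capital/digit
naive = re.split(r"(?<=\.)\s+(?=[A-Z0-9])", text)

# The abbreviation "v." triggers a spurious boundary inside the citation,
# so two sentences become three fragments
for piece in naive:
    print(repr(piece))
```

A detector trained or evaluated on this benchmark should instead keep the citation intact and find exactly two sentences.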
For more information about the original KL3M Data Project, please visit the GitHub repository or refer to the KL3M Data Project paper.
## Derivation Information
- Original Dataset: KL3M legal documents from various sources
- Derivation Method: Manual and semi-automated annotation via LLM
- Derivation Purpose: To provide high-quality annotations for sentence and paragraph boundaries in legal text, facilitating the development and evaluation of specialized boundary detection tools for legal documents
## Derivation Process
The dataset was created through a multi-stage annotation process:

- Source documents were extracted from the KL3M corpus, which includes public domain legal materials
- Random segments of legal text were selected from each document using a controlled token-length window (32-128 tokens)
- A generate-judge-correct framework was employed:
  - **Generate**: A large language model added `<|sentence|>` and `<|paragraph|>` boundary markers to the text
  - **Judge**: A second LLM verified the correctness of the annotations, with strict validation to ensure:
    - All original text was preserved exactly (including whitespace and formatting)
    - Boundary markers were placed correctly according to legal conventions
  - **Correct**: When needed, a third LLM pass corrected any incorrectly placed boundaries
- Additional programmatic validation ensured character-level fidelity between input and annotated output
- The resulting dataset was reviewed for quality and consistency by legal experts
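The character-level fidelity check in the pipeline above can be sketched as follows (the function names are illustrative, not the project's actual validation code): stripping the two marker tokens from the annotated output must reproduce the input byte-for-byte.

```python
SENT, PARA = "<|sentence|>", "<|paragraph|>"

def strip_markers(annotated: str) -> str:
    """Remove boundary markers, leaving the underlying text untouched."""
    return annotated.replace(SENT, "").replace(PARA, "")

def is_faithful(input_text: str, annotated: str) -> bool:
    """Character-level fidelity: annotation may only insert markers."""
    return strip_markers(annotated) == input_text

# A correct annotation round-trips exactly, including whitespace
src = "First point.\n\nSecond point."
ann = "First point.<|sentence|><|paragraph|>\n\nSecond point."
print(is_faithful(src, ann))  # True

# Any altered whitespace or dropped character fails the check
print(is_faithful(src, ann.replace("\n\n", "\n")))  # False
```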
This dataset was used to develop and evaluate the NUPunkt and CharBoundary libraries described in arXiv:2504.04131, which achieved 91.1% precision and the highest F1 scores (0.782) among tested methods for legal sentence boundary detection.
## Dataset Details

- Format: JSON files with input text and annotated output
- License: CC BY 4.0
- Size:
  - Examples: 45,739
  - Sentence tags: 107,346 (avg. 2.35 per example)
  - Paragraph tags: 97,667 (avg. 2.14 per example)
  - Bytes: 37,877,859
  - Total characters: approx. 15.2M (excluding tags)
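The per-example averages above follow from the raw counts; a small helper (hypothetical, not part of the dataset tooling) shows how to recompute such statistics over any list of annotated outputs.

```python
def tag_stats(outputs):
    """Aggregate sentence/paragraph tag counts over annotated output strings."""
    n = len(outputs)
    sent = sum(o.count("<|sentence|>") for o in outputs)
    para = sum(o.count("<|paragraph|>") for o in outputs)
    return {
        "examples": n,
        "sentence_tags": sent,
        "paragraph_tags": para,
        "avg_sentence_tags": round(sent / n, 2),
        "avg_paragraph_tags": round(para / n, 2),
    }

# Sanity check against the published totals: 107,346 / 45,739 ~ 2.35
print(round(107_346 / 45_739, 2))  # 2.35
print(round(97_667 / 45_739, 2))   # 2.14
```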
## Dataset Structure

Each example in the dataset contains:

```json
{
  "source_identifier": "s3://data.kl3m.ai/documents/cap/500940.json",
  "input": "Avere there. Coupons as they became due Avere cashed to the amount of $315 and this sum marked \" the property of Dr. William Gibson \" was placed in the envelope with the bonds.\n\nOn June 3d 1880 the bank failed and an agent of the government named Young took possession as temporary receiver, until",
  "output": "Avere there.<|sentence|> Coupons as they became due Avere cashed to the amount of $315 and this sum marked \" the property of Dr. William Gibson \" was placed in the envelope with the bonds.<|sentence|><|paragraph|>\n\nOn June 3d 1880 the bank failed and an agent of the government named Young took possession as temporary receiver, until"
}
```
- `source_identifier`: Unique identifier for the source document
- `input`: Original text without boundary annotations
- `output`: Same text with explicit sentence and paragraph boundary markers
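For training character-level boundary classifiers, the marker format can be converted into (label, offset) pairs over the clean text. The helper below is a sketch we provide for illustration, not part of the dataset tooling:

```python
MARKERS = {"<|sentence|>": "sentence", "<|paragraph|>": "paragraph"}

def markers_to_offsets(annotated: str):
    """Strip boundary markers, recording each marker's position in the clean text."""
    clean = []
    offsets = []
    i = 0
    while i < len(annotated):
        for marker, label in MARKERS.items():
            if annotated.startswith(marker, i):
                # Marker applies at the current length of the clean text
                offsets.append((label, len(clean)))
                i += len(marker)
                break
        else:
            clean.append(annotated[i])
            i += 1
    return "".join(clean), offsets

clean, offsets = markers_to_offsets("One.<|sentence|> Two.<|sentence|><|paragraph|>\n\nThree.")
print(clean)    # "One. Two.\n\nThree."
print(offsets)  # [('sentence', 4), ('sentence', 9), ('paragraph', 9)]
```

Note that, as in the example record above, a `<|paragraph|>` marker can share an offset with a `<|sentence|>` marker when a sentence end coincides with a paragraph break.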
## Usage Example

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("alea-institute/alea-legal-benchmark-sentence-paragraph-boundaries")

# Access a sample
sample = dataset["train"][0]
print(f"Original: {sample['input'][:100]}...")
print(f"Annotated: {sample['output'][:100]}...")

# Count sentences in a document
def count_sentences(text):
    return text.count("<|sentence|>")

sentences = count_sentences(dataset["train"][0]["output"])
print(f"Number of sentences: {sentences}")

# Use for training or evaluating boundary detection models
def prepare_training_data(dataset):
    inputs = dataset["input"]
    outputs = dataset["output"]
    # Process (tokenize, align labels, etc.) for your specific model...
    return list(zip(inputs, outputs))
```
## Applications
This dataset enables:
- Training and evaluating sentence boundary detection models for legal text
- Developing paragraph segmentation tools for legal documents
- Benchmarking existing NLP tools on challenging legal text
- Improving information retrieval and extraction from legal corpora
- Enhancing retrieval-augmented generation (RAG) systems for legal applications
## Related Libraries
For specialized sentence boundary detection in legal documents, see:
- NUPunkt: Achieves 91.1% precision while processing 10 million characters per second with modest memory requirements
- CharBoundary: Provides balanced precision-recall tradeoffs with the highest F1 scores among tested methods
Both libraries can be interactively tested at https://sentences.aleainstitute.ai/.
You can easily install these libraries via pip:
```bash
pip install nupunkt
pip install charboundary
```
Example usage with this dataset:
```python
from datasets import load_dataset

import nupunkt
import charboundary

# Load dataset
dataset = load_dataset("alea-institute/alea-legal-benchmark-sentence-paragraph-boundaries")

# Initialize detectors
np_detector = nupunkt.NUPunkt()
cb_detector = charboundary.CharBoundary()

# Compare detections with ground truth
for example in dataset["train"]:
    # Ground truth from dataset
    true_boundaries = example["output"]

    # Automated detection
    np_boundaries = np_detector.segment_text(example["input"])
    cb_boundaries = cb_detector.segment_text(example["input"])

    # Compare and evaluate
    # ...
```
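The elided comparison step can be scored as boundary-level precision, recall, and F1. The sketch below assumes each detector returns a list of segments whose concatenation equals the input text (function names are illustrative):

```python
def segment_end_offsets(segments):
    """Character offsets where each segment ends in the concatenated text."""
    ends, pos = [], 0
    for seg in segments:
        pos += len(seg)
        ends.append(pos)
    return ends

def boundary_prf(true_segments, pred_segments):
    """Precision/recall/F1 over exact boundary-offset matches."""
    true_set = set(segment_end_offsets(true_segments))
    pred_set = set(segment_end_offsets(pred_segments))
    tp = len(true_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(true_set) if true_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy example: the prediction contains one spurious split
truth = ["One. ", "Two."]
pred = ["One", ". ", "Two."]
print(boundary_prf(truth, pred))  # (0.666..., 1.0, 0.8)
```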
## Legal Basis

This dataset maintains the same copyright compliance as the original KL3M Data Project: LLM annotation is used solely to insert `<|sentence|>` and `<|paragraph|>` tokens into otherwise unmodified text. Users should nevertheless review their own position on output-use restrictions related to this data.
## Papers
For more information about this dataset and related research, please refer to:
- The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models
- Precise Legal Sentence Boundary Detection for Retrieval at Scale: NUPunkt and CharBoundary
## Citation
If you use this dataset in your research, please cite both this dataset and the original KL3M Data Project:
```bibtex
@misc{bommarito2025legalsbd,
  title={Precise Legal Sentence Boundary Detection for Retrieval at Scale: NUPunkt and CharBoundary},
  author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
  year={2025},
  eprint={2504.04131},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@misc{bommarito2025kl3mdata,
  title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
  author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
  year={2025},
  eprint={2504.07854},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## About ALEA
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at https://aleainstitute.ai/.