---
language:
- en
datasets:
- kl3m-derived
license: cc-by-4.0
tags:
- kl3m
- kl3m-derived
- legal
- sbd
- sentence-boundary-detection
- paragraph-boundary-detection
- legal-nlp
- benchmark
- evaluation
---
# ALEA Legal Benchmark: Sentence and Paragraph Boundaries
> **Note**: This dataset is derived from the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project. It builds upon the copyright-clean training resources while adding specific boundary annotations for sentence and paragraph detection.
## Description
This dataset provides a comprehensive benchmark for sentence and paragraph boundary detection in legal documents. It was developed to address the unique challenges legal text poses for standard natural language processing tools. Legal documents contain specialized patterns that confound general-purpose sentence boundary detectors, including legal citations (e.g., *United States v. Carroll Towing Co.*, 159 F.2d 169), specialized abbreviations (e.g., "Corp.", "Inc.", "U.S.C."), legal sources, numbered lists, and complex sentence structures.
This dataset is particularly valuable for improving retrieval-augmented generation (RAG) systems used in legal applications, where precise sentence boundaries are crucial for preserving logical connections between concepts and preventing reasoning failures. Because segmentation errors propagate through downstream chunking and retrieval, even small gains in boundary detection precision yield outsized reductions in context fragmentation errors.
For more information about the original KL3M Data Project, please visit the [GitHub repository](https://github.com/alea-institute/kl3m-data) or refer to the [KL3M Data Project paper](https://arxiv.org/abs/2504.07854).
## Derivation Information
- **Original Dataset**: KL3M legal documents from various sources
- **Derivation Method**: Manual and semi-automated annotation via LLM
- **Derivation Purpose**: To provide high-quality annotations for sentence and paragraph boundaries in legal text, facilitating the development and evaluation of specialized boundary detection tools for legal documents
## Derivation Process
The dataset was created through a sophisticated multi-stage annotation process:
1. Source documents were extracted from the KL3M corpus, which includes public domain legal materials
2. Random segments of legal text were selected from each document using a controlled token-length window (between 32 and 128 tokens)
3. A generate-judge-correct framework was employed:
- **Generate**: A large language model was used to add `<|sentence|>` and `<|paragraph|>` boundary markers to the text
- **Judge**: A second LLM verified the correctness of annotations, with strict validation to ensure:
- All original text was preserved exactly (including whitespace and formatting)
- Boundary markers were placed correctly according to legal conventions
- **Correct**: When needed, a third LLM phase corrected any incorrectly placed boundaries
4. Additional programmatic validation ensured character-level fidelity between input and annotated output
5. The resulting dataset was reviewed for quality and consistency by legal experts
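The character-level fidelity check in step 4 can be sketched as follows (the function name is illustrative, not from the original annotation pipeline): stripping both marker tokens from the annotated output must reproduce the input exactly, byte for byte.

```python
def validate_fidelity(input_text: str, output_text: str) -> bool:
    """Return True if removing boundary markers from the annotated
    output reproduces the original input text exactly."""
    stripped = output_text.replace("<|sentence|>", "").replace("<|paragraph|>", "")
    return stripped == input_text

example = {
    "input": "First sentence. Second sentence.",
    "output": "First sentence.<|sentence|> Second sentence.<|sentence|>",
}
print(validate_fidelity(example["input"], example["output"]))  # True
```

Any annotation that fails this check (e.g., an LLM silently normalizing whitespace) would be rejected or sent to the correction phase.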
This dataset was used to develop and evaluate the NUPunkt and CharBoundary libraries described in [arXiv:2504.04131](https://arxiv.org/abs/2504.04131): NUPunkt achieved 91.1% precision, and CharBoundary achieved the highest F1 scores (0.782) among the methods tested for legal sentence boundary detection.
## Dataset Details
- **Format**: JSON files with input text and annotated output
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **Size**:
- Examples: 45,739
- Sentence tags: 107,346 (avg. 2.35 per example)
- Paragraph tags: 97,667 (avg. 2.14 per example)
- Bytes: 37,877,859
- Total characters: Approx. 15.2M (excluding tags)
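The tag counts and per-example averages above can be recomputed with a single pass over the examples (a minimal sketch; `tag_stats` is an illustrative helper, and the inline sample stands in for the full dataset):

```python
def tag_stats(examples: list[dict]) -> dict:
    """Count sentence and paragraph boundary tags across examples
    and report per-example averages."""
    n = len(examples)
    sent = sum(ex["output"].count("<|sentence|>") for ex in examples)
    para = sum(ex["output"].count("<|paragraph|>") for ex in examples)
    return {
        "examples": n,
        "sentence_tags": sent,
        "paragraph_tags": para,
        "avg_sentences": sent / n,
        "avg_paragraphs": para / n,
    }

sample = [
    {"output": "A.<|sentence|> B.<|sentence|><|paragraph|>\n\nC."},
    {"output": "D.<|sentence|>"},
]
print(tag_stats(sample))
```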
## Dataset Structure
Each example in the dataset contains:
```python
{
    "source_identifier": "s3://data.kl3m.ai/documents/cap/500940.json",
    "input": "Avere there. Coupons as they became due Avere cashed to the amount of $315 and this sum marked \" the property of Dr. William Gibson \" was placed in the envelope with the bonds.\n\nOn June 3d 1880 the bank failed and an agent of the government named Young took possession as temporary receiver, until",
    "output": "Avere there.<|sentence|> Coupons as they became due Avere cashed to the amount of $315 and this sum marked \" the property of Dr. William Gibson \" was placed in the envelope with the bonds.<|sentence|><|paragraph|>\n\nOn June 3d 1880 the bank failed and an agent of the government named Young took possession as temporary receiver, until"
}
```
- `source_identifier`: Unique identifier for the source document
- `input`: Original text without boundary annotations
- `output`: Same text with explicit sentence and paragraph boundary markers
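Because the boundaries are marked inline, the annotated `output` can be split into gold-standard sentence strings directly (a minimal sketch; `split_sentences` is an illustrative name, not part of any library):

```python
def split_sentences(annotated: str) -> list[str]:
    """Split annotated text on sentence markers, dropping paragraph
    markers and empty fragments."""
    text = annotated.replace("<|paragraph|>", "")
    parts = text.split("<|sentence|>")
    return [p for p in (part.strip() for part in parts) if p]

out = "Avere there.<|sentence|> Coupons were cashed.<|sentence|><|paragraph|>\n\nOn June 3d 1880 the bank failed."
print(split_sentences(out))
# ['Avere there.', 'Coupons were cashed.', 'On June 3d 1880 the bank failed.']
```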
## Usage Example
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("alea-institute/alea-legal-benchmark-sentence-paragraph-boundaries")

# Access a sample
sample = dataset["train"][0]
print(f"Original: {sample['input'][:100]}...")
print(f"Annotated: {sample['output'][:100]}...")

# Count sentences in a document
def count_sentences(text):
    return text.count("<|sentence|>")

sentences = count_sentences(dataset["train"][0]["output"])
print(f"Number of sentences: {sentences}")

# Use for training or evaluating boundary detection models
def prepare_training_data(dataset):
    inputs = dataset["input"]
    outputs = dataset["output"]
    # Process (tokenize, align labels, etc.) for your specific model...
    return list(zip(inputs, outputs))
```
## Applications
This dataset enables:
1. Training and evaluating sentence boundary detection models for legal text
2. Developing paragraph segmentation tools for legal documents
3. Benchmarking existing NLP tools on challenging legal text
4. Improving information retrieval and extraction from legal corpora
5. Enhancing retrieval-augmented generation (RAG) systems for legal applications
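For the benchmarking applications above, predicted and gold boundaries can be compared as character offsets in the unannotated text, with precision, recall, and F1 computed by set intersection (a sketch of one reasonable protocol, not necessarily the exact evaluation used in the papers):

```python
def boundary_offsets(annotated: str, marker: str = "<|sentence|>") -> set[int]:
    """Return character offsets (in the marker-free text) at which
    each boundary marker occurs."""
    clean = annotated.replace("<|paragraph|>", "")
    offsets, pos = set(), 0
    for chunk in clean.split(marker)[:-1]:
        pos += len(chunk)
        offsets.add(pos)
    return offsets

def prf(pred: set[int], gold: set[int]) -> tuple[float, float, float]:
    """Precision, recall, and F1 over boundary offset sets."""
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = boundary_offsets("A.<|sentence|> B.<|sentence|> C.")
pred = {2}  # a detector that found only the first boundary
print(prf(pred, gold))  # precision 1.0, recall 0.5
```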
## Related Libraries
For specialized sentence boundary detection in legal documents, see:
- [NUPunkt](https://github.com/alea-institute/NUPunkt): Achieves 91.1% precision while processing 10 million characters per second with modest memory requirements
- [CharBoundary](https://github.com/alea-institute/CharBoundary): Provides balanced precision-recall tradeoffs with the highest F1 scores among tested methods
Both libraries can be interactively tested at [https://sentences.aleainstitute.ai/](https://sentences.aleainstitute.ai/).
You can easily install these libraries via pip:
```bash
pip install nupunkt
pip install charboundary
```
Example usage with this dataset:
```python
from datasets import load_dataset
import nupunkt
import charboundary

# Load dataset
dataset = load_dataset("alea-institute/alea-legal-benchmark-sentence-paragraph-boundaries")

# Initialize detectors
np_detector = nupunkt.NUPunkt()
cb_detector = charboundary.CharBoundary()

# Compare detections with ground truth
for example in dataset["train"]:
    # Ground truth from dataset
    true_boundaries = example["output"]

    # Automated detection
    np_boundaries = np_detector.segment_text(example["input"])
    cb_boundaries = cb_detector.segment_text(example["input"])

    # Compare and evaluate
    # ...
```
## Legal Basis
This dataset maintains the same copyright compliance as the original KL3M Data Project: the LLM
annotation step only inserts `<|sentence|>` and `<|paragraph|>` tokens and does not otherwise alter
the source text. Users should nonetheless review how any LLM output-use restrictions apply to their
use of this data.
## Papers
For more information about this dataset and related research, please refer to:
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854)
- [Precise Legal Sentence Boundary Detection for Retrieval at Scale: NUPunkt and CharBoundary](https://arxiv.org/abs/2504.04131)
## Citation
If you use this dataset in your research, please cite both this dataset and the original KL3M Data Project:
```bibtex
@misc{bommarito2025legalsbd,
title={Precise Legal Sentence Boundary Detection for Retrieval at Scale: NUPunkt and CharBoundary},
author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
year={2025},
eprint={2504.04131},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{bommarito2025kl3mdata,
title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
year={2025},
eprint={2504.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## About ALEA
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at [https://aleainstitute.ai/](https://aleainstitute.ai/).