alea-institute committed (verified) · Commit 30441c3 · Parent(s): f9a10ad

Update README.md

Files changed (1): README.md (+54 -53)
README.md CHANGED
@@ -1,44 +1,20 @@
- ---
- dataset_info:
-   features:
-   - name: source_identifier
-     dtype: string
-   - name: input
-     dtype: string
-   - name: output
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 37877859
-     num_examples: 45739
-   download_size: 21921650
-   dataset_size: 37877859
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- language:
- - en
- datasets:
- - kl3m-derived
- license: cc-by-4.0
- tags:
- - kl3m
- - kl3m-derived
- - legal
- - sbd
- - sentence-boundary-detection
- - paragraph-boundary-detection
- - legal-nlp
- - benchmark
- - evaluation
- task_categories:
- - token-classification
- - text2text-generation
- size_categories:
- - 10K<n<100K
- ---

  # ALEA Legal Benchmark: Sentence and Paragraph Boundaries
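The removed metadata above lists `token-classification` among the task categories. As an illustration only (my own sketch, not part of the card or this commit), character-level boundary labels can be derived from an annotated `output` string; `boundary_offsets` is a hypothetical helper, and `<|paragraph|>` markers are assumed to have been stripped first:

```python
# Illustrative sketch: derive character offsets (into the clean, marker-free
# text) at which sentence boundaries fall, from a marker-annotated string.

MARKER = "<|sentence|>"

def boundary_offsets(output_text: str) -> list[int]:
    """Offsets into the de-annotated text where each <|sentence|> marker sat."""
    offsets, pos, rest = [], 0, output_text
    while (i := rest.find(MARKER)) != -1:
        pos += i                      # marker position in the clean text so far
        offsets.append(pos)
        rest = rest[i + len(MARKER):]  # drop the marker and continue
    return offsets

print(boundary_offsets("Aa.<|sentence|> Bb.<|sentence|>"))  # [3, 7]
```

Such offsets are one possible target encoding for a token- or character-classification model.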
 
@@ -62,8 +38,8 @@ For more information about the original KL3M Data Project, please visit the [Git

  The dataset was created through a sophisticated multi-stage annotation process:

- 1. Source documents were extracted from the KL3M Data Project
- 2. Random segments of text were selected from each document using a controlled token-length window (between 32-128 tokens)
  3. A generate-judge-correct framework was employed:
     - **Generate**: A large language model was used to add `<|sentence|>` and `<|paragraph|>` boundary markers to the text
     - **Judge**: A second LLM verified the correctness of annotations, with strict validation to ensure:
@@ -71,7 +47,7 @@ The dataset was created through a sophisticated multi-stage annotation process:
     - Boundary markers were placed correctly according to legal conventions
     - **Correct**: When needed, a third LLM phase corrected any incorrectly placed boundaries
  4. Additional programmatic validation ensured character-level fidelity between input and annotated output
- 5. The resulting dataset was reviewed for quality and consistency

  This dataset was used to develop and evaluate the NUPunkt and CharBoundary libraries described in [arXiv:2504.04131](https://arxiv.org/abs/2504.04131), which achieved 91.1% precision and the highest F1 scores (0.782) among tested methods for legal sentence boundary detection.

@@ -134,10 +110,11 @@ def prepare_training_data(dataset):

  This dataset enables:

- 1. Training and evaluating sentence boundary detection models for specialized domains like legal or financial
- 2. Developing other hierarchical segmentation models like paragraph and section models
  3. Benchmarking existing NLP tools on challenging legal text
- 4. Improving RAG and information retrieval and extraction, especially in legal and financial contexts

  ## Related Libraries

@@ -155,14 +132,38 @@ pip install nupunkt
  pip install charboundary
  ```

- ## Legal Basis

- This dataset maintains the same copyright compliance as the original KL3M Data Project:

- - Public domain materials
- - US government works
- - Open access content under permissive licenses
- - Content explicitly licensed for AI training

  ## Papers

+ ---
+ language:
+ - en
+ datasets:
+ - kl3m-derived
+ license: cc-by-4.0
+ tags:
+ - kl3m
+ - kl3m-derived
+ - legal
+ - sbd
+ - sentence-boundary-detection
+ - paragraph-boundary-detection
+ - legal-nlp
+ - benchmark
+ - evaluation
+ ---

  # ALEA Legal Benchmark: Sentence and Paragraph Boundaries

  The dataset was created through a sophisticated multi-stage annotation process:

+ 1. Source documents were extracted from the KL3M corpus, which includes public domain legal materials
+ 2. Random segments of legal text were selected from each document using a controlled token-length window (between 32-128 tokens)
  3. A generate-judge-correct framework was employed:
     - **Generate**: A large language model was used to add `<|sentence|>` and `<|paragraph|>` boundary markers to the text
     - **Judge**: A second LLM verified the correctness of annotations, with strict validation to ensure:
     - Boundary markers were placed correctly according to legal conventions
     - **Correct**: When needed, a third LLM phase corrected any incorrectly placed boundaries
  4. Additional programmatic validation ensured character-level fidelity between input and annotated output
+ 5. The resulting dataset was reviewed for quality and consistency by legal experts

  This dataset was used to develop and evaluate the NUPunkt and CharBoundary libraries described in [arXiv:2504.04131](https://arxiv.org/abs/2504.04131), which achieved 91.1% precision and the highest F1 scores (0.782) among tested methods for legal sentence boundary detection.
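The character-level fidelity check in step 4 can be sketched as follows; `check_fidelity` is an illustrative helper under my own naming, not part of the released tooling. The idea is simply that stripping both marker tokens from `output` must reproduce `input` exactly:

```python
# Minimal sketch of the character-level fidelity validation (step 4):
# removing the boundary markers from the annotated output must
# reproduce the original input text verbatim.

def check_fidelity(input_text: str, output_text: str) -> bool:
    """Return True if output equals input once markers are stripped."""
    stripped = output_text.replace("<|sentence|>", "").replace("<|paragraph|>", "")
    return stripped == input_text

# Example with a toy annotated segment
src = "The court held. See id. at 5."
ann = "The court held.<|sentence|> See id. at 5.<|sentence|>"
print(check_fidelity(src, ann))  # True
```

Any example failing this check would be rejected or sent back to the correction phase.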
 

  This dataset enables:

+ 1. Training and evaluating sentence boundary detection models for legal text
+ 2. Developing paragraph segmentation tools for legal documents
  3. Benchmarking existing NLP tools on challenging legal text
+ 4. Improving information retrieval and extraction from legal corpora
+ 5. Enhancing retrieval-augmented generation (RAG) systems for legal applications

  ## Related Libraries

  pip install charboundary
  ```

+ Example usage with this dataset:

+ ```python
+ from datasets import load_dataset
+ import nupunkt
+ import charboundary
+
+ # Load dataset
+ dataset = load_dataset("alea-institute/alea-legal-benchmark-sentence-paragraph-boundaries")
+
+ # Initialize detectors
+ np_detector = nupunkt.NUPunkt()
+ cb_detector = charboundary.CharBoundary()
+
+ # Compare detections with ground truth
+ for example in dataset["train"]:
+     # Ground truth from dataset
+     true_boundaries = example["output"]
+
+     # Automated detection
+     np_boundaries = np_detector.segment_text(example["input"])
+     cb_boundaries = cb_detector.segment_text(example["input"])
+
+     # Compare and evaluate
+     # ...
+ ```

+ ## Legal Basis

+ This dataset maintains the same copyright compliance as the original KL3M Data Project, as LLM annotation is used solely to insert `<|sentence|>` or `<|paragraph|>` tokens; users should nevertheless review their own position on output-use restrictions related to this data.
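One way to fill in the "Compare and evaluate" placeholder in the example above; this is an illustrative exact-match scorer under assumed helper names (`gold_sentences`, `segment_f1`), not the evaluation protocol used in the paper:

```python
# Illustrative sketch: score predicted sentence segments against the
# marker-delimited ground truth using exact segment matching (F1).

def gold_sentences(output_text: str) -> list[str]:
    """Split the annotated output field on <|sentence|> markers."""
    parts = output_text.replace("<|paragraph|>", "").split("<|sentence|>")
    return [p.strip() for p in parts if p.strip()]

def segment_f1(predicted: list[str], gold: list[str]) -> float:
    """Exact-match F1 between predicted and gold sentence lists."""
    pred_set, gold_set = set(predicted), set(gold)
    tp = len(pred_set & gold_set)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(gold_set)
    return 2 * precision * recall / (precision + recall)

gold = gold_sentences("First sentence.<|sentence|> Second sentence.<|sentence|>")
print(segment_f1(["First sentence.", "Second sentence."], gold))  # 1.0
```

Exact matching is strict; span-overlap or boundary-offset scoring are common, softer alternatives.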

  ## Papers