Commit b56476e by dennlinger (parent: f8593c1)

Add dataset card.

Files changed: README.md (+178 −3)
---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wiki-paragraphs
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- wikipedia
- self-similarity
task_categories:
- text-classification
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---

# Dataset Card for wiki-paragraphs

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/dennlinger/TopicalChange
- **Paper:** https://arxiv.org/abs/2012.03619
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Dennis Aumiller]([email protected])

### Dataset Summary

The wiki-paragraphs dataset is constructed by automatically sampling two paragraphs from a Wikipedia article. If they come from the same section, they are considered a "semantic match"; otherwise they are treated as dissimilar. In theory, dissimilar paragraphs can also be sampled from other documents, but this did not yield any improvement in the evaluation of the linked work.
The alignment is in no way meant as an accurate depiction of similarity, but it allows us to quickly mine large amounts of samples.

### Supported Tasks and Leaderboards

The dataset can be used for "same-section classification", a binary classification task that predicts whether two sentences/paragraphs belong to the same section.
This can be combined with document-level coherency measures, where we check how many misclassifications appear within a single document.
Please refer to [our paper](https://arxiv.org/abs/2012.03619) for more details.

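The document-level coherency measure mentioned above can be sketched as follows. This is a minimal illustration, not the released evaluation code: `coherency_error_rate` and the token-overlap `toy_classifier` are hypothetical names standing in for any trained same-section classifier.

```python
# Sketch: fraction of paragraph pairs within one document that a
# same-section classifier gets wrong (lower = more "coherent" predictions).
from typing import Callable, List, Tuple

def coherency_error_rate(
    pairs: List[Tuple[str, str, int]],
    predict_same_section: Callable[[str, str], int],
) -> float:
    """Fraction of (sentence1, sentence2, label) pairs misclassified."""
    if not pairs:
        return 0.0
    errors = sum(
        1 for s1, s2, label in pairs if predict_same_section(s1, s2) != label
    )
    return errors / len(pairs)

# Toy stand-in classifier: predicts "same section" (1) when the two
# texts share at least three tokens. Illustrative only.
def toy_classifier(s1: str, s2: str) -> int:
    return int(len(set(s1.split()) & set(s2.split())) >= 3)
```

In practice, the classifier would be a model trained on the pairs described in this card; the error rate is then aggregated per document.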
### Languages

The data was extracted from English Wikipedia and is therefore predominantly in English.

## Dataset Structure

### Data Instances

A single instance contains three attributes:

```
{
  "sentence1": "<Sentence from the first paragraph>",
  "sentence2": "<Sentence from the second paragraph>",
  "label": 0/1  # 1 indicates the two belong to the same section
}
```

### Data Fields

- sentence1: String containing the first paragraph.
- sentence2: String containing the second paragraph.
- label: Integer, either 0 or 1. Indicates whether the two paragraphs belong to the same section (1) or come from different sections (0).

### Data Splits

We provide train, validation and test splits, created as an 80/10/10 split of the randomly shuffled original data.
In total, we provide 25,375,583 training pairs, as well as 3,163,685 instances each for the validation and test splits.

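The 80/10/10 split described above can be sketched as below; this is a generic illustration, and the exact shuffling procedure and seed of the original pipeline are not specified here.

```python
# Sketch: shuffle all pair instances once, then slice into 80/10/10
# train/validation/test portions.
import random

def split_80_10_10(instances, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    data = list(instances)
    rng.shuffle(data)
    n = len(data)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = data[:n_train]
    validation = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, validation, test
```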
## Dataset Creation

### Curation Rationale

The original idea was applied to self-segmentation of Terms of Service documents. Given their domain-specific nature, we wanted to provide a more generally applicable model trained on Wikipedia data.
The dataset is meant as a cheap-to-acquire pre-training strategy for large-scale experimentation with semantic similarity for long (paragraph-level) texts.
Based on our experiments, it is not necessarily sufficient by itself to replace traditional hand-labeled semantic similarity datasets.

### Source Data

#### Initial Data Collection and Normalization

The data was collected based on the articles considered in the Wiki-727k dataset by Koshorek et al. A dump of their dataset is available through the [respective GitHub repository](https://github.com/koomri/text-segmentation). Note that we did *not* use their pre-processed data, but only the information on which articles were considered; these were re-acquired from Wikipedia in a more recent state, because paragraph information was not retained by the original Wiki-727k authors.
We did not verify the particular focus of the considered pages.

#### Who are the source language producers?

We do not have further information on the contributors; they are volunteers contributing to en.wikipedia.org.

### Annotations

#### Annotation process

No manual annotation was added to the dataset.
We automatically sampled two paragraphs from within the same article; if they belong to the same section, the pair was assigned a label indicating similarity (1), otherwise a label indicating that they do not belong to the same section (0).
We sample three positive and three negative samples per section, per article.

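The sampling scheme above can be sketched as follows. This is a hedged reconstruction for illustration only; `sample_pairs` is a hypothetical name, and the exact sampling logic of the original pipeline may differ in details.

```python
# Sketch: per section of an article, draw up to three positive pairs
# (both paragraphs from that section) and three negative pairs (second
# paragraph from a different section of the same article).
import random

def sample_pairs(article, per_section=3, seed=0):
    """`article` maps section titles to lists of paragraph strings."""
    rng = random.Random(seed)
    pairs = []
    sections = list(article)
    for sec in sections:
        paras = article[sec]
        # Paragraphs from all other sections of the same article.
        other_paras = [p for s in sections if s != sec for p in article[s]]
        for _ in range(per_section):
            if len(paras) >= 2:
                p1, p2 = rng.sample(paras, 2)  # two distinct paragraphs
                pairs.append({"sentence1": p1, "sentence2": p2, "label": 1})
            if paras and other_paras:
                pairs.append({
                    "sentence1": rng.choice(paras),
                    "sentence2": rng.choice(other_paras),
                    "label": 0,
                })
    return pairs
```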
#### Who are the annotators?

No annotators were involved in the process.

### Personal and Sensitive Information

We did not modify the original Wikipedia text in any way. Since personal information, such as dates of birth (e.g., for a person of interest), may appear on Wikipedia, such information is also contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of the dataset is to serve as a *pre-training addition* for semantic similarity learning.

Systems building on this dataset should incorporate additional, manually annotated data before being used in production.

### Discussion of Biases

Some works indicate that men are several times more likely than women to have a Wikipedia page created about them (especially in historical contexts). A bias towards their over-representation may therefore carry over into this dataset.

### Other Known Limitations

As previously stated, the automatically extracted semantic similarity is not perfect and should be treated as such.

## Additional Information

### Dataset Curators

The dataset was originally developed as a practical project by Lucienne-Sophie Marmé under the supervision of Dennis Aumiller.
Contributions to the original sampling strategy were made by Satya Almasian and Michael Gertz.

### Licensing Information

Wikipedia data is available under the CC-BY-SA 3.0 license.

### Citation Information

```bibtex
@inproceedings{DBLP:conf/icail/AumillerAL021,
  author    = {Dennis Aumiller and
               Satya Almasian and
               Sebastian Lackner and
               Michael Gertz},
  editor    = {Juliano Maranh{\~{a}}o and
               Adam Zachary Wyner},
  title     = {Structural text segmentation of legal documents},
  booktitle = {{ICAIL} '21: Eighteenth International Conference for Artificial Intelligence
               and Law, S{\~{a}}o Paulo Brazil, June 21 - 25, 2021},
  pages     = {2--11},
  publisher = {{ACM}},
  year      = {2021},
  url       = {https://doi.org/10.1145/3462757.3466085},
  doi       = {10.1145/3462757.3466085}
}
```