parquet-converter committed
Commit 498e6fb · 1 parent: 7686699

Update parquet files
.gitattributes DELETED
@@ -1,39 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- multinli-es-train.jsonl filter=lfs diff=lfs merge=lfs -text
- snli-es-train.jsonl filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,186 +0,0 @@
- annotations_creators:
- - crowdsourced
- - other
- language_creators:
- - other
- - crowdsourced
- languages:
- - es
- licenses:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- pretty_name: ESnli
- size_categories:
- - unknown
- source_datasets:
- - extended|snli
- - extended|xnli
- - extended|multi_nli
- task_categories:
- - text-classification
- task_ids:
- - natural-language-inference
-
- # Dataset Card for nli-es
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** [Needs More Information]
- - **Repository:** https://huggingface.co/datasets/hackathon-pln-es/nli-es/
- - **Paper:** [Needs More Information]
- - **Leaderboard:** [Needs More Information]
- - **Point of Contact:** [Needs More Information]
-
- ### Dataset Summary
-
- A Spanish Natural Language Inference dataset assembled from three sources:
- - the Spanish slice of the XNLI dataset;
- - a machine-translated Spanish version of the SNLI dataset;
- - a machine-translated Spanish version of the MultiNLI dataset.
-
- ### Supported Tasks and Leaderboards
-
- [Needs More Information]
-
- ### Languages
-
- A small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.
-
- ## Dataset Structure
-
- ### Data Instances
-
- Each line includes four values: sentence1 (the premise); sentence2 (the hypothesis); a label specifying the relationship between the two ("gold_label"); and the ID number of the sentence pair as given in the original dataset.
-
- Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it, or "neutral" if it neither implies nor denies it.
-
- {
-   "gold_label": "neutral",
-   "pairID": 1,
-   "sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
-   "sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- }
-
- ### Data Fields
-
- gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it, or "neutral" if it neither implies nor denies it.
-
- pairID: A string identifying a sentence pair, inherited from the original datasets. NOTE: For the moment we are having trouble loading this column, so we replaced every string with the int 0 as a placeholder. We hope to have the pairID back up soon.
-
- sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)
-
- sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
-
- ### Data Splits
-
- The whole dataset was used for training. We did not hold out an evaluation split, as we evaluated on SemEval-2015 Task 2.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating the original SNLI dataset into Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Please refer to the respective documentation of the original datasets:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- #### Who are the source language producers?
-
- Please refer to the respective documentation of the original datasets:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- ### Annotations
-
- #### Annotation process
-
- Please refer to the respective documentation of the original datasets:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- #### Who are the annotators?
-
- Please refer to the respective documentation of the original datasets:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- ### Personal and Sensitive Information
-
- In general, no sensitive information is conveyed in the sentences.
- Please refer to the respective documentation of the original datasets:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.
-
- ### Discussion of Biases
-
- Please refer to the respective documentation of the original datasets:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- ### Other Known Limitations
-
- The translation of the sentences was mostly unsupervised and may introduce some noise into the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
- For discussion of the biases and limitations of the original datasets, please refer to their respective documentation:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- ## Additional Information
-
- ### Dataset Curators
-
- The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.
-
- ### Licensing Information
-
- This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
- Please refer to the respective documentation of the original datasets for information on their licenses:
- https://nlp.stanford.edu/projects/snli/
- https://arxiv.org/pdf/1809.05053.pdf
- https://cims.nyu.edu/~sbowman/multinli/
-
- ### Citation Information
-
- If you need to cite this dataset, you can link to this readme.
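A record in the JSONL format the removed card describes can be parsed with Python's standard json module; a minimal sketch using the sample pair from the card's Data Instances section:

```python
import json

# One line of the dataset, in the record format the card describes
# (gold_label / pairID / sentence1 / sentence2).
line = (
    '{"gold_label": "neutral", "pairID": 1, '
    '"sentence1": "A ver si nos tenemos que poner todos en huelga '
    'hasta cobrar lo que queramos.", '
    '"sentence2": "La huelga es el método de lucha más eficaz '
    'para conseguir mejoras en el salario."}'
)

record = json.loads(line)
# The card restricts gold_label to these three values.
assert record["gold_label"] in {"entailment", "contradiction", "neutral"}
print(record["gold_label"])  # neutral
```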
esxnli-es-train.jsonl DELETED
The diff for this file is too large to render. See raw diff
 
snli-es-train.jsonl → hackathon-pln-es--nli-es/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:05c03de3bb7708ff712221f999040c0b07a91870cdd29db6843473a7aa9be170
- size 22299894
+ oid sha256:dc585246aa68a107757d2e300041339a86e09695cc8db6c1e35a2b473eeef04e
+ size 34904026
multinli-es-train.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:3eda6cfe2b7e6eb8c461d9e2e6cf99b04e09c3f68c35092489870e902025dfa0
- size 56949633