Tasks: Sentence Similarity
Formats: csv
Languages: German
Size: 10M - 100M
Tags: sentence-transformers
This is a record of German language paraphrases. These are text pairs that have the same meaning but are expressed with different words.
The sources of the paraphrases are different parallel German / English text corpora.
The English texts were machine translated back into German. This is how the paraphrases were obtained.

## To-do

- upload dataset
- explain our preprocessing
- suggest further postprocessing
- explain dirty "texts" in OpenSubtitles
## Columns description

- **`uuid`**: a UUID calculated with Python's `uuid.uuid4()`
- **`de`**: the original German texts from the corpus
- …
- **`en_de_token_count`**: the number of tokens of the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences, measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
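The `cos_sim` value is, in principle, the cosine of the angle between the two sentence embeddings. A minimal sketch of the cosine computation itself in plain Python — the actual embeddings in this dataset come from the mpnet model linked above:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors; in the dataset, a and b would be the sentence embeddings
# of the `de` and `en_de` texts.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # parallel vectors -> 1.0
```

With sentence-transformers, the same quantity is typically obtained via `util.cos_sim(model.encode(de), model.encode(en_de))`.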
## Parallel text corpora used

| Corpus name & link | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| … | … |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547 |
| **sum** | **21,292,789** |
## Back translation

We have made the back translation from English to German with the help of [Fairseq](https://github.com/facebookresearch/fairseq).
We used the `transformer.wmt19.en-de` model for this purpose:

…
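The loading code is elided in this excerpt. Per the Fairseq README, `transformer.wmt19.en-de` can be fetched through `torch.hub` roughly as follows — a sketch, not necessarily the authors' exact invocation, and it downloads large checkpoints on first use:

```python
import torch

# Assumption: the standard Fairseq torch.hub entry point for the
# WMT'19 English->German transformer (several GB of checkpoints).
en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de",
    checkpoint_file="model1.pt",
    tokenizer="moses",
    bpe="fastbpe",
)
en2de.eval()

# Back-translate an English sentence into German.
print(en2de.translate("Machine translation is useful."))
```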
```python
def jaccard_similarity(text1, text2, somajo_tokenizer):
    # …
    return jaccard_similarity
```
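The body of `jaccard_similarity` is elided in this excerpt. A minimal sketch of token-level Jaccard similarity, using a plain whitespace split as a stand-in for the SoMaJo tokenizer that the signature suggests:

```python
def jaccard_similarity_sketch(text1, text2, tokenize=str.split):
    """Jaccard similarity of the two texts' token sets: |A & B| / |A | B|.

    `tokenize` is a stand-in for the SoMaJo tokenizer used in the
    original function; any callable returning a list of tokens works.
    """
    a, b = set(tokenize(text1)), set(tokenize(text2))
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

print(jaccard_similarity_sketch("das ist ein Test", "das ist kein Test"))  # 3/5 = 0.6
```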
## Load this dataset with Pandas

If you want to download the CSV file and then load it with Pandas, you can do it like this:

```python
import pandas as pd

df = pd.read_csv("train.csv")
```
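Building on the columns described above, one possible post-filtering step — the column names follow the description, but the thresholds and sample rows here are purely illustrative:

```python
import pandas as pd

# Tiny stand-in for train.csv, using a subset of the documented columns.
df = pd.DataFrame(
    {
        "de": ["Das ist gut.", "Hallo Welt."],
        "en_de": ["Das ist prima.", "Hallo Welt."],
        "cos_sim": [0.91, 0.99],
    }
)

# Keep pairs that are semantically close but not verbatim duplicates.
filtered = df[(df["cos_sim"] >= 0.85) & (df["de"] != df["en_de"])]
print(len(filtered))  # 1
```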
## Citations & Acknowledgements

**OpenSubtitles**