---
license: mit
language:
- en
paperswithcode_id: embedding-data/simple-wiki
pretty_name: simple-wiki
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "simple-wiki"

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://cs.pomona.edu/~dkauchak/simplification/
- Repository: More Information Needed
- Paper: https://aclanthology.org/P11-2117/
- Point of Contact: David Kauchak
### Dataset Summary

This dataset contains pairs of equivalent sentences obtained by aligning English Wikipedia with Simple English Wikipedia.
### Supported Tasks
- Sentence Transformers training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example is a dictionary with a single key, `"set"`, whose value is a list containing a pair of equivalent sentences:

```
{"set": [sentence_1, sentence_2]}
{"set": [sentence_1, sentence_2]}
...
{"set": [sentence_1, sentence_2]}
```
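As a minimal illustration of this layout, each JSON line can be parsed with the standard library. The sentences below are invented stand-ins, not actual dataset content:

```python
import json

# Two records in the same shape as the dataset: a single "set" key
# holding a pair of equivalent sentences (example text is made up).
lines = [
    '{"set": ["The cat sat on the mat.", "A cat was sitting on the mat."]}',
    '{"set": ["He wrote a book.", "He authored a book."]}',
]

# Each parsed record yields a two-element list of equivalent sentences.
pairs = [json.loads(line)["set"] for line in lines]
for first, second in pairs:
    print(first, "<->", second)
```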
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/simple-wiki")
```
The dataset is loaded as a `DatasetDict` and has the format:

```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 102225
    })
})
```
Review an example `i` with:

```python
dataset["train"][i]["set"]
```
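Since each row stores both sentences under one key, a common preprocessing step for paired-sentence training is to unpack the `"set"` lists into (anchor, positive) tuples. A minimal stdlib sketch; the rows below are invented stand-ins for `dataset["train"]`, not actual dataset content:

```python
# Unpack {"set": [s1, s2]} rows into (anchor, positive) tuples,
# the shape typically consumed by paired-sentence training losses.
rows = [
    {"set": ["A plane is taking off.", "An airplane takes off."]},
    {"set": ["A man plays a flute.", "Someone is playing a flute."]},
]

train_pairs = [(row["set"][0], row["set"][1]) for row in rows]
print(train_pairs[0])
```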