---
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10M<n<100M
---
## Dataset Summary

Paragraph embeddings for every article in English Wikipedia (not the Simple English version). Based on the wikimedia/wikipedia dataset, snapshot 20231101.en.
Embeddings were generated with avsolatorio/GIST-small-Embedding-v0 and are quantized to int8.
You can load the data as follows:

```python
from datasets import load_dataset

ds = load_dataset(path="Abrak/wikipedia-paragraph-embeddings-en-gist-complete", data_dir="20231101.en")
```
## Dataset Structure
The structure of the dataset is designed to minimize necessary storage and calculations but still cover the breadth of Wikipedia.
### Data Instances
An example looks as follows:
```python
{
  'id': '12.1',
  'embedding': [10, -14, -42, -3, 5, 4, 7, 17, -8, 18, ...]
}
```
### Data Fields
The data fields are the same for all records:
- `id` (str): The ID of the corresponding article in wikimedia/wikipedia, a `.` separator, and the sequential number of the paragraph within the article. The paragraph numbers are not left-padded.
- `embedding`: A list of 384 int8 values (from -128 to 127).
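Since the `id` field packs two values into one string, it can be split back into its parts with a small helper (a hypothetical convenience function, not part of the dataset itself):

```python
def parse_id(record_id: str) -> tuple[str, int]:
    """Split a record id like '12.1' into the wikimedia/wikipedia
    article ID ('12') and the sequential paragraph number (1)."""
    article_id, paragraph = record_id.split(".")
    return article_id, int(paragraph)

print(parse_id("12.1"))  # -> ('12', 1)
```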
## Details
### Source Data
The data is sourced directly from the wikimedia/wikipedia dataset, in the 20231101.en directory. This is English-language article text, taken from a snapshot on November 1, 2023. The source data was already stripped of formatting and other non-prose content. See the wikimedia/wikipedia dataset card for more information.
As part of this dataset's processing, article text was split into paragraphs on double newlines (`\n\n`).
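The exact splitting code lives in the commit linked below; a minimal sketch of a split on `\n\n` looks like this:

```python
# Split article text into paragraphs on double newlines.
article_text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
paragraphs = article_text.split("\n\n")
print(paragraphs)  # -> ['First paragraph.', 'Second paragraph.', 'Third paragraph.']
```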
### Embedding Calculation
Embeddings were calculated in batches of 1,300 paragraphs with sentence_transformers and the unquantized GIST-small-Embedding-v0 model, with precision set to int8. The complete run took about 20 hours on an NVIDIA A40. The full calculation code is in commit 5132104f1fa59d9b212844f6f7a93232193958f2 of setup.py in the GitHub repo for my project, The Archive.
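The int8 quantization step is not spelled out above. Below is a minimal sketch of one common scheme, per-dimension min/max calibration, which is similar in spirit to what sentence_transformers does when asked for int8 precision; the exact calibration used for this dataset is an assumption here, not confirmed by the source:

```python
import numpy as np

def quantize_int8(embeddings: np.ndarray) -> np.ndarray:
    """Map float embeddings to int8 using per-dimension min/max ranges.

    Each dimension's observed range is linearly rescaled to [-128, 127].
    """
    lo = embeddings.min(axis=0)
    hi = embeddings.max(axis=0)
    scale = (hi - lo) / 255.0
    # Guard against zero-range dimensions to avoid division by zero.
    scale = np.where(scale == 0, 1.0, scale)
    quantized = np.round((embeddings - lo) / scale - 128.0)
    return quantized.clip(-128, 127).astype(np.int8)

# One hypothetical batch of 1,300 paragraph embeddings, 384 dimensions each.
rng = np.random.default_rng(0)
floats = rng.normal(size=(1300, 384)).astype(np.float32)
ints = quantize_int8(floats)
print(ints.dtype, ints.shape)  # -> int8 (1300, 384)
```

Note that int8 embeddings should generally be cast back to float before computing dot products or cosine similarities, to avoid integer overflow.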
### Licensing Information

These embeddings are a derivative of Wikipedia article text, which is licensed under CC BY-SA 4.0 (a copyleft license) as well as the GFDL. The embeddings inherit the same licenses. See the Wikipedia Copyrights page for details.