# USearchWiki
Multi-model embedding dataset built on HuggingFace FineWiki, designed for approximate nearest neighbor (ANN) search benchmarking with USearch and other vector search engines.
The same Wikipedia corpus — chunked, cleaned, and enriched with graph metadata — is embedded by multiple models spanning dense BERT-like encoders, GPT-style decoder-based LLMs, and late-interaction ColBERT-style architectures. Each model's embeddings ship with precomputed ground-truth k-nearest neighbors, enabling reproducible recall and throughput benchmarks without re-running expensive exact search.
## Why USearchWiki?
Existing ANN benchmarks suffer from three gaps:
- Stale descriptors. The most popular benchmarks (SIFT-1B, Deep-1B, GloVe) use features from 2014-2021 — image descriptors and word vectors, not modern text embeddings.
- Single-model datasets. Each benchmark is produced by one model. You cannot compare how the same retrieval engine handles different vector distributions without re-embedding.
- No decoder embeddings. State-of-the-art embedding models (GTE-Qwen, Llama-Embed-Nemotron, Qwen3-Embedding) are decoder-based LLMs, yet no ANN benchmark uses their outputs.
USearchWiki fixes all three: one corpus, multiple models, modern architectures, with graph-structured metadata for filtered search.
## Source Corpus
HuggingFaceFW/finewiki — August 2025 snapshot, 325 languages, 61.5M articles.
FineWiki is extracted from Wikimedia's Enterprise HTML dumps (not raw wikitext), so templates are fully rendered by MediaWiki's own engine.
This avoids the well-known content loss that plagues wikitext-based parsers, as mwparserfromhell cannot expand templates.
Section headings, tables, math, and lists are preserved as Markdown.
Bot-generated stubs, disambiguation pages, and cross-language leakage are filtered out.
## Text processing
No chunking is used. Short-context models process only the abstract; long-context models are prioritized and receive the whole document in its original form.
## Scale
| Scope | Articles | Parquet, GB | Avg Bytes/Article |
|---|---|---|---|
| English | 6.6M | 38 | 5,700 |
| Top 5: EN, DE, FR, ES, RU | 15.7M | 86 | 5,460 |
| Top 10 by text volume¹ | 22.9M | 120 | 5,250 |
| Top 20 | 41.6M | 149 | 3,580 |
| All 325 languages | 61.6M | ~170 | 2,740 |
¹ EN, DE, FR, ES, RU, IT, JA, ZH, PL, UK — excluding bot-generated wikis (Cebuano, Swedish, Waray, Egyptian Arabic) which inflate article counts with minimal text.
Parquet size includes both the `text` and `wikitext` columns; pure text is roughly half.
Average bytes/article drops at wider scope because smaller wikis are dominated by stubs.
## Corpus Structure
Measured by scanning every parquet shard, including rendered Markdown text and raw wikitext columns:
| Quantity | Total | Per article |
|---|---|---|
| Articles | 61.55M | — |
| Rendered text bytes: `text` column | 195.2 GB | 3.2 KB |
| Wikitext bytes: `wikitext` column | 337.1 GB | 5.5 KB |
| Markdown paragraphs: blank-line-separated blocks | 254.2M | 4.13 |
| Section headings: `#`, `##`, `###` | 206.3M | 3.35 |
35% of articles have a single paragraph (stubs), 65% have ≤ 3, only 10% have 8+. Paragraph length distribution:
- 12%: under 50 bytes, mostly headings or one-liners,
- 38%: in 200–800 bytes, the prose sweet spot,
- 4%: over 3.2 KB, long lists or tables rendered as one block.
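The paragraph and heading counts above come from simple text splitting over the rendered Markdown. A rough sketch of that counting for a single shard, assuming the FineWiki `text` column and a local copy of one parquet file (the exact rules used for the table may differ):

```python
import re
import pyarrow.parquet as pq

# Count blank-line-separated paragraphs and Markdown headings in one FineWiki shard.
table = pq.read_table("data/enwiki/000_00000.parquet", columns=["text"])

paragraphs, headings = 0, 0
for text in table.column("text").to_pylist():
    if not text:
        continue
    # Paragraphs: blank-line-separated blocks with non-whitespace content.
    paragraphs += sum(1 for block in re.split(r"\n\s*\n", text) if block.strip())
    # Headings: lines starting with '#', '##', or '###'.
    headings += sum(1 for line in text.splitlines() if re.match(r"#{1,3} ", line))

print(f"{paragraphs=:,} {headings=:,}")
```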
Annotation density extracted from raw wikitext:
| Annotation kind | Total | Articles touched |
|---|---|---|
| Plain `[[wikilinks]]` | 1.42B | 99.4% |
| Templates `{{...}}` | 0.998B | 98.6% |
| Piped links `[[T\|d]]` | 0.648B | 89.3% |
| Citations `<ref>...` | 0.400B | 71.0% |
| External URLs `[https://...]` | 84M | 41.8% |
| Categories `[[Category:...]]` | 55M | 16.3% |
| Tables `{...}` | 19M | 14.6% |
| Files / images `[[File:...]]` | 14M | 7.2% |
| Section anchors `[[Article#Section]]` | 11.5M | 6.4% |
| Math `<math>...` | 6.4M | 0.5% |
| Self anchors `[[#Section]]` | 2.6M | 0.7% |
| Galleries `<gallery>` | 2.5M | 3.2% |
| Inline interwiki `[[lang:...]]` | 0.83M | 1.1% |
Section anchors deserve attention: they form an 11.5M-edge paragraph-level link graph already curated by editors — a built-in supervision signal for sub-article retrieval evaluation.
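The annotation counts are plain regex scans over the raw `wikitext` column. A rough sketch for the two anchor rows, with illustrative patterns that approximate, rather than reproduce, the repository's exact extraction rules:

```python
import re
import pyarrow.parquet as pq

# Count two annotation kinds in the raw wikitext of one shard.
table = pq.read_table("data/enwiki/000_00000.parquet", columns=["wikitext"])

# Illustrative patterns: cross-article section anchors vs. same-article self anchors.
section_anchor = re.compile(r"\[\[[^\[\]|#]+#[^\[\]]+\]\]")  # [[Article#Section]]
self_anchor = re.compile(r"\[\[#[^\[\]]+\]\]")               # [[#Section]]

section_total, self_total = 0, 0
for wikitext in table.column("wikitext").to_pylist():
    if not wikitext:
        continue
    section_total += len(section_anchor.findall(wikitext))
    self_total += len(self_anchor.findall(wikitext))

print(f"{section_total=:,} {self_total=:,}")
```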
## Embedding Models
Each model embeds the same article corpus independently. No chunking is applied — short-context models see truncated articles, long-context models see the full text. Dense models produce one vector per article. ColBERT models produce one vector per token (~2,000 vectors per average article).
| Model | Year | Type | Dims | Context | Params | License | Base / Fine-tuned by | Perf |
|---|---|---|---|---|---|---|---|---|
| Qwen3-Embedding-0.6B | 2025 | Dense (decoder) | 1024 | 32 K | 600 M | Apache 2.0 | Qwen3 (Alibaba) | 70.7 MTEB v2 |
| GTE-ModernColBERT-v1 | 2025 | ColBERT (encoder) | 128 | 8-32 K | 139 M | Apache 2.0 | ModernBERT (Answer.AI) / LightOn | 88.4 LongEmbed |
| arctic-embed-l-v2.0 | 2024 | Dense (encoder) | 1024 | 8 K | 568 M | Apache 2.0 | XLM-R (Meta) → BGE-M3 (BAAI) / Snowflake | 55.6 BEIR |
| nomic-embed-text-v1.5 | 2024 | Dense (encoder) | 768 | 8 K | 137 M | Apache 2.0 | NomicBERT (Nomic) | 62.3 MTEB v1 |
| e5-mistral-7b-instruct | 2023 | Dense (decoder) | 4096 | 4 K | 7.1 B | MIT | Mistral-7B (Mistral AI) / Microsoft | 66.6 MTEB v1 |
## Compute Estimates
All embeddings are computed and stored in half-precision Float16, to maximize space efficiency and compatibility with off-the-shelf tools like NumPy, which provide `np.float16` but not the brain-float (`bfloat16`) variant.
FP8 quantization can improve throughput ~1.5× with negligible quality loss.
Encoder models use Text Embeddings Inference (TEI) with the Hopper Docker image.
Decoder models use vLLM with `--task embed`.
Token counts vary by tokenizer — CJK text produces ~1 token per 2-3 bytes, Latin/Cyrillic ~1 per 4-5 bytes. Average article length across all languages is ~400 tokens, but this is dragged down by millions of stubs in smaller wikis; English articles average ~2,700 tokens.
| Model | Throughput | Total tokens | Time | Vectors | Storage | Notes |
|---|---|---|---|---|---|---|
| Qwen3-Embedding-0.6B | 500 doc/s | 24 B | 1.4 d | 61.6 M | 126 GB | Full articles, one vector per article |
| GTE-ModernColBERT-v1, section-pooled | 800 doc/s | 24 B | 0.9 d | 206.3 M | 53 GB | Mean-pool tokens within each section |
| arctic-embed-l-v2.0 | 800 doc/s | 28 B | 0.9 d | 61.6 M | 126 GB | Truncated at 8K tokens |
| nomic-embed-text-v1.5 | 1200 doc/s | 21 B | 0.6 d | 61.6 M | 95 GB | Truncated at 8K tokens |
| e5-mistral-7b-instruct | 50 doc/s | 21 B | 14.3 d | 61.6 M | 505 GB | Truncated at 4K tokens |
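The Time column is a back-of-the-envelope estimate: articles divided by sustained document throughput. A short sketch reproducing the arithmetic for the dense models with the table's own numbers:

```python
# Wall-clock estimate: articles / throughput, as in the Time column above.
articles = 61.6e6  # full multilingual corpus

throughput_docs_per_s = {
    "Qwen3-Embedding-0.6B": 500,
    "arctic-embed-l-v2.0": 800,
    "nomic-embed-text-v1.5": 1200,
    "e5-mistral-7b-instruct": 50,
}

for model, docs_per_s in throughput_docs_per_s.items():
    days = articles / docs_per_s / 86_400  # seconds per day
    print(f"{model:>26}: {days:5.1f} days")
# Qwen3-Embedding-0.6B: ~1.4 days; e5-mistral-7b-instruct: ~14.3 days
```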
## Dataset Layout
Layout mirrors FineWiki's `data/<wiki>/<group>_<shard>.parquet` structure: one directory per Wikipedia language, with shard filenames preserved 1:1.
Each `.f16bin` is row-aligned with its source parquet — `.f16bin` row N is the embedding of parquet row N, in native order.
If the source text was empty or null, the row is a zero vector (norm == 0); the parquet's `id` column provides the doc identifier, so no separate ids file is needed.
Binary format: `u32` row count, `u32` column count, then rows × cols little-endian `f16` values — directly compatible with USearch and the Big-ANN benchmarks ecosystem.
`.body.f16bin` is the article-body embedding; `.title.f16bin` is the title-only embedding (short input, useful for title-vs-body retrieval studies).
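The `usearchwiki.py` helper in the tree below wraps this format, but it is also trivial to read with plain NumPy. A minimal sketch assuming only the header layout just described:

```python
import numpy as np

def read_f16bin(path: str) -> np.ndarray:
    """Read one shard: u32 row count, u32 column count, then row-major little-endian f16."""
    with open(path, "rb") as f:
        rows, cols = np.fromfile(f, dtype="<u4", count=2)
        return np.fromfile(f, dtype="<f2", count=int(rows) * int(cols)).reshape(rows, cols)

matrix = read_f16bin("qwen3-embedding-0.6b/enwiki/000_00000.body.f16bin")
# Zero-norm rows correspond to empty or null source text in the parquet shard.
non_empty = np.linalg.norm(matrix.astype(np.float32), axis=1) > 0
print(matrix.shape, int(non_empty.sum()), "non-empty rows")
```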
```
unum-cloud/USearchWiki/
├── README.md
├── LICENSE
├── .gitattributes
├── usearchwiki.py # consumer module: load_lang, read_bin, discover_collection, ...
├── embed_articles.py # one dense vector per article, via TEI
├── embed_sections.py # late-chunking ColBERT: one vector per section
├── late_chunking.py # section-aware windowing primitives
├── ground_truth.py # exact global k-NN via tiled CuPy GEMMs
├── build_index.py # build a USearch HNSW index from per-shard f16bin
├── eval_recall.py # measure recall@k of an index against the ground truth
│
├── qwen3-embedding-0.6b/ # 1024-dim, decoder, float16
│ ├── enwiki/
│ │ ├── 000_00000.body.f16bin # mirrors enwiki/000_00000.parquet
│ │ ├── 000_00000.title.f16bin
│ │ ├── 000_00001.body.f16bin
│ │ ├── 000_00001.title.f16bin
│ │ └── ...
│ ├── dewiki/
│ │ └── ...
│ └── ... # one dir per Wikipedia language
│
├── snowflake-arctic-embed-l-v2.0/ # 1024-dim, encoder, float16
│ └── <wiki>/<group>_<shard>.{body,title}.f16bin
│
├── nomic-embed-text-v1.5/ # 768-dim, encoder, float16
│ └── <wiki>/<group>_<shard>.{body,title}.f16bin
│
├── e5-mistral-7b-instruct/ # 4096-dim, decoder, float16 (planned)
│ └── <wiki>/<group>_<shard>.{body,title}.f16bin
│
└── gte-moderncolbert-v1/ # 128-dim per token, ColBERT (planned)
    └── <wiki>/<group>_<shard>.{body,title}.f16bin
```
## Downloading
USearchWiki uses an unusual distribution policy: a single repository, with no separation of code and data. It lives on three coordinated mirrors, all sharing the same single-branch Git history:
| Mirror | Holds | Best for |
|---|---|---|
| HuggingFace Hub | code + LFS bytes (canonical) | git clone, hf CLI, streaming |
| GitHub | code + LFS pointers (no bytes) | reading the code, contributing |
| Nebius S3 | flat byte mirror of LFS blobs | bulk downloads, batch jobs |
.f16bin files are tracked via Git LFS; on GitHub, the LFS server is rerouted to HuggingFace, so GitHub clones receive only ~200-byte pointer files.
### From HuggingFace
The default and simplest path — full code, full data, single command:
```sh
git clone https://huggingface.co/datasets/unum-cloud/USearchWiki
```
To skip the ~600 GB of binaries and get only code + pointers:
```sh
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/unum-cloud/USearchWiki
```
### From GitHub
The GitHub repo holds only code and LFS pointers; the actual binaries live on HuggingFace. After cloning, point Git LFS at HuggingFace and pull:
```sh
git clone https://github.com/unum-cloud/USearchWiki
cd USearchWiki
git config lfs.url https://huggingface.co/datasets/unum-cloud/USearchWiki.git/info/lfs
git lfs pull
```
### From Nebius S3
The fastest path for bulk downloads — pulls byte-identical LFS objects directly from object storage, then materializes the `.f16bin` files into the working tree.
Via s5cmd, a parallel S3 client shipped as a single Go binary, often ~5–10× faster than `aws s3 sync` for many-files workloads:
```sh
# One-time install
curl -sL https://github.com/peak/s5cmd/releases/download/v2.3.0/s5cmd_2.3.0_linux_amd64.deb -o /tmp/s5cmd.deb
sudo dpkg -i /tmp/s5cmd.deb

# Sync the byte mirror, then materialize the working tree
s5cmd --endpoint-url https://storage.us-central1.nebius.cloud --no-sign-request \
    sync 's3://usearch-wiki/lfs/*' ./.git/lfs/objects/
git lfs checkout
```
The bucket is configured for anonymous read access, so no Nebius account or credentials are needed — --no-sign-request tells s5cmd to skip request signing.
aws s3 sync --no-sign-request works equivalently with the same endpoint.
### Loading embeddings in Python
```python
from usearchwiki import read_bin

matrix = read_bin("qwen3-embedding-0.6b/enwiki/000_00000.body.f16bin", dtype="f16")
# matrix.shape == (rows_in_shard, 1024)
```
Or pull just one model's embeddings for a single language:
```sh
hf download unum-cloud/USearchWiki \
    --repo-type dataset \
    --include "qwen3-embedding-0.6b/enwiki/*"
```
## Workflow
The embedding pipeline is designed for multi-day runs on GPU servers with checkpoint/resume:
```sh
# 1. Download FineWiki articles
python corpus.py --lang en --output corpus/

# 2. Embed with each model (resume-safe — rerun after interruptions)
python embed_articles.py --model qwen3-0.6b --input corpus/ --output embeddings/ --resume
python embed_articles.py --model e5-mistral-7b --input corpus/ --output embeddings/ --resume
python embed_articles.py --model arctic-embed-l-v2 --input corpus/ --output embeddings/ --resume
python embed_articles.py --model nomic-v1.5 --input corpus/ --output embeddings/ --resume

# Section-pooled ColBERT uses a different pipeline (late chunking)
python embed_sections.py --model gte-moderncolbert --input corpus/ --output embeddings/ --resume

# 3. Extract graph metadata
python graph.py --lang en --output graph/

# 4. Compute ground truth for each model
python ground_truth.py --embeddings embeddings/qwen3-0.6b/ --k 100 --queries 10000
python ground_truth.py --embeddings embeddings/e5-mistral-7b/ --k 100 --queries 10000

# 5. Upload to HuggingFace
python upload.py --repo unum-cloud/USearchWiki
```
Each step is idempotent.
Progress is tracked in state/*.json files — if a job dies (OOM, SSH drop, GPU error), rerunning the same command picks up from the last checkpoint.
Adding a new embedding model requires only steps 2 and 4 — the corpus and graph are shared.
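For context on step 4: exact ground truth is a similarity matrix over normalized vectors followed by a top-k selection per query. ground_truth.py tiles this as CuPy GEMMs to fit GPU memory; the single-tile NumPy sketch below only illustrates the math:

```python
import numpy as np

def exact_knn(corpus: np.ndarray, queries: np.ndarray, k: int = 100) -> np.ndarray:
    """Exact cosine top-k: normalize both sides, multiply, take the k best per query."""
    corpus = corpus.astype(np.float32)
    queries = queries.astype(np.float32)
    corpus /= np.linalg.norm(corpus, axis=1, keepdims=True) + 1e-12
    queries /= np.linalg.norm(queries, axis=1, keepdims=True) + 1e-12
    scores = queries @ corpus.T                              # (n_queries, n_corpus)
    top_k = np.argpartition(-scores, k, axis=1)[:, :k]       # unordered top-k ids
    order = np.argsort(-np.take_along_axis(scores, top_k, axis=1), axis=1)
    return np.take_along_axis(top_k, order, axis=1)          # (n_queries, k), sorted
```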
## Hosting
| Location | Storage/mo (1 TB) | Egress/GB | Notes |
|---|---|---|---|
| HuggingFace Hub | Free | Free | Primary. Xet storage, unlimited public downloads |
| AWS S3 Standard | $23.00 | $0.09 | S3-compatible mirror. Egress adds up fast for popular datasets |
| Nebius Object Storage | $15.05 | $0.015 | S3-compatible. ~35% cheaper storage, ~6× cheaper egress than AWS |
## License
The embedding pipeline code in this repository is licensed under Apache 2.0.
Dataset licensing depends on the components:
- Wikipedia text: CC BY-SA 4.0
- FineWiki extraction: Apache 2.0
- Embeddings: Governed by each model's license (see table above — all selected models use Apache 2.0 or MIT)
- Graph metadata: Derived from Wikimedia/Wikidata dumps (CC0 for Wikidata, CC BY-SA 4.0 for Wikipedia)