# JMTEB-lite: The Lightweight Version of JMTEB
JMTEB-lite is a lightweight version of JMTEB. It enables agile evaluation, running on average about 5x faster than JMTEB, and its results have been shown to correlate strongly with those of JMTEB, making it a faithful preview of the full benchmark.
## TL;DR
```python
from datasets import load_dataset
dataset = load_dataset("sbintuitions/JMTEB-lite", name="<dataset_name>", split="<split>")
SPLITS = ("train", "validation", "test", "corpus")
JMTEB_LITE_DATASET_NAMES = (
'livedoor_news',
'mewsc16_ja',
'sib200_japanese_clustering',
'amazon_review_classification',
'amazon_counterfactual_classification',
'massive_intent_classification',
'massive_scenario_classification',
'japanese_sentiment_classification',
'sib200_japanese_classification',
'wrime_classification',
'jsts',
'jsick',
'jaqket-query',
'jaqket-corpus', # lightweight
'mrtydi-query',
'mrtydi-corpus', # lightweight
'jagovfaqs_22k-query',
'jagovfaqs_22k-corpus',
'nlp_journal_title_abs-query',
'nlp_journal_title_abs-corpus',
'nlp_journal_title_intro-query',
'nlp_journal_title_intro-corpus',
'nlp_journal_abs_intro-query',
'nlp_journal_abs_intro-corpus',
'nlp_journal_abs_article-query',
'nlp_journal_abs_article-corpus',
'jacwir-retrieval-query',
'jacwir-retrieval-corpus', # lightweight
'miracl-retrieval-query',
'miracl-retrieval-corpus', # lightweight
'mldr-retrieval-query',
'mldr-retrieval-corpus',
'mintaka-retrieval-query',
'mintaka-retrieval-corpus',
'esci-query',
'esci-corpus',
'jqara-query',
'jqara-corpus',
'jacwir-reranking-query', # lightweight
'jacwir-reranking-corpus', # lightweight
'miracl-reranking-query', # lightweight
'miracl-reranking-corpus', # lightweight
'mldr-reranking-query',
'mldr-reranking-corpus',
)
```
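For example, the following minimal sketch loads one subset and prints a record. Because this repository ships a loading script, `datasets>=3.0` (which dropped script support) cannot load it; with older versions you may need `trust_remote_code=True`:

```python
from datasets import load_dataset

# Load the training split of the JSTS (STS) subset.
# Note: this repository uses a loading script, so datasets>=3.0 will not
# load it; older versions may require trust_remote_code=True.
jsts = load_dataset(
    "sbintuitions/JMTEB-lite", name="jsts", split="train", trust_remote_code=True
)

# Field names differ per subset; inspect one record to see the schema.
print(jsts[0])
```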
## Introduction
We introduced JMTEB (Japanese Massive Text Embedding Benchmark), a comprehensive evaluation benchmark for Japanese text embedding models. However, the massive size of JMTEB makes evaluation slow and resource-demanding. To address this, we now introduce JMTEB-lite, a lightweight version of JMTEB constructed by substantially reducing the corpus size of the retrieval and reranking tasks. We have verified that JMTEB-lite significantly accelerates evaluation while maintaining high fidelity to the full JMTEB.

We recommend using JMTEB-lite to obtain preview evaluation results during agile development, and JMTEB for the full, final evaluation.
JMTEB-lite is compatible with the JMTEB evaluation script: https://github.com/sbintuitions/JMTEB.
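For a quick sanity check without the full harness, a minimal standalone STS evaluation might look like the sketch below. This is not the official JMTEB script; the model name and the `sentence1`/`sentence2`/`label` field names are assumptions to verify against the actual schema:

```python
# Minimal standalone STS sanity check (NOT the official JMTEB evaluation script).
# Assumed: the jsts subset exposes "sentence1", "sentence2", and "label" columns.
import numpy as np
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

# Placeholder model; substitute the embedding model you want to evaluate.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

data = load_dataset("sbintuitions/JMTEB-lite", name="jsts", split="test", trust_remote_code=True)
emb1 = model.encode(data["sentence1"], normalize_embeddings=True)
emb2 = model.encode(data["sentence2"], normalize_embeddings=True)

# Cosine similarity per pair (embeddings are unit-normalized above).
cos_sim = np.sum(emb1 * emb2, axis=1)
print("Spearman correlation:", spearmanr(cos_sim, data["label"]).statistic)
```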
## Tasks and Datasets
Here is an overview of the tasks and datasets currently included in JMTEB-lite.
Note that only datasets in **bold** have lightweight (reduced) corpora; all others are identical to their counterparts in JMTEB.
| Task | Dataset | Train | Dev | Test | Document (Retrieval) |
|:---|:---|---:|---:|---:|---:|
| Clustering | Livedoor-News | 5,163 | 1,106 | 1,107 | - |
| | MewsC-16-ja | - | 992 | 992 | - |
| | SIB200 Japanese Clustering | 701 | 99 | 204 | - |
| Classification | AmazonCounterfactualClassification | 5,600 | 466 | 934 | - |
| | AmazonReviewClassification | 200,000 | 5,000 | 5,000 | - |
| | MassiveIntentClassification | 11,514 | 2,033 | 2,974 | - |
| | MassiveScenarioClassification | 11,514 | 2,033 | 2,974 | - |
| | Japanese Sentiment Classification | 9,831 | 1,677 | 2,552 | - |
| | SIB200 Japanese Classification | 701 | 99 | 204 | - |
| | WRIME Classification | 30,000 | 2,500 | 2,500 | - |
| STS | JSTS | 12,451 | - | 1,457 | - |
| | JSICK | 5,956 | 1,985 | 1,986 | - |
| Retrieval | **JAQKET** | 13,061 | 995 | 997 | 65,802 |
| | **Mr.TyDi-ja** | 3,697 | 928 | 720 | 93,382 |
| | NLP Journal title-abs | - | 127 | 510 | 637 |
| | NLP Journal title-intro | - | 127 | 510 | 637 |
| | NLP Journal abs-intro | - | 127 | 510 | 637 |
| | NLP Journal abs-article | - | 127 | 510 | 637 |
| | JaGovFaqs-22k | 15,955 | 3,419 | 3,420 | 22,794 |
| | **JaCWIR-Retrieval** | - | 1,000 | 4,000 | 302,638 |
| | **MIRACL-Retrieval** | 2,433 | 1,044 | 860 | 105,064 |
| | MLDR-Retrieval | 2,262 | 200 | 200 | 10,000 |
| | Mintaka-Retrieval | - | 2,313[^1] | 2,313 | 2,313 |
| Reranking | Esci | 10,141 | 1,790 | 4,206 | 149,999 |
| | **JaCWIR-Reranking** | - | 1,000 | 4,000 | 188,033 |
| | JQaRA | 498 | 1,737 | 1,667 | 172,897 |
| | **MIRACL-Reranking** | 2,433 | 1,044 | 860 | 37,124 |
| | MLDR-Reranking | 2,262 | 200 | 200 | 5,339 |
[^1]: To stay consistent with MTEB, where Mintaka-Retrieval does not have a validation set, we use the test set as the validation set.
## Construction Process
For the 4 retrieval datasets (JAQKET, Mr.TyDi, JaCWIR-Retrieval, MIRACL-Retrieval), we use 5 highly performant models to mine hard negative documents for each query (the 50 documents in the corpus most semantically similar to the query), and merge these hard negatives with the gold documents.

For the 2 reranking datasets (JQaRA, JaCWIR-Reranking), we use 5 highly performant models to rerank the documents for each query, retain the top-50 hard negative documents per query, and then merge these hard negatives with the gold documents. A simplified, single-model sketch of this mining step is shown below.

All other datasets are identical to their counterparts in JMTEB.
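As an illustration only, the sketch below mines top-50 hard negatives with a single embedding model over a toy corpus; the model name and data are placeholders, and the actual construction merges the outputs of 5 strong models with the gold documents:

```python
# Illustrative hard-negative mining with ONE embedding model on toy data.
# The real construction runs 5 strong models and merges their top-50 lists
# with the gold documents; the model and inputs here are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

queries = ["日本で一番高い山は?", "日本最大の湖はどこ?"]
corpus = [
    "富士山は日本で最も高い山である。",
    "琵琶湖は日本最大の湖である。",
    "東京は日本の首都である。",
]

q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(corpus, normalize_embeddings=True)

scores = q_emb @ d_emb.T                    # cosine similarities (queries x docs)
k = min(50, len(corpus))                    # top-50 in a realistically sized corpus
top_k = np.argsort(-scores, axis=1)[:, :k]  # most similar documents per query

hard_negatives = {q: [corpus[i] for i in idx] for q, idx in zip(queries, top_k)}
print(hard_negatives)
```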
## Reference
```bibtex
@misc{jmteb_lite,
  author       = {Li, Shengzhe and Ohagi, Masaya and Ri, Ryokan and Fukuchi, Akihiko and Shibata, Tomohide and Kawahara, Daisuke},
  title        = {{J}{M}{T}{E}{B}-lite: {T}he {L}ightweight {V}ersion of {JMTEB}},
  howpublished = {\url{https://huggingface.co/datasets/sbintuitions/JMTEB-lite}},
  year         = {2025},
}
```

```bibtex
@techreport{jmteb_and_jmteb_lite,
  author      = {Li, Shengzhe and Ohagi, Masaya and Ri, Ryokan and Fukuchi, Akihiko and Shibata, Tomohide and Kawahara, Daisuke},
  title       = {{JMTEB} and {JMTEB}-lite: {J}apanese {M}assive {T}ext {E}mbedding {B}enchmark and {I}ts {L}ightweight {V}ersion},
  institution = {SB Intuitions / Waseda University},
  number      = {IPSJ SIG Technical Reports Vol.2025-NL-265 No.3},
  year        = {2025},
  month       = {09},
}
```
## License
For license information, please refer to each individual dataset.
Our code is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.