
Embedpress: Alibaba-NLP/gte-modernbert-base on the mteb/lotte dataset

This is the mteb/lotte dataset, embedded with Alibaba-NLP/gte-modernbert-base.

For each example, we embed the text directly (no additional instruction prompt). Embeddings have dimensionality 768.

These embeddings are intended for tasks such as large-scale distillation, retrieval, and similarity search. Because the raw text may exceed the model's maximum sequence length, we recommend truncating it to that length at build time.
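
As a rough illustration, similar embeddings could be produced with sentence-transformers. This is a minimal sketch only: the exact pooling and normalization settings used to build this dataset are not documented here, so treat them as assumptions rather than a guaranteed match for the stored vectors.

```python
# Sketch: embedding query texts with the same model (settings are assumptions).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")

texts = ["what is a good recipe for sourdough starter?"]  # hypothetical query
# encode() truncates inputs to model.max_seq_length, in line with the
# "truncate at build time" recommendation above.
embeddings = model.encode(texts)
print(embeddings.shape)  # (1, 768)
```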

Schema

  • text (string) — the query text used for embedding
  • embedding (float32[768]) — the vector representation from Alibaba-NLP/gte-modernbert-base

Split

  • train: 13,028 examples
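
A quick way to load the split and inspect a row is sketched below (the repository id is taken from this page; adjust it if you use a mirror):

```python
# Sketch: load the train split and look at one example.
from datasets import load_dataset

ds = load_dataset("stephantulkens/lotte-query-gte-modernbert-pooled", split="train")
print(ds.num_rows)            # 13028
row = ds[0]
print(row["text"][:80])       # the query text
print(len(row["embedding"]))  # 768
```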

Notes

  • Produced with Alibaba-NLP/gte-modernbert-base from the Hugging Face Hub.
  • If you need a smaller embedding size (e.g., matryoshka/truncated vectors), you can safely slice the embeddings without re-embedding; see the sketch below.
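
For example, a shorter vector can be taken as a prefix of the stored 768-dimensional embedding and re-normalized. This is a sketch with an arbitrary target size; validate on your own task whether the truncated vectors retain enough quality.

```python
# Sketch: truncate stored embeddings to a smaller size and re-normalize.
import numpy as np
from datasets import load_dataset

ds = load_dataset("stephantulkens/lotte-query-gte-modernbert-pooled", split="train")
emb = np.asarray(ds[0]["embedding"], dtype=np.float32)  # full 768-d vector
small = emb[:256]                                        # keep the first 256 dims (example size)
small /= np.linalg.norm(small) + 1e-12                   # unit length for cosine similarity
print(small.shape)  # (256,)
```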

Acknowledgments

Thanks to Mixedbread AI for a GPU grant supporting research into small retrieval models.
