---
license: cc-by-4.0
task_categories:
  - text-retrieval
  - summarization
language:
  - en
tags:
  - legal
  - law
size_categories:
  - n<1K
source_datasets:
  - launch/gov_reports
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: float64
    splits:
      - name: test
        num_examples: 973
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus
        num_examples: 973
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: queries
        num_examples: 970
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/default.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: data/corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: data/queries.jsonl
pretty_name: GovReport (MTEB format)
---

# GovReport (MTEB format)

This is the test split of the GovReport dataset, converted into the Massive Text Embedding Benchmark (MTEB) information retrieval dataset format.

This dataset is intended to facilitate the consistent and reproducible evaluation of information retrieval models on GovReport with the `mteb` embedding model evaluation framework.
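As a minimal sketch of that workflow (the task name `"GovReport"` and the example model below are illustrative assumptions, not confirmed registrations, so check `mteb`'s task registry for the exact name):

```python
import mteb
from sentence_transformers import SentenceTransformer

# Illustrative model choice; any mteb-compatible embedding model works.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Assumption: "GovReport" is the task name registered in mteb for this
# dataset; verify against mteb's task registry before running.
tasks = mteb.get_tasks(tasks=["GovReport"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```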

More specifically, this dataset tests the ability of information retrieval models to retrieve US government reports from their summaries.

This dataset has been processed into the MTEB format by Isaacus, a legal AI research company.

## Methodology 🧪

To understand how GovReport was created, refer to its creators' paper (Huang et al., 2021), cited below.

This dataset was formatted by treating the `summary` column of GovReport as queries (or anchors) and the `document` column as relevant (or positive) passages.
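A minimal sketch of that transformation (the `summary` and `document` column names come from the paragraph above; the source repo ID is taken from this card's metadata, and the id scheme is an illustrative assumption):

```python
from datasets import load_dataset

# Source repo as listed in this card's metadata; verify the exact Hub path.
src = load_dataset("launch/gov_reports", split="test")

corpus, queries, qrels = [], [], []
for i, row in enumerate(src):
    doc_id, query_id = f"doc-{i}", f"query-{i}"  # illustrative id scheme
    corpus.append({"_id": doc_id, "title": "", "text": row["document"]})
    queries.append({"_id": query_id, "text": row["summary"]})
    # Each summary is paired with its own report as the sole relevant passage.
    qrels.append({"query-id": query_id, "corpus-id": doc_id, "score": 1.0})
```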

## Structure 🗂️

As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus` and `queries`.

The `default` split pairs summaries (`query-id`, linked to the `_id` column of the `queries` split) with government reports (`corpus-id`, linked to the `_id` column of the `corpus` split), each pair having a `score` of 1.

The `corpus` split contains government reports, with the text of a report stored in the `text` key and its id stored in the `_id` key.

The `queries` split contains summaries, with the text of a summary stored in the `text` key and its id stored in the `_id` key.
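The splits can be loaded with the 🤗 `datasets` library, for example (a sketch; the repo ID below is assumed from this card's name, so substitute the actual path if it differs):

```python
from datasets import load_dataset

# Repo ID assumed from this card's name; adjust if the namespace differs.
repo = "isaacus/mteb-GovReport"

qrels = load_dataset(repo, "default", split="test")       # query-id, corpus-id, score
corpus = load_dataset(repo, "corpus", split="corpus")     # _id, title, text
queries = load_dataset(repo, "queries", split="queries")  # _id, text

print(qrels[0])  # e.g. {"query-id": "...", "corpus-id": "...", "score": 1.0}
```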

## License 📜

This dataset is licensed under CC BY 4.0.

## Citation 🔖

```bibtex
@inproceedings{huang-etal-2021-efficient,
    title = "Efficient Attentions for Long Document Summarization",
    author = "Huang, Luyang  and
      Cao, Shuyang  and
      Parulian, Nikolaus  and
      Ji, Heng  and
      Wang, Lu",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.naacl-main.112",
    doi = "10.18653/v1/2021.naacl-main.112",
    pages = "1419--1436",
    abstract = "The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization. In this paper, we propose Hepos, a novel efficient encoder-decoder attention with head-wise positional strides to effectively pinpoint salient information from the source. We further conduct a systematic study of existing efficient self-attentions. Combined with Hepos, we are able to process ten times more tokens than existing models that use full attentions. For evaluation, we present a new dataset, GovReport, with significantly longer documents and summaries. Results show that our models produce significantly higher ROUGE scores than competitive comparisons, including new state-of-the-art results on PubMed. Human evaluation also shows that our models generate more informative summaries with fewer unfaithful errors.",
    eprint = "2104.02112"
}
```