The datasets are divided into subsets based on context lengths: 4k, 8k, 16k, and 32k tokens.
<img src="table.png" width="800" />
### Group I: Simple Information Retrieval

- **Passkey**: Extract a relevant code number from a long text fragment. Based on the original [PassKey test](https://github.com/CStanKonrad/long_llama/blob/main/examples/passkey.py) from the LongLLaMA GitHub repository.
- **PasskeyWithLibrusec**: Similar to Passkey but with added noise from Librusec texts.
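
The passkey setup above follows the usual needle-in-a-haystack construction: a random number is embedded at a random position inside filler text, and the model must repeat it. A minimal sketch of how such a sample might be built (an illustration under that assumption, not the benchmark's actual generation code):

```python
import random

FILLER = ("The grass is green. The sky is blue. The sun is yellow. "
          "Here we go. There and back again. ")

def make_passkey_sample(context_len_words: int, seed: int = 0) -> dict:
    """Build one passkey sample: filler text with a passkey sentence
    inserted at a random position (a sketch, not the official generator)."""
    rng = random.Random(seed)
    passkey = rng.randint(10000, 99999)
    needle = f"The pass key is {passkey}. Remember it. {passkey} is the pass key."
    base = FILLER.split()
    filler_words = (base * (context_len_words // len(base) + 1))[:context_len_words]
    pos = rng.randint(0, len(filler_words))
    words = filler_words[:pos] + needle.split() + filler_words[pos:]
    return {
        "context": " ".join(words),
        "question": "What is the pass key?",
        "answer": str(passkey),
    }

sample = make_passkey_sample(4000)
assert sample["answer"] in sample["context"]
```

The PasskeyWithLibrusec variant replaces the repetitive filler with natural-language noise, which makes the retrieval harder to solve by surface cues.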
### Group II: Question Answering and Multiple Choice
- **MatreshkaNames**: Identify the person in dialogues based on the discussed topic. We used the [Matreshka](https://huggingface.co/datasets/zjkarina/matreshka) and [Russian Names](https://www.kaggle.com/datasets/rai220/russian-cyrillic-names-and-sex/data) datasets to create this task and the next one.
- **MatreshkaYesNo**: Indicate whether a specific topic was mentioned in the dialogue.
- **LibrusecHistory**: Answer questions based on historical texts.
- **ruTREC**: Few-shot in-context learning for topic classification. Created by translating the [TREC dataset](https://huggingface.co/datasets/THUDM/LongBench/viewer/trec_e) from LongBench.
- **ruSciFi**: Answer true/false questions based on context and general world knowledge. Translation of the [SciFi dataset](https://huggingface.co/datasets/L4NLP/LEval/viewer/sci_f) from L-Eval, which was originally based on [SF-Gram](https://github.com/nschaetti/SFGram-dataset).
- **ruSciAbstractRetrieval**: Retrieve relevant paragraphs from scientific abstracts.
- **ruTPO**: Multiple-choice questions similar to TOEFL exams. Translation of the [TPO dataset](https://huggingface.co/datasets/L4NLP/LEval/viewer/tpo) from L-Eval.
- **ruQuALITY**: Multiple-choice QA tasks based on detailed texts. Created by translating the [QuALITY dataset](https://huggingface.co/datasets/L4NLP/LEval/viewer/quality) from L-Eval.
### Group III: Multi-hop Question Answering
- **ruBABILongQA**: Five long-context reasoning tasks for QA using facts hidden among irrelevant information.
- **LongContextMultiQ**: Multi-hop QA based on Wikidata and Wikipedia.
- **LibrusecMHQA**: Multi-hop QA requiring information distributed across several text parts.
- **ru2WikiMultihopQA**: Translation of the [2WikiMultihopQA dataset](https://huggingface.co/datasets/THUDM/LongBench/viewer/2wikimqa_e) from LongBench.
### Group IV: Complex Reasoning and Mathematical Problems
- **ruSciPassageCount**: Count unique paragraphs in a long text.
- **ruQasper**: Question answering over academic research papers. Created by translating the [Qasper dataset](https://huggingface.co/datasets/THUDM/LongBench/viewer/qasper_e) from LongBench.
- **ruGSM100**: Solve math problems using Chain-of-Thought reasoning. Created by translating the [GSM100](https://huggingface.co/datasets/L4NLP/LEval/viewer/gsm100) dataset from L-Eval.
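
For GSM-style math tasks, scoring conventionally extracts the final number from the model's chain-of-thought output and compares it with the reference. A minimal sketch of that step (our assumption for illustration, not the benchmark's exact scoring code):

```python
import re

def extract_final_number(cot_answer: str):
    """Return the last number in a chain-of-thought response,
    conventionally taken as the final answer (None if no number found)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", cot_answer.replace(",", ""))
    return numbers[-1] if numbers else None

print(extract_final_number("5 + 7 = 12, then 12 * 2 = 24. The answer is 24."))  # 24
```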
## Usage
Researchers and developers can use these datasets to evaluate the long-context understanding abilities of various LLMs. The datasets, codebase, and public leaderboard are open-sourced to guide future research in this area.
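
As a minimal illustration of how an evaluation loop might consume these datasets (the field names below and the commented-out loading step are assumptions, not the official API; see the GitHub repository for the actual loaders and per-task metrics), a simple exact-match scorer:

```python
# Hypothetical sketch: the sample fields ("context", "question", "answer")
# and the repository id in the commented line are assumptions.
# from datasets import load_dataset
# subset = load_dataset("ai-forever/LIBRA", "passkey", split="test")  # hypothetical id

def exact_match(prediction: str, reference: str) -> bool:
    """Case- and whitespace-insensitive exact match."""
    return prediction.strip().lower() == reference.strip().lower()

def score(samples, generate) -> float:
    """Fraction of samples where the model's answer matches the reference.
    `generate` is any callable mapping a prompt string to an answer string."""
    hits = sum(
        exact_match(generate(f"{s['context']}\n\n{s['question']}"), s["answer"])
        for s in samples
    )
    return hits / len(samples)

# Toy usage with a stub "model":
samples = [{"context": "The pass key is 42.",
            "question": "What is the pass key?", "answer": "42"}]
print(score(samples, lambda prompt: "42"))  # 1.0
```

Exact match is only one of the metrics used across the groups; multiple-choice and QA tasks typically use task-specific scoring.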
## Citation
The datasets are published under the MIT license.
## GitHub
For more details and code, please visit our [GitHub repository](https://github.com/ai-forever/LIBRA/).