---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- bioasq
task_categories:
- question-answering
- sentence-similarity
tags:
- biomedical
- rag
- pubmed
- bioasq
- biomedical-qa
library_name: huggingface
pretty_name: BioASQ 12B RAG Dataset
---

# BioASQ 12B RAG Dataset

A processed version of the BioASQ 12B dataset optimized for Retrieval-Augmented Generation (RAG) applications in biomedical question answering.

This dataset provides a structured collection of biomedical questions paired with relevant PubMed abstracts and gold standard answers. It is specifically formatted for RAG pipelines, making it ideal for training and evaluating systems that need to retrieve relevant biomedical information from a corpus and generate accurate, evidence-based answers to complex biomedical questions.

## Dataset Structure

The dataset contains three main components:

1. **Corpus** (`data/corpus.jsonl`): A collection of PubMed abstracts with metadata.

   - Each line is a JSON object containing:
     - `id`: PubMed ID
     - `title`: Title of the paper
     - `text`: Abstract text
     - `url`: PubMed URL
     - `publication_date`: Publication date
     - `journal`: Journal name
     - `authors`: List of authors
     - `doi`: Digital Object Identifier (if available)
     - `keywords`: Keywords
     - `mesh_terms`: MeSH terms

2. **Dev Questions** (`data/dev.jsonl`): Development set of biomedical questions.

   - Each line is a JSON object containing:
     - `question_id`: Unique identifier for the question
     - `question`: The question text
     - `answer`: Ideal answer
     - `relevant_passage_ids`: List of PubMed IDs for relevant abstracts
     - `type`: Question type (e.g., factoid, list, yes/no, summary)
     - `snippets`: Relevant snippets from abstracts

3. **Test Questions** (`data/test.jsonl`): Test set of biomedical questions.

   - Same structure as the dev questions
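
As a concrete illustration, the corpus file can be read line by line and indexed by PubMed ID. The record below is a hypothetical example that follows the fields listed above; its values are illustrative, not actual entries from the dataset:

```python
import json

# Hypothetical corpus record matching the documented schema
# (values are illustrative, not taken from the real dataset).
sample_line = json.dumps({
    "id": "12345678",
    "title": "An example paper title",
    "text": "An example abstract.",
    "url": "https://pubmed.ncbi.nlm.nih.gov/12345678/",
    "publication_date": "2020-01-01",
    "journal": "Example Journal",
    "authors": ["A. Author", "B. Author"],
    "doi": "10.1000/example",
    "keywords": ["example"],
    "mesh_terms": ["Humans"],
})

def load_corpus(lines):
    """Build a PubMed-ID -> record lookup from JSONL lines."""
    corpus = {}
    for line in lines:
        record = json.loads(line)
        corpus[record["id"]] = record
    return corpus

corpus = load_corpus([sample_line])
print(corpus["12345678"]["title"])  # An example paper title
```

An ID-keyed lookup like this pairs naturally with the `relevant_passage_ids` field of the question files.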

## Usage

This dataset is designed for training and evaluating RAG systems for biomedical question answering.

### Loading the Dataset

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("mattmorgis/bioasq-12b-rag-dataset")

# Access the corpus
corpus = dataset["corpus"]

# Access the development questions
dev_questions = dataset["dev"]

# Access the test questions
test_questions = dataset["test"]
```


### Example RAG Application

This dataset can be used to build a biomedical RAG system:

1. Index the corpus using a vector database (e.g., FAISS, Chroma)
2. Embed questions using a biomedical or general-purpose text embedding model
3. Retrieve relevant documents from the corpus based on question embeddings
4. Generate answers using a large language model (LLM) with the retrieved context
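
The retrieval stage (steps 2–3) can be sketched as follows. This is a minimal, self-contained illustration that substitutes a toy bag-of-words similarity for a real embedding model and vector database; the document records are hypothetical:

```python
import math
from collections import Counter

# Toy "embedding": lowercase term frequencies. A real system would use a
# learned biomedical embedding model and a vector database instead.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, corpus, k=2):
    """Return the top-k corpus records ranked by similarity to the question."""
    q = embed(question)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return ranked[:k]

# Hypothetical corpus records following the documented schema.
corpus = [
    {"id": "1", "text": "Aspirin inhibits platelet aggregation."},
    {"id": "2", "text": "Insulin regulates blood glucose levels."},
]
top = retrieve("How does aspirin affect platelets?", corpus, k=1)
print(top[0]["id"])  # 1
```

In step 4, the `text` of the retrieved records would be passed as context to the LLM prompt.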

### Evaluation

The dataset provides gold standard answers and relevant passage IDs that can be used to evaluate:

- Retrieval accuracy
- Answer quality
- Domain-specific knowledge incorporation
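
Retrieval accuracy, for example, can be scored as recall@k against the gold `relevant_passage_ids` field. A minimal sketch (the ID lists here are illustrative):

```python
def recall_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of gold-relevant passages found in the top-k retrieved IDs."""
    if not relevant_ids:
        return 0.0
    hits = len(set(retrieved_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)

# One of the two relevant passages appears in the top 2 retrieved IDs.
score = recall_at_k(["111", "222", "333"], ["222", "999"], k=2)
print(score)  # 0.5
```

Averaging this score over all questions in `dev.jsonl` gives a simple retrieval benchmark.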

## Source

This dataset is derived from the [BioASQ Challenge](http://bioasq.org/) Task 12b dataset.

Anastasia Krithara, Anastasios Nentidis, Konstantinos Bougiatiotis, Georgios Paliouras. BioASQ-QA: A manually curated corpus for Biomedical Question Answering. bioRxiv 2022.12.14.520213; doi: https://doi.org/10.1101/2022.12.14.520213