---
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- conversational
pretty_name: Doctor & Patient
dataset_info:
features:
- name: prompt
dtype: string
- name: input_ids
sequence: int32
- name: length
dtype: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 42127351.778204426
num_examples: 13125
- name: test
num_bytes: 10534245.221795576
num_examples: 3282
download_size: 10917910
dataset_size: 52661597.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- biology
- medical
---
### Dataset
This is an edited and tokenized version of keivalya's MedQuad-MedicalQnADataset.
The original dataset contains 16K+ question-and-answer pairs between patients and doctors, which were converted into full prompts for training Microsoft's BioGPT.
##### Tokenizer used
microsoft/BioGPT-Large (BPE tokenizer)
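For reference, the tokenizer can be loaded through `transformers`. This is a minimal sketch, not the exact preprocessing script used for this dataset, and it needs `sacremoses` installed:
```py
from transformers import AutoTokenizer

# Resolves to BioGptTokenizer, a BPE tokenizer (requires `sacremoses`).
tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")

encoded = tokenizer("Diabetes is a chronic condition.")
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```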
### Full prompt
```py
prompt = f"""You are a helpful AI Doctor who answers medical questions. Below is a question from a patient. Your task is to answer the questions as truthfully as you can.
### Patient:
{sample['Question']}
### Doctor:
{sample['Answer']}"""
```
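For illustration, each row could be mapped to the stored features (`prompt`, `input_ids`, `attention_mask`, `length`) roughly as follows. This is a sketch that assumes the original `Question`/`Answer` columns from MedQuad, not the exact script that produced this dataset:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")

def build_features(sample: dict) -> dict:
    # Build the full prompt from the original MedQuad columns.
    prompt = f"""You are a helpful AI Doctor who answers medical questions. Below is a question from a patient. Your task is to answer the questions as truthfully as you can.
### Patient:
{sample['Question']}
### Doctor:
{sample['Answer']}"""
    # Tokenize and keep the fields stored in this dataset.
    # (Truncation to BioGPT's 1,024-token limit is discussed in the Notes below.)
    encoded = tokenizer(prompt)
    return {
        "prompt": prompt,
        "input_ids": encoded["input_ids"],
        "attention_mask": encoded["attention_mask"],
        "length": len(encoded["input_ids"]),
    }
```
With `datasets`, a function like this can be applied to every row via `Dataset.map(build_features)`.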
### Notes
Since BioGPT has a maximum input length of 1,024 tokens, the full prompt was truncated to stay below this limit.
The truncation strategy I used ensures that only complete sentences are kept.
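As a rough illustration of that sentence-level idea (an assumption about the approach, not the code actually used), trailing sentences can be dropped until the tokenized prompt fits the limit:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")
MAX_TOKENS = 1024

def truncate_to_full_sentences(prompt: str) -> str:
    # Naive split on ". "; a real script might use a proper sentence splitter.
    sentences = prompt.split(". ")
    # Drop sentences from the end until the tokenized prompt fits the limit.
    while len(sentences) > 1 and len(tokenizer(". ".join(sentences))["input_ids"]) > MAX_TOKENS:
        sentences.pop()
    return ". ".join(sentences)
```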
Please note that this dataset is for research and testing only; it should not be used in a real clinical setting or to give medical advice.