---
language: en
tags:
- summarization
- bart
- medical question answering
- medical question understanding
- consumer health question
- prompt engineering
- LLM
license: apache-2.0
datasets:
- bigbio/meqsum
widget:
- text: '
SUBJECT: high inner eye pressure above 21 possible glaucoma
MESSAGE: have seen inner eye pressure increase as I have begin taking
Rizatriptan. I understand the med narrows blood vessels. Can this med. cause
or effect the closed or wide angle issues with the eyelense/glacoma.'
model-index:
- name: medqsum-bart-large-xsum-meqsum
results:
- task:
type: summarization
name: Summarization
dataset:
name: 'Dataset for medical question summarization'
type: bigbio/meqsum
split: valid
metrics:
    - type: rouge-1
      value: 54.32
      name: Validation ROUGE-1
    - type: rouge-2
      value: 38.08
      name: Validation ROUGE-2
    - type: rouge-l
      value: 51.98
      name: Validation ROUGE-L
    - type: rouge-l-sum
      value: 51.99
      name: Validation ROUGE-L-SUM
library_name: transformers
---
[MedQSum](https://github.com/zekaouinoureddine/MedQSum)
## MedQSum
<a href="https://github.com/zekaouinoureddine/MedQSum">
<img src="https://raw.githubusercontent.com/zekaouinoureddine/MedQSum/master/assets/models.png" alt="drawing" width="600"/>
</a>
## TL;DR
**medqsum-bart-large-xsum-meqsum** is the best fine-tuned model from the paper [Enhancing Large Language Models' Utility for Medical Question-Answering: A Patient Health Question Summarization Approach](https://doi.org/10.1109/SITA60746.2023.10373720), which introduces a solution for getting the most out of LLMs when answering health-related questions. We address the challenge of crafting accurate prompts by summarizing consumer health questions (CHQs) into clear and concise medical questions. Our approach involves fine-tuning Transformer-based models, including Flan-T5 in resource-constrained environments, on three medical question summarization datasets.
## Hyperparameters
```json
{
"dataset_name": "MeQSum",
"learning_rate": 3e-05,
"model_name_or_path": "facebook/bart-large-xsum",
"num_train_epochs": 4,
"per_device_eval_batch_size": 4,
"per_device_train_batch_size": 4,
  "predict_with_generate": true
}
```
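For reference, these hyperparameters map onto Hugging Face's `Seq2SeqTrainingArguments` roughly as sketched below. This is a hedged approximation, not the paper's exact training script; the `output_dir` value is an assumed placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration above, expressed as
# Seq2SeqTrainingArguments (output_dir is an assumed placeholder).
training_args = Seq2SeqTrainingArguments(
    output_dir="./medqsum-bart-large-xsum-meqsum",
    learning_rate=3e-5,
    num_train_epochs=4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    predict_with_generate=True,  # decode with generate() during evaluation
)
```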
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="NouRed/medqsum-bart-large-xsum-meqsum")
chq = '''SUBJECT: high inner eye pressure above 21 possible glaucoma
MESSAGE: have seen inner eye pressure increase as I have begin taking
Rizatriptan. I understand the med narrows blood vessels. Can this med.
cause or effect the closed or wide angle issues with the eyelense/glacoma.
'''
summarizer(chq)
```
## Results
| key | value |
| --- | ----- |
| eval_rouge1 | 54.32 |
| eval_rouge2 | 38.08 |
| eval_rougeL | 51.98 |
| eval_rougeLsum | 51.99 |
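The scores above were produced with the standard ROUGE tooling during evaluation. As a rough intuition for what ROUGE-1 measures, the toy function below computes unigram-overlap F1 between a candidate summary and a reference; it deliberately ignores stemming and the other options real ROUGE implementations apply, so its numbers are only illustrative.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1: F1 over overlapping unigrams (no stemming)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 3 of 4 unigrams overlap in both directions, so P = R = F1 = 0.75
print(rouge1_f1("can rizatriptan cause glaucoma",
                "does rizatriptan cause glaucoma"))  # 0.75
```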
## Cite This
```
@INPROCEEDINGS{10373720,
author={Zekaoui, Nour Eddine and Yousfi, Siham and Mikram, Mounia and Rhanoui, Maryem},
booktitle={2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA)},
title={Enhancing Large Language Models’ Utility for Medical Question-Answering: A Patient Health Question Summarization Approach},
year={2023},
volume={},
number={},
pages={1-8},
doi={10.1109/SITA60746.2023.10373720}}
```