---
library_name: transformers
base_model: medGemma4B
tags:
- medical
- medical-coding
- icd10
- cpt
- hcpcs
- healthcare
- clinical
- fine-tuned
- peft
- lora
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# medgemma-4b-medical-coding
## Model Description
This is a **LoRA adapter** fine-tuned on **medGemma4B** for medical coding tasks. The model is specifically designed to:
- Extract diseases and medical conditions from discharge summaries
- Identify medical procedures and interventions
- Assign appropriate medical codes (ICD-10, CPT, HCPCS)
- Process clinical documentation and return structured JSON output
**Base Model:** `medGemma4B`
**Fine-tuning Method:** LoRA (Low-Rank Adaptation)
## Training Details
### LoRA Configuration
- **Rank (r):** 16
- **Alpha:** 32
- **Dropout:** 0.1
- **Target Modules:** v_proj, up_proj, o_proj, gate_proj, k_proj, down_proj, q_proj
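These hyperparameters map directly onto the arguments of peft's `LoraConfig`. A minimal sketch of the configuration above (field names follow the peft API; `task_type` is an assumption based on the causal-LM pipeline, not stated in the training logs):

```python
# LoRA hyperparameters from the card, as keyword arguments for peft.LoraConfig
lora_kwargs = dict(
    r=16,                   # rank of the low-rank update matrices
    lora_alpha=32,          # scaling factor (effective scale = alpha / r = 2.0)
    lora_dropout=0.1,       # dropout applied to the LoRA input activations
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
    task_type="CAUSAL_LM",  # assumption: text-generation fine-tuning
)
# e.g. config = peft.LoraConfig(**lora_kwargs)
```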
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("TachyHealthResearch/medgemma-4b-medical-coding")
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "medGemma4B",  # path or Hub ID of the base model the adapter was trained on
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "TachyHealthResearch/medgemma-4b-medical-coding")
```
### Example Usage
```python
# Define the system prompt for medical coding
system_prompt = """You are an expert medical coding specialist.
Analyze the discharge summary to extract diseases, procedures, and assign appropriate medical codes.
Return the response in JSON format with this structure:
{"diseases": ["disease1", "disease2"], "icd10_codes": ["code1", "code2"],
"procedures": ["procedure1", "procedure2"], "cpt_codes": ["code1", "code2"],
"hcpcs_codes": ["code1", "code2"]}"""
# Example discharge summary
discharge_summary = """
Patient admitted with chest pain and shortness of breath.
Diagnosed with acute myocardial infarction and congestive heart failure.
Underwent percutaneous coronary intervention with stent placement.
"""
# Prepare the conversation
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Please analyze this discharge summary:\n\n{discharge_summary}"},
]
# Apply chat template and generate
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        temperature=0.1,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
# Decode only the newly generated tokens; slicing the decoded string by
# len(text) is unreliable because skip_special_tokens drops prompt tokens
generated_ids = outputs[0][inputs["input_ids"].shape[-1]:]
generated_response = tokenizer.decode(generated_ids, skip_special_tokens=True).strip()
print("Generated Medical Codes:")
print(generated_response)
```
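Because the system prompt requests JSON, the generated text can be parsed into a Python dict for downstream use. A minimal sketch (the `generated_response` string here is a hypothetical example of what the model might return for the summary above, not verified output):

```python
import json
import re

# Hypothetical model output for the discharge summary above
generated_response = (
    '{"diseases": ["acute myocardial infarction", "congestive heart failure"], '
    '"icd10_codes": ["I21.9", "I50.9"], '
    '"procedures": ["percutaneous coronary intervention with stent placement"], '
    '"cpt_codes": ["92928"], '
    '"hcpcs_codes": []}'
)

# Grab the first {...} span in case the model wraps the JSON in extra text
match = re.search(r"\{.*\}", generated_response, re.DOTALL)
codes = json.loads(match.group(0)) if match else {}
print(codes.get("icd10_codes", []))
```

In production, the `json.loads` call should be wrapped in a try/except, since the model is not guaranteed to emit valid JSON.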
## Model Performance
This model was fine-tuned specifically for medical coding tasks and targets the following capabilities:
- Disease extraction from clinical text
- Medical procedure identification
- Medical code assignment (ICD-10, CPT, HCPCS)
- Structured JSON response generation
## Intended Use
### Primary Use Cases
- Medical coding automation
- Clinical documentation analysis
- Healthcare data processing
### Limitations
- Generated codes must be verified by qualified medical coding professionals before use
- Performance may degrade on clinical documents that differ significantly from the training data
- Intended only for use in appropriate, supervised healthcare environments
## License
This model is released under the Apache 2.0 License.
## Citation
If you use this model in your research or applications, please cite:
```bibtex
@misc{TachyHealthResearch_medgemma_4b_medical_coding_2024,
  title     = {medgemma-4b-medical-coding: Medical Coding Model},
  author    = {TachyHealthResearch},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/TachyHealthResearch/medgemma-4b-medical-coding}
}
```
---
**Important**: This model is intended for research and healthcare applications. Always ensure proper validation and human oversight when using AI models in medical contexts.