---
library_name: transformers
tags:
- medical
datasets:
- stefan-m-lenz/ICDOPS-QA-2024
language:
- de
base_model:
- Qwen/Qwen2.5-7B-Instruct-1M
---

# Model Card for stefan-m-lenz/Qwen-2.5-7B-ICDOPS-QA-2024

This model is a PEFT adapter (LoRA) fine-tuned on the [ICDOPS-QA-2024](https://huggingface.co/datasets/stefan-m-lenz/ICDOPS-QA-2024) dataset, using [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) as the base model.

For more information about the training, see the [dataset card](https://huggingface.co/datasets/stefan-m-lenz/ICDOPS-QA-2024).

# Usage

Package prerequisites:

```shell
pip install transformers accelerate peft
```

Load the model:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "stefan-m-lenz/Qwen-2.5-7B-ICDOPS-QA-2024"

# Read the adapter config to find the base model, then load base model and adapter
config = PeftConfig.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, device_map="auto")
model = PeftModel.from_pretrained(model, repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```

```python
# Test input
test_input = """Was ist der ICD-10-Code für die Tumordiagnose „Bronchialkarzinom, Hauptbronchus“? Antworte nur kurz mit dem ICD-10 Code."""

# Generate the response deterministically (greedy decoding)
inputs = tokenizer(test_input, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=7,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
    temperature=None,
    top_p=None,
    top_k=None,
)

# The decoded sequence contains the prompt followed by the generated text,
# so strip the prompt prefix to isolate the answer.
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = response[len(test_input):].strip()

print("Test Input:", test_input)
print("Model Response:", response)
```
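
The prompt-stripping at the end of the snippet works because `tokenizer.decode` on the full output sequence returns the prompt text concatenated with the generated continuation. A minimal pure-string sketch of that step (no model required; the answer string is an illustrative placeholder, not actual model output):

```python
# Placeholder strings: decoding the full generated sequence yields
# the prompt and the continuation concatenated into one string.
prompt = "Antworte nur kurz mit dem ICD-10 Code."
decoded = prompt + " C34.0"  # stands in for tokenizer.decode(outputs[0], skip_special_tokens=True)

# Slice off the prompt and trim surrounding whitespace to isolate the answer.
answer = decoded[len(prompt):].strip()
print(answer)  # C34.0
```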