Model Card for Turkish-Medical-R1

Model Details

This model is a fine-tuned version of Qwen2.5-1.5B-Instruct for medical reasoning in Turkish. It was trained on the ituperceptron/turkish_medical_reasoning dataset, which contains instruction-tuned examples focused on clinical reasoning, diagnosis, patient care, and medical decision-making.

Model Description

  • Developed by: Rustam Shiriyev
  • Language(s) (NLP): Turkish
  • License: MIT
  • Finetuned from model: unsloth/Qwen2.5-1.5B-Instruct

Uses

Direct Use

  • Medical Q&A in Turkish
  • Clinical reasoning tasks (educational or non-diagnostic)
  • Research on medical domain adaptation and multilingual LLMs

Out-of-Scope Use

This model is intended for research and educational purposes only. It should not be used for real-world medical decision-making or patient care.

How to Get Started with the Model

Use the code below to get started with the model.


from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Log in only if your environment requires authentication (paste a Hugging Face token here).
login(token="")

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    device_map={"": 0},
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Rustamshry/Turkish-Medical-R1")


question = "Medüller tiroid karsinomu örneklerinin elektron mikroskopisinde gözlemlenen spesifik özellik nedir?"

prompt = (
    "### Talimat:\n"
    "Siz tıp alanında uzmanlaşmış bir yapay zeka asistanısınız. Gelen soruları yalnızca Türkçe olarak, "
    "açıklayıcı bir şekilde yanıtlayın.\n\n"
    f"### Soru:\n{question.strip()}\n\n"
    "### Cevap:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
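
Decoding outputs[0] directly prints the prompt together with the answer. The sketch below trims the prompt tokens so only the generated answer is shown; it assumes standard causal generation, where the prompt tokens appear first in the output sequence.

# Decode only the newly generated tokens (a sketch; assumes the prompt
# tokens come first in outputs[0], as in standard causal generation).
prompt_length = inputs["input_ids"].shape[1]
answer = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(answer)

If you prefer to serve the model without the PEFT wrapper, model.merge_and_unload() returns the base model with the adapter weights merged in; this is optional and only affects inference convenience.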

Training Data

  • Dataset: ituperceptron/turkish_medical_reasoning, a Turkish translation of FreedomIntelligence/medical-o1-reasoning-SFT (~7K examples)
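
If you want to inspect the training data, it can be loaded with the datasets library. This is a minimal sketch; the split name and field names are assumptions, so check the dataset card for the actual schema.

from datasets import load_dataset

# Minimal sketch for inspecting the training data; the "train" split name
# is an assumption — see the dataset card for the actual schema.
ds = load_dataset("ituperceptron/turkish_medical_reasoning", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # first example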

Evaluation

No formal quantitative evaluation yet.

Framework versions

  • PEFT 0.15.2