# Model Card: QwenMedic-v1

## Overview
QwenMedic-v1 is a medical-specialty adaptation of the Qwen3-1.7B causal language model, fine-tuned for clinical reasoning and instruction-following tasks. It was trained for 1 epoch on two curated medical datasets to improve diagnostic Q&A and clinical summarization.
## Base Model
- Architecture: Qwen3-1.7B (28 layers, 16 Q / 8 KV attention heads, 32,768-token context)
- Parameters: 1.7 billion
- Quantization: float16 and int4 supported (see the 4-bit loading sketch below)
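Since int4 is listed as supported, a 4-bit load can be sketched with `bitsandbytes`. This is a minimal, hypothetical example: the repository id mirrors the inference example further down, and the quantization settings are assumptions rather than a published configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative int4 load via bitsandbytes; the repo id and compute dtype are
# assumptions, not the model's official quantization recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/QwenMedic-v1",
    quantization_config=bnb_config,
    device_map="auto",
)
```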
## Fine-Tuning Data

**Medical Reasoning SFT** (`FreedomIntelligence/medical-o1-reasoning-SFT`)
- Chain-of-thought reasoning examples on verifiable medical problems
- Language: English
- Split used: `train`
**General Medical Instruction** (`jtatman/medical-sci-instruct-1m-sharegpt`)
- Conversational Q&A prompts across medical topics
- Sampled: first 100,000 examples via `train[:100000]` (see the loading sketch below)
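For reference, both datasets can be pulled with the Hugging Face `datasets` library. A minimal sketch; the `"en"` config name for the reasoning dataset is an assumption, so check the dataset card before running.

```python
from datasets import load_dataset

# Chain-of-thought medical reasoning examples; the "en" config name is assumed,
# verify it against the dataset card.
reasoning_ds = load_dataset(
    "FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train"
)

# First 100,000 conversational medical Q&A examples, matching train[:100000].
instruct_ds = load_dataset(
    "jtatman/medical-sci-instruct-1m-sharegpt", split="train[:100000]"
)

print(len(reasoning_ds), len(instruct_ds))
```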
## Training Configuration
- Framework: PyTorch + Hugging Face Transformers (a configuration sketch follows this list)
- Optimizer: AdamW
- Learning Rate: 2 × 10⁻⁵
- Batch Size: 16 (with gradient accumulation)
- Precision: bfloat16 mixed precision
- Hardware: NVIDIA RTX 3090 (24 GB)
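The settings above map roughly onto the Hugging Face `Trainer` API. The sketch below is a hypothetical reconstruction: the per-device batch size / accumulation split, `output_dir`, and logging cadence are illustrative assumptions, not the exact values used.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported setup (1 epoch, AdamW, lr 2e-5,
# effective batch size 16, bfloat16 mixed precision on a single RTX 3090).
training_args = TrainingArguments(
    output_dir="qwenmedic-v1",            # placeholder path
    num_train_epochs=1,
    learning_rate=2e-5,
    optim="adamw_torch",
    per_device_train_batch_size=4,        # 4 x 4 accumulation = effective batch of 16 (assumed split)
    gradient_accumulation_steps=4,
    bf16=True,                            # bfloat16 mixed precision
    logging_steps=50,
)
```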
## Intended Uses
- Clinical question answering & differential diagnosis
- Summarization of patient notes
- Medical education & decision support
## Limitations & Risks
- May produce hallucinations or plausible-sounding but incorrect advice
- Biases due to training-data coverage
- Not FDA-approved—should not replace professional medical judgment
- Avoid feeding patient-identifiable data without proper de-identification
## Summary of Final Training Metrics

| Metric | Step | Smoothed | Raw Value |
|---|---|---|---|
| Epoch | 1539 | 0.9979 | 0.9997 |
| Gradient Norm | 1539 | 0.3882 | 0.3974 |
| Learning Rate | 1539 | — | 0 |
| Training Loss | 1539 | 1.5216 | 1.4703 |
## Inference Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwenMedic-v1"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = (
    "A 55-year-old male with Type 2 diabetes presents with sudden chest pain "
    "and diaphoresis. What are the top differential diagnoses?"
)
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # switches between thinking and non-thinking modes (default: True)
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# split off the thinking content by locating the last </think> token (id 151668)
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
## Contact
- Creator: Andre Ross
- Company: Ross Technologies
- Email: [email protected]