
# 🧠 Qwen3-1.7B-MedicalDataset

A fine-tuned version of Qwen3-1.7B on a curated medical dataset, designed to assist with medical Q&A, clinical documentation, and healthcare-related reasoning. Developed by XformAI-India.


## 📌 Model Details

- **Base Model:** Qwen3-1.7B (by the Qwen team at Alibaba)
- **Architecture:** decoder-only Transformer (GPT-like)
- **Parameters:** 1.7 billion
- **Precision:** bfloat16 / float16
- **Fine-tuning method:** supervised fine-tuning (SFT)
- **Dataset:** FreedomIntelligence/medical-o1-reasoning-SFT

## 🧪 Intended Use

This model is intended for research and educational purposes in the following domains:

- Medical question answering
- Summarizing patient records
- Medical reasoning and triage
- Chatbot integration for healthcare support (non-diagnostic)

## 🚫 Limitations & Disclaimer

⚠️ **Not for clinical use.** This model is not a replacement for professional medical advice, diagnosis, or treatment.

- May hallucinate or produce incorrect medical information.
- Trained only on publicly available or synthetic datasets, not real patient data.
- Should not be used in emergency or high-stakes settings.

## 💡 Example Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "XformAI-india/Qwen3-1.7B-medicaldataset"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Move the model and inputs to the same device (GPU if available).
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

prompt = "Patient presents with chest pain and shortness of breath. What are possible differential diagnoses?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

πŸ— Training Configuration

  • Epochs: 3
  • Batch Size: 8
  • Optimizer: AdamW
  • Learning Rate: 2e-5
  • Gradient Accumulation: 4
  • Framework: PyTorch, Hugging Face Transformers
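The hyperparameters above map onto a Hugging Face `TrainingArguments` setup roughly as follows. This is a sketch for orientation, not the actual training script; the output directory and logging settings are placeholders:

```python
from transformers import TrainingArguments

# Sketch of the reported configuration; output_dir and logging_steps are
# illustrative, not taken from the real training run.
training_args = TrainingArguments(
    output_dir="qwen3-1.7b-medical-sft",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,        # effective batch size: 8 * 4 = 32 per device
    learning_rate=2e-5,
    optim="adamw_torch",                  # AdamW, as listed above
    bf16=True,                            # matches the bfloat16 precision listed above
    logging_steps=10,                     # illustrative
)
```

With gradient accumulation of 4 over a per-device batch of 8, each optimizer step sees an effective batch of 32 examples per device.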

## 🧠 Citation

If you use this model, please cite:

```bibtex
@misc{qwen3medical2025,
  title={Qwen3-1.7B-MedicalDataset: A Fine-Tuned Transformer for Medical AI Research},
  author={XformAI-India},
  year={2025},
  url={https://huggingface.co/XformAI-india/Qwen3-1.7B-medicaldataset}
}
```
