Qwen2.5-1.5B-Instruct Fine-tuned Model
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct using LoRA (Low-Rank Adaptation).
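A minimal usage sketch with transformers and peft is shown below. The adapter repo ID is a placeholder assumption (the card does not state it); loading the base model and applying the adapter follows the standard PEFT pattern.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_id = "your-username/qwen2.5-1.5b-instruct-lora"  # placeholder, replace with this repo's ID

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

messages = [{"role": "user", "content": "Summarize LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```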
Training Details
- Trained for 2 epochs on a custom dataset
- Used 4-bit quantization for memory-efficient training
- Used the LoRA+ technique with a learning-rate ratio of 16.0
- Trained with a per-device batch size of 1 and gradient accumulation steps of 12 (see the configuration sketch below)
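A minimal sketch of how such a training setup might look with transformers, peft, and bitsandbytes. The LoRA rank, alpha, target modules, learning rate, dataset, and output path are illustrative assumptions not stated in this card, and the LoRA+ ratio is applied here via manual optimizer parameter groups rather than a dedicated library.

```python
import torch
from transformers import (AutoModelForCausalLM, BitsAndBytesConfig,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base_id = "Qwen/Qwen2.5-1.5B-Instruct"

# 4-bit quantization for memory-efficient (QLoRA-style) training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapter config (rank, alpha, and target modules are assumptions)
lora_config = LoraConfig(
    task_type="CAUSAL_LM", r=8, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)

# LoRA+ trains the B matrices with a higher learning rate than the A matrices.
# A ratio of 16.0 means lr(B) = 16 * lr(A); done here with optimizer param groups.
base_lr, loraplus_ratio = 1e-4, 16.0  # base_lr is an assumption
a_params = [p for n, p in model.named_parameters() if "lora_A" in n and p.requires_grad]
b_params = [p for n, p in model.named_parameters() if "lora_B" in n and p.requires_grad]
optimizer = torch.optim.AdamW([
    {"params": a_params, "lr": base_lr},
    {"params": b_params, "lr": base_lr * loraplus_ratio},
])

args = TrainingArguments(
    output_dir="qwen2.5-1.5b-instruct-lora",  # placeholder
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=12,
    bf16=True,
)
# trainer = Trainer(model=model, args=args, train_dataset=...,  # supply your dataset
#                   optimizers=(optimizer, None))
# trainer.train()
```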