summary_tags_qwen2-vl-2b

This is a fine-tuned model based on Qwen/Qwen2-VL-2B. The LoRA adapters have been merged into the base weights, producing a standalone model that can be loaded without separate adapter files.

Model Description

This vision-language model was fine-tuned with LoRA using LLaMA-Factory, and the adapters were then merged into the base model. It specializes in email search and related tasks.
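The merge itself follows the standard LoRA-merging pattern: load the base model, attach the adapters, and fold their deltas into the weights. A minimal sketch using peft's merge_and_unload (the adapter path and output directory are hypothetical; LLaMA-Factory's export step performs the equivalent operation):

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import PeftModel

base_id = "Qwen/Qwen2-VL-2B"
adapter_dir = "path/to/lora-adapters"  # hypothetical adapter checkpoint

# Load the base model and attach the LoRA adapters.
model = Qwen2VLForConditionalGeneration.from_pretrained(base_id, torch_dtype="bfloat16")
model = PeftModel.from_pretrained(model, adapter_dir)

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
model = model.merge_and_unload()

# Save the standalone merged model alongside the processor.
model.save_pretrained("summary_tags_qwen2-vl-2b")
AutoProcessor.from_pretrained(base_id).save_pretrained("summary_tags_qwen2-vl-2b")
```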

Model Details

  • Base Model: Qwen/Qwen2-VL-2B
  • Model Size: 2.21B parameters
  • Tensor Type: BF16 (safetensors)
  • Architecture: Qwen2-VL
  • Training Method: LoRA fine-tuning + model merging
  • Dataset: tags_and_summary
  • Use Case: Email search and analysis
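
Because the adapters are already merged, the model loads directly with transformers. A minimal text-only usage sketch (the email text, prompt wording, and generation settings are illustrative assumptions):

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

model_id = "trl-algo/summary_tags_qwen2-vl-2b"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Text-only request: ask for a summary and search tags for an email.
email = (
    "Subject: Q3 budget review\n"
    "Hi team, please send your Q3 numbers by Friday so we can finalize the review. Thanks, Dana"
)
messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": f"Summarize this email and suggest search tags:\n\n{email}"}],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```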