# Model Card for sungnyun/Flex-VL-32B-LoRA

Flex-VL-32B-LoRA is a LoRA adapter for Qwen/Qwen2.5-VL-32B-Instruct, loaded with PEFT on top of the base model.
## Usage Instructions

To use this LoRA adapter:
```python
import torch
from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

# Load the base model
base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-32B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, "sungnyun/Flex-VL-32B-LoRA")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("sungnyun/Flex-VL-32B-LoRA")
```
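
Once the adapter is loaded, inference follows the standard Qwen2.5-VL chat workflow. The sketch below assumes the usual `AutoProcessor` + `qwen_vl_utils` pipeline documented for the base model (install with `pip install qwen-vl-utils`); the image path and prompt are placeholders, not part of this adapter.

```python
from transformers import AutoProcessor
from qwen_vl_utils import process_vision_info

# The processor handles both chat templating and image preprocessing
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-32B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/your_image.jpg"},  # placeholder path
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat prompt and extract the vision inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly produced tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```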
### Framework versions

- PEFT 0.15.2
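
Because the adapter runs through the PEFT runtime, you can optionally merge the LoRA weights into the base model for deployment without PEFT. This uses PEFT's standard `merge_and_unload()`; the output path below is just an example.

```python
# Fold the LoRA weights into the base model so inference no longer needs PEFT
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./flex-vl-32b-merged")  # example output path
tokenizer.save_pretrained("./flex-vl-32b-merged")
```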
### Base model

[Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)