---
license: apache-2.0
language:
- zh
- en
base_model:
- huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated
pipeline_tag: image-text-to-text
tags:
- qwen2_5_vl
- multimodal
- abliterated
- uncensored
- text-generation-inference
---
This is a quantized GGUF version of https://huggingface.co/huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated.
Using "--leave-output-tensor" in quantizing to keep output layer precision at FP16.
LM Studio is recommended for deploying this model.
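
Once the model is loaded in LM Studio, its local OpenAI-compatible server can be queried with an image prompt. The sketch below assumes the default port, a placeholder API key, a guessed model identifier, and a local image path; replace these with the values shown in your own LM Studio instance.

```python
import base64
from openai import OpenAI

# Sketch: query the model through LM Studio's local OpenAI-compatible server.
# Port, API key placeholder, model id, and image path are assumptions.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Encode a local image as a base64 data URL so it can be sent inline.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="qwen2.5-vl-7b-instruct-abliterated",  # assumed model id in LM Studio
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```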