This is the Qwen2-VL 7B vision-language model exported to ONNX via https://github.com/pdufour/llm-export.

Also see https://huggingface.co/pdufour/Qwen2-VL-2B-Instruct-ONNX-Q4-F16 for a 2B model that is compatible with onnxruntime-webgpu.

Base model: Qwen/Qwen2-VL-7B (this model is a quantized version).
