---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-14B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- mlx
---
# mlx-community/InternVL3-14B-6bit
This model was converted to MLX format from [`OpenGVLab/InternVL3-14B`](https://huggingface.co/OpenGVLab/InternVL3-14B) using mlx-vlm version **0.1.25**.
Refer to the [original model card](https://huggingface.co/OpenGVLab/InternVL3-14B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/InternVL3-14B-6bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
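
The model can also be called from Python through the mlx-vlm API. The snippet below is a minimal sketch following the usage documented for mlx-vlm 0.1.x; the image path is a placeholder, and argument names may vary slightly between mlx-vlm versions.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/InternVL3-14B-6bit"

# Load the quantized model, its processor, and the model config
model, processor = load(model_path)
config = load_config(model_path)

# One or more image paths or URLs (placeholder path shown here)
image = ["path/to/image.jpg"]
prompt = "Describe this image."

# Format the prompt with the model's chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

# Generate a response conditioned on the image(s)
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```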